REISSUE OFFICE ACTION
The present application is being examined under the pre-AIA first to invent provisions.
This is a reissue Office action for U.S. Patent No. 9,131,257, which included original patent claims 1–56. Applicant requested amendment of the claims on 10/20/2025. Claims 101–105, 111–115, and 125–128 are pending.
Declaration and Reason for Reissue
This Reissue has been filed pursuant to the original patent being at least partly inoperative or invalid by reason of the patentee “claiming more or less than he had a right to claim in the patent,” specifically:
“Applicant claimed less than that to which they are entitled. This is a broadening reissue.
An error arose in the transmitting operation recited in independent claim 1 of the patent. This operation includes “transmitting requested video images to the at least one of the respective one of the plurality of subscribers and a plurality of authorized viewers.” As a result, claim 1 is too narrow to protect the disclosed invention.
In addition, an error arose in the wherein clause in independent claim 28 of the patent. This clause includes “wherein at least one viewing device accesses the image data corresponding to the at least one video capture device via a custom GUI application on the viewing device.” As a result, claim 28 is too narrow to protect the disclosed invention.
In addition, an error arose in the wherein clause in independent claim 52 of the patent. This clause includes “wherein the receiving means receives control signals so that the viewing devices can independently customize viewing orientation of the image data.” As a result, claim 52 is too narrow to protect the disclosed invention.
Claims 1, 28 and 52 of the reissue application correct this error by omitting at least one element included respectively in claims 1, 28 and 52 of US Patent 9,131,257. In addition, claims 3, 10, 17, 40, 42, 53, 55, which are currently pending in the reissue application, variously include the following elements, at least one of which is not recited by originally issued claims:
• transmitting requested video images to the at least one of the respective one of the plurality of subscribers and a plurality of authorized viewers; (see claim 3 as amended)
• wherein the server performs image processing operations on the stored video images offline, wherein the image processing comprises at least one of user control operations of geometric distortion correction, optical distortion correction, flip, rotation, pan, tilt, and zoom operations, wherein each of plural viewers can independently view a customized video image; (see claim 10 as amended)
• wherein at least one video capture device is equipped with an image processing unit, wherein the image processing unit includes at least one of user control operations of geometric distortion correction, optical distortion correction, rotation, flip, pan, tilt, and zoom operations; (see claim 17 as amended)
• wherein at least one video capture device is equipped with at least one of motion sensor and sound sensor to trigger video transmission and is programmed to go dormant when no motion is detected for a period of time; (see claim 40 as amended).
• wherein at least one video capture device is equipped with infrared imagery means and/or includes means to record and store portions of streamed video; (see claim 42 as amended)
• wherein the receiving means receives control signals so that the viewing devices can independently customize viewing orientation of the image data, wherein customizing the viewing orientation includes applying at least one of geometric distortion correction, optical distortion correction, flip, pan, tilt and zoom operations; (see claim 53 as amended)
• wherein at least one of the video capture devices includes a wide-angle lens, a plurality of lenses capable of up to 360° image capture, or a fisheye lens, (see claim 55 as amended).” (5/3/2018 declaration p. 5).
In addition, applicant provided supplemental remarks concerning an error:
“An error arose in the “controlling” operation recited in independent claim 1 of U.S. Patent No. 9,131,257. The “controlling” operation includes “controlling access to the video images through the network according to instructions from the respective one of the plurality of subscribers.” Claim 1 of U.S. Patent No. 9,131,257 is too narrow to protect the disclosed invention, and the Applicant claimed less than that to which the Applicant is entitled in U.S. Patent No. 9,131,257. This is a broadening reissue.” (9/4/2019 Remarks, p. 12).
Claim Interpretation
The following is a quotation of (pre-AIA) 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim(s): 101, 111, 115
112, 6th paragraph / 112(f) phrase: [video capture device equipped with] cellular communication means
Potential corresponding structure: “the communication link between the camera 120 and at least portions of the communication network is preferably of wireless form … for instance … cellular” (4:40–44).

Claim(s): 105
112, 6th paragraph / 112(f) phrase: [viewing device is equipped with] cellular communication means
Potential corresponding structure: “It could be a mobile device such as a personal laptop, a cellular device, tablet, etc., generally referred to as a PDA” (4:28–30).
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claims 101, 105, 111, 113, 115 and 125–126 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over US 8,675,071 (Slavin) in view of US 2010/0333155 (Royall).
101. A method comprising: detecting motion within a view range of a video capture device equipped with image sensing, a speaker, a light, networking, and cellular communication means, and
“Techniques are described for video monitoring and alarm verification technology” (Slavin 1:32–33).
See FIG. 2.
“a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115” (Slavin 3:15–17).
“the camera 230 triggers integrated or external illuminators (e.g., Infra Red, Z-wave controlled “white” lights, lights controlled by the module 222, etc.) to improve image quality when the scene is dark” (Slavin 8:10–13).
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The network module 214 is a communication device configured to exchange communications over the network 205…a wireless data channel and a wireless voice channel. In this example, the network module 214 may transmit alarm data over a wireless data channel and establish a two-way voice communication session over a wireless voice channel. The wireless communication device may include one or more of a GSM module, a radio modem, cellular transmission module” (Slavin 6:63–7:8).
The camera and control unit together represent a video capture device.
Slavin does not appear to explicitly teach that the video capture device is equipped with a speaker. However, Slavin recognizes that home security alarm systems were known to generate an audible alert for a detected security breach:
“In response to an alarm system detecting a security breach, the alarm system may generate an audible alert” (Slavin 1:23–25).
It would have been obvious to one of ordinary skill at the time of the invention to have equipped the video capture device of Slavin with a speaker so that an audible alert can be triggered upon detection of a potential security breach. One of ordinary skill would have recognized that this would encourage an intruder to leave the premises once detected.
the video capture device includes a memory for backup and is configured to record image data into the memory when network connection disruptions occur; recording, by the video capture device, the image data into the memory or sending the image data, in response to the motion being detected; starting, by the video capture device, recording of the image data into the memory in response to network connection disruptions; stopping recording the image data into the memory or stopping sending the image data, in response to the motion being not detected; enabling, by the video capture device, a stop of recording of the image data in response to network reconnection;
Regarding the motion detection:
“The camera 115 also may begin capturing video and initiate establishment of the connection with the mobile phone 130 based on the user 110 triggering a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115. The camera 115 further may begin capturing and locally storing video based on the security system panel 120 detecting the door opening event or trigger of its own internal motion sensor and then initiate establishment of the connection with the mobile phone 130 based on the security system panel 120 detecting the alarm condition” (Slavin 3:13–22).
“a Passive Infra Red (PIR) motion sensor may be built into the camera 230 and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 also may include a microwave motion sensor built into the camera and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 220, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 230 receives a software command to capture an image when external devices detect motion. The camera 230 may receive the software command from the controller 212 or directly from one of the sensors 220” (Slavin 7:62–8:9).
Slavin’s discussion of a “normally open” digital input that can trigger capture of images when the motion sensor detects a motion event indicates that image capture starts with the initiation of the detected event and stops with its conclusion. In this manner the system is able to record each detected event rather than simply recording indefinitely after the first detected event.
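For illustration only, the event-gated start/stop behavior attributed to Slavin’s digital-input trigger may be sketched as follows. This is a minimal hypothetical model, not code from either reference; all class and method names are illustrative assumptions.

```python
# Hypothetical sketch: recording starts on the rising edge of a motion
# input and stops on the falling edge, so each detected event yields its
# own clip rather than one recording that runs indefinitely.

class MotionGatedCamera:
    def __init__(self):
        self.recording = False
        self.clips = []          # completed per-event clips
        self._frames = []        # frames for the clip in progress

    def on_motion_input(self, asserted: bool):
        """Mimics a 'normally open' digital input: assertion starts
        capture, de-assertion stops it and stores the clip."""
        if asserted and not self.recording:
            self.recording = True
            self._frames = []
        elif not asserted and self.recording:
            self.recording = False
            self.clips.append(self._frames)

    def capture_frame(self, frame):
        if self.recording:
            self._frames.append(frame)


cam = MotionGatedCamera()
cam.on_motion_input(True)    # motion detected: start recording
cam.capture_frame("f1")
cam.capture_frame("f2")
cam.on_motion_input(False)   # motion ends: stop and store the clip
cam.capture_frame("f3")      # ignored: no active event
```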
Regarding the network disruption:
Slavin’s video capture device includes a local memory into which image data is stored, and this capability exists even during network disruptions. Having image data stored such that it can be later retrieved inherently provides a backup. Slavin, however, does not teach starting local recording in response to a network disruption.
Royall “relates to the selective use of non-volatile storage in conjunction with transmission of content” (¶ 0002) and in particular teaches using local camera storage as a backup during network disruptions:
“a video camera may send video data to a server or other client. If the network becomes unavailable, the camera will store the video in a local flash memory and when the network becomes available, the camera can transmit the video from the flash memory to the server” (Royall at Abstract).
“a network may be malfunctioning or a client computing device may be busy with another tasks. In these instances, it is important that the content to be transferred is not lost” (Royall ¶ 0005).
“while the network is functional that content can be successfully streamed to the destination. If the network becomes unavailable, then the content is stored in local non-volatile storage system until the network becomes available. When the network becomes available, the content on the non-volatile storage system will be transmitted to the destination in addition to newly created content” (Royall ¶ 0025).
“non-volatile storage 206 is a flash memory card that can be inserted and removed from computer 204. Example formats for flash memory cards include Compact Flash, Smart Media, SD cards, mini SD cards, micro SD cards, memory sticks, XD carsd, as well as other formats” (Royall ¶ 0034).
Royall therefore teaches starting to store content locally at the camera storage as a response to a network disruption. This accomplishes a backup operation. Royall specifically teaches storing content locally “until the network becomes available” (at ¶ 0025) which represents the stopping of the local recording as a response to a network reconnection.
It would have been obvious to one of ordinary skill at the time of the invention to have modified Slavin such that local storage of content is started when the network becomes unavailable and stopped when it becomes available again, such that this stored content can serve as a backup. One of ordinary skill would have recognized the benefit of doing so in the context of Slavin’s operation – content would not be lost in the instance of network malfunction. The content could be subsequently made available from the locally stored backup.
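For illustration only, the combined Slavin/Royall behavior described above may be sketched as follows: content streams to the destination while the network is up, is buffered to local non-volatile storage upon a disruption, and the buffered backlog is flushed upon reconnection. This is a hypothetical model; all names are illustrative and are not drawn from either reference.

```python
# Hypothetical sketch of the proposed combination: local recording starts
# when the network becomes unavailable (backup) and the backlog is
# transmitted, then local recording stops, when the network returns.

class BackupStreamer:
    def __init__(self):
        self.network_up = True
        self.local_store = []   # stands in for flash/SD backup storage
        self.sent = []          # stands in for content received downstream

    def on_network_change(self, up: bool):
        self.network_up = up
        if up:
            # Reconnection: transmit the backlog, then clear local storage.
            self.sent.extend(self.local_store)
            self.local_store.clear()

    def submit(self, content):
        if self.network_up:
            self.sent.append(content)
        else:
            # Disruption: record locally as a backup so nothing is lost.
            self.local_store.append(content)


s = BackupStreamer()
s.submit("a")                 # network up: streamed directly
s.on_network_change(False)    # disruption begins
s.submit("b")                 # buffered locally
s.submit("c")
s.on_network_change(True)     # reconnection: backlog flushed
s.submit("d")
```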
receiving the image data from the video capture device via at least one cellular communication link to a network; managing the received image data and enabling at least one viewing device to interact with the image data sent from the video capture device over the network,
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal)” (Slavin 8:49–58).
“The system 200 detects that received image data has not been acknowledged within a first period of time from receipt (710). For instance, the system 200 determines that, although the image data has been received by a device, a user of the device has not acknowledged the image data in anyway” (Slavin 19:67–20:5).
“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640. The mobile phone 640 also displays a list of virtual buttons 642 and 643 that the mom user can activate to initiate sharing of the live video to one or more other devices. The mobile phone 640 further displays a start button 645 and a stop button 646 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 640” (Slavin 17:25–33).
the at least one viewing device being temporarily allowed access to the image data over the network without being an authorized user; and
“the one or more user devices 240, 250 initiate the sharing connection based on a user input command entered by a user after reviewing video or image data from the camera 230. In these implementations, the one or more user devices 240, 250 may provide the one or more third party devices 270, 280 with information needed to establish the sharing connection in response to the user input command. For instance, the information may include a link that opens a portal that displays a shared image of a portal screen (e.g., a customer web/mobile portal screen) that allows the user operating the user device to control what data/video/image is shared with the recipient operating the third party device. The information also may include credentials, such as a password, a machine token, etc. that the third party device can use to be authenticated to the user device. The credentials may be temporary or one-time access credentials to prevent the third party device from using the credentials to gain access to the monitoring system at a later date” (Slavin 12:22–39).
transmitting requested image data to the at least one viewing device,
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640” (Slavin 17:25–27).
wherein the at least one viewing device accesses the image data corresponding to the video capture device via a custom GUI application on the viewing device.
“The one or more user devices 240, 250 are devices that host and display user interfaces. For instance, the user device 240 is a mobile device that hosts one or more native applications (e.g., the native surveillance application 242)” (Slavin 8:64–67).
105. The method of claim 101, wherein the at least one viewing device is equipped with cellular communication means and is connected to a cellular network.
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“the user device 240 is a mobile device that hosts one or more native applications (e.g., the native surveillance application 242). The user device 240 may be a cellular phone)” (Slavin 8:65–9:1).
111. A system comprising: a video capture device equipped with image sensing, a speaker, a light, networking, and cellular communication means, and
See FIG. 2.
“a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115” (Slavin 3:15–17).
“the camera 230 triggers integrated or external illuminators (e.g., Infra Red, Z-wave controlled “white” lights, lights controlled by the module 222, etc.) to improve image quality when the scene is dark” (Slavin 8:10–13).
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The network module 214 is a communication device configured to exchange communications over the network 205…a wireless data channel and a wireless voice channel. In this example, the network module 214 may transmit alarm data over a wireless data channel and establish a two-way voice communication session over a wireless voice channel. The wireless communication device may include one or more of a GSM module, a radio modem, cellular transmission module” (Slavin 6:63–7:8).
The camera and control unit together represent a video capture device.
Slavin does not appear to explicitly teach that the video capture device is equipped with a speaker. However, Slavin recognizes that home security alarm systems were known to generate an audible alert for a detected security breach:
“In response to an alarm system detecting a security breach, the alarm system may generate an audible alert” (Slavin 1:23–25).
It would have been obvious to one of ordinary skill at the time of the invention to have equipped the video capture device of Slavin with a speaker so that an audible alert can be triggered upon detection of a potential security breach. One of ordinary skill would have recognized that this would encourage an intruder to leave the premises once detected.
the video capture device includes a memory for backup and is configured to record image data into the memory when network connection disruptions occur, wherein the video capture device is configured to detect motion within a view range of the video capture device, record the image data into the memory or send the image data, in response to the motion being detected, start recording of the image data into the memory in response to network connection disruptions, stop recording the image data into the memory or stop sending the image data, in response to the motion being not detected; and enable a stop of recording of the image data in response to network reconnection;
Regarding the motion detection:
“The camera 115 also may begin capturing video and initiate establishment of the connection with the mobile phone 130 based on the user 110 triggering a motion sensor (e.g., a Passive Infrared Motion detector) included in the camera 115. The camera 115 further may begin capturing and locally storing video based on the security system panel 120 detecting the door opening event or trigger of its own internal motion sensor and then initiate establishment of the connection with the mobile phone 130 based on the security system panel 120 detecting the alarm condition” (Slavin 3:13–22).
“a Passive Infra Red (PIR) motion sensor may be built into the camera 230 and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 also may include a microwave motion sensor built into the camera and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 220, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 230 receives a software command to capture an image when external devices detect motion. The camera 230 may receive the software command from the controller 212 or directly from one of the sensors 220” (Slavin 7:62–8:9).
Slavin’s discussion of a “normally open” digital input that can trigger capture of images when the motion sensor detects a motion event indicates that image capture starts with the initiation of the detected event and stops with its conclusion. In this manner the system is able to record each detected event rather than simply recording indefinitely after the first detected event.
Regarding the network disruption:
Slavin’s video capture device includes a local memory into which image data is stored, and this capability exists even during network disruptions. Having image data stored such that it can be later retrieved inherently provides a backup. Slavin, however, does not teach starting local recording in response to a network disruption.
Royall “relates to the selective use of non-volatile storage in conjunction with transmission of content” (¶ 0002) and in particular teaches using local camera storage as a backup during network disruptions:
“a video camera may send video data to a server or other client. If the network becomes unavailable, the camera will store the video in a local flash memory and when the network becomes available, the camera can transmit the video from the flash memory to the server” (Royall at Abstract).
“a network may be malfunctioning or a client computing device may be busy with another tasks. In these instances, it is important that the content to be transferred is not lost” (Royall ¶ 0005).
“while the network is functional that content can be successfully streamed to the destination. If the network becomes unavailable, then the content is stored in local non-volatile storage system until the network becomes available. When the network becomes available, the content on the non-volatile storage system will be transmitted to the destination in addition to newly created content” (Royall ¶ 0025).
“non-volatile storage 206 is a flash memory card that can be inserted and removed from computer 204. Example formats for flash memory cards include Compact Flash, Smart Media, SD cards, mini SD cards, micro SD cards, memory sticks, XD carsd, as well as other formats” (Royall ¶ 0034).
Royall therefore teaches starting to store content locally at the camera storage as a response to a network disruption. This accomplishes a backup operation. Royall specifically teaches storing content locally “until the network becomes available” (at ¶ 0025) which represents the stopping of the local recording as a response to a network reconnection.
It would have been obvious to one of ordinary skill at the time of the invention to have modified Slavin such that local storage of content is started when the network becomes unavailable and stopped when it becomes available again, such that this stored content can serve as a backup. One of ordinary skill would have recognized the benefit of doing so in the context of Slavin’s operation – content would not be lost in the instance of network malfunction. The content could be subsequently made available from the locally stored backup.
a server configured to receive the image data from the video capture device via at least one communication link to the network, manage the received image data and to enable at least one viewing device to interact with the image data sent from the video capture device over the network,
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The monitoring application server 260 may store sensor and image/video data received from the monitoring system…The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal)” (Slavin 8:49–58).
“The system 200 detects that received image data has not been acknowledged within a first period of time from receipt (710). For instance, the system 200 determines that, although the image data has been received by a device, a user of the device has not acknowledged the image data in anyway” (Slavin 19:67–20:5).
“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640. The mobile phone 640 also displays a list of virtual buttons 642 and 643 that the mom user can activate to initiate sharing of the live video to one or more other devices. The mobile phone 640 further displays a start button 645 and a stop button 646 that the mom user can activate to control recording of the live video on electronic storage of the mobile phone 640” (Slavin 17:25–33).
the at least one viewing device being temporarily allowed access to the image data over the network without being an authorized user, and
“the one or more user devices 240, 250 initiate the sharing connection based on a user input command entered by a user after reviewing video or image data from the camera 230. In these implementations, the one or more user devices 240, 250 may provide the one or more third party devices 270, 280 with information needed to establish the sharing connection in response to the user input command. For instance, the information may include a link that opens a portal that displays a shared image of a portal screen (e.g., a customer web/mobile portal screen) that allows the user operating the user device to control what data/video/image is shared with the recipient operating the third party device. The information also may include credentials, such as a password, a machine token, etc. that the third party device can use to be authenticated to the user device. The credentials may be temporary or one-time access credentials to prevent the third party device from using the credentials to gain access to the monitoring system at a later date” (Slavin 12:22–39).
transmit requested image data to the at least one viewing device,
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
“The mobile phone 640 of the mom user includes a video display area 641 that displays the live video captured by the first camera 610 on the mobile phone 640” (Slavin 17:25–27).
wherein the at least one viewing device accesses the image data corresponding to the video capture device via a custom GUI application on the viewing device.
“The one or more user devices 240, 250 are devices that host and display user interfaces. For instance, the user device 240 is a mobile device that hosts one or more native applications (e.g., the native surveillance application 242)” (Slavin 8:64–67).
113. The system of claim 111, wherein the at least one viewing device is a handheld device.
“the user device 240 is a mobile device that hosts one or more native applications (e.g., the native surveillance application 242). The user device 240 may be a cellular phone” (Slavin 8:65–9:1).
115. The system of claim 111, wherein the at least one viewing device is equipped with cellular communication means and is connected to a cellular network.
“the user device 240 is a mobile device that hosts one or more native applications (e.g., the native surveillance application 242). The user device 240 may be a cellular phone” (Slavin 8:65–9:1).
“the system 200 further includes network 205 and the sensors 220, the module 222, and the camera 230 are configured to communicate sensor and image/video data to the one or more user devices 240, 250 over network 205 (e.g., the Internet, cellular network, etc.)” (Slavin 10:48–52).
125. The method of claim 101, wherein the stopping recording or the stopping sending is performed by programming.
“Apparatus implementing these techniques may include appropriate input and output devices, a computer processor, and a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor” (Slavin 22:20–24).
“The camera 230 may be triggered by several different types of techniques. For instance, a Passive Infra Red (PIR) motion sensor may be built into the camera 230 and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 also may include a microwave motion sensor built into the camera and used to trigger the camera 230 to capture one or more images when motion is detected. The camera 230 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 220, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 230 receives a software command to capture an image when external devices detect motion. The camera 230 may receive the software command from the controller 212 or directly from one of the sensors 220” (Slavin 7:61–8:9).
126. The system of claim 111, wherein the video capture device is programmed to stop recording the image data into the memory or stop sending the image data.
See claim 125.
Claim 114 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Royall in view of US 2006/0221183 (Sham).
114. The system of claim 111, wherein the video capture device includes a fisheye lens.
Slavin does not describe the particular lens for the camera. Sham teaches a security system with a video camera which includes a fisheye lens:
“The invention relates to security surveillance and monitoring systems, particularly to an assembly for installing and maintaining a camera and microphone at the entrance door of a house or at an interior door, with video and audio monitoring and recording capabilities. In recent years, surveillance systems have been used for monitoring a visitor at the entrance door of a house” (Sham ¶ 0003).
“The angle of the peephole lens 8 depends on the image sensor of the camera 1, the distance from the camera 1 to the peephole lens 8, the type of lens employed as the peephole lens (preferably a wide angle or fish-eye type lens) and the desired angle of viewing at the exterior of the premises to be monitored” (Sham ¶ 0042).
It would have been obvious to one of ordinary skill at the time of the invention to have employed such a lens, including a fisheye lens, with the system of Slavin in order to provide a wide angle of viewing at (for example) a front door of the premises being monitored.
Claims 127–128 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Royall in view of US 2010/0195810 (Mota).
127. The method of claim 101, further comprising: detecting sound; recording the image data into the memory or sending the image data, in response to the sound being detected; stopping recording the image data into the memory or stopping sending the image data, in response to the sound being not detected.
Slavin teaches video capture to be triggered on detected motion events, but not on detected sound events. Mota teaches a security system with a video camera which is triggered on detected motion and/or sound events:
“a security system having a camera, a sensor generating a signal in response to a triggering event, and a management module. The triggering event is one of…sound detection, motion detection…The management module is adapted to send data to be received by a remote communication device upon generation by the sensor of the signal generated in response to the triggering event” (Mota ¶ 0014).
It would have been obvious to one of ordinary skill at the time of the invention to have included a sound detection sensor with the system of Slavin in order to trigger video capture in response to sounds. Doing so would allow the users of Slavin to investigate the monitored area in the event of a suspicious sound.
As stated above, Slavin’s image capture starts in association with the initiation of a detected event and stops in association with its conclusion. Triggering video capture on a detected sound event in the combination would likewise start and stop capture in association with individual sound events.
128. The system of claim 111, wherein the video capture device is configured to detect sound; record the image data into the memory or send the image data, in response to the sound being detected; stop recording the image data into the memory or stop sending the image data, in response to the sound being not detected.
See claim 127.
Claims 102 and 112 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Royall in view of US 2007/0024706 (Brannon).
102. The method of claim 101, wherein each of a plurality of viewing devices customizes viewing by independently applying a set of user control instructions to the image data corresponding to the video capture device.
Slavin does not describe independent user control of the image data in order to independently view portions of the video image.
Brannon, however, teaches a video distribution system in which client viewers may each independently control aspects of the received video event scene for customized viewing/display:
“Systems and methods for providing high-quality region of interest (HQ-ROI) viewing within an overall scene by enabling one or more HQ-ROIs to be viewed in a controllable fashion” (Brannon at ABSTRACT).
Brannon notes a disadvantage of conventional video distribution: a change in viewing parameters affects all viewers. Brannon further teaches improvements whereby each viewer may issue control commands to manipulate their own custom view of the event without affecting others:
“The disclosed systems and methods may be implemented in one embodiment to enable optimized simultaneous viewing of multiple video sources for each individual viewing client. This is in contrast to conventional video viewing systems . . . standard single-stream camera sources . . . are designed such that a configuration change for any of the above parameters affects all viewers” (Brannon ¶ 0033).
“a multi-stream video source may be optionally configured with the ability to spatially move the reference coordinates of an ROI stream within the scene's overall image, e.g., via some set of suitable control commands such as those implemented for Pan-Tilt-Zoom (PTZ) cameras. The ability to perform the ROI control logic may be implemented, for example, at a viewing application” (Brannon ¶ 0030).
“video source 102 may be configured to accept commands (e.g., ‘Pan and Tilt’ commands) that allow the client viewing application 122 to move the spatial coordinates of the 320H×180V HQ-ROI view/stream around within the scene” (Brannon ¶ 0093).
“multi-stream HQ-ROI viewing capability may be implemented . . . to deliver two or more video streams to one or more viewing clients via a network medium. For example, as previously mentioned, a video source component and video access component may be separate components or integrated together as a single device, e.g., camera and stream server components may be one device.” (Brannon ¶ 0087).
“Video access component . . . to communicate these multiple digital video streams (not shown separately in FIG. 3) across computer network medium 112 to multiple viewing clients 120a through 120n” (Brannon ¶ 0060).
It would have been obvious to one of ordinary skill to have provided viewers of Slavin the ability to independently send control instructions for manipulating the video image so that each user can independently review the video of the event in a customized manner (e.g., panning, tilting, and/or zooming). Doing so would have allowed each viewer to customize the video content for their own reviewing purposes.
112. The system of claim 111, wherein the at least one viewing device is configured to customize the viewing orientation by applying at least one of geometric distortion correction, optical distortion correction, flip, pan, tilt, and zoom operations.
See claim 102.
Claim 103 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Royall in view of Official Notice and Brannon.
103. The method of claim 101, wherein a server records and stores the received image data, wherein the server performs image processing operations on the stored image data, and the image processing comprises at least one of user control operations of geometric distortion correction, optical distortion correction, flip, rotation, pan, tilt, and zoom of the image data, wherein each of plural viewers independently views a different orientation and portion of the image data.
“The monitoring application server 260 may store sensor and image/video data received from the monitoring system and perform analysis of sensor and image/video data received from the monitoring system. Based on the analysis, the monitoring application server 260 may communicate with and control aspects of the monitoring system control unit 210 or the one or more user devices 240, 250. The monitoring application server 260 also may make images/video captured by the camera 230 available to the one or more user devices 240, 250 over the network 205 (e.g., through a web portal). In this regard, the one or more user devices 240, 250 may display images/video captured by the camera 230 from a remote location. This enables a user to perceive images/video of the user's property from a remote location and verify whether or not an alarm event is occurring at the user's property” (Slavin 8:49–63).
Slavin teaches recording and storing of the video at the server, where the server can analyze the video. Slavin also teaches that different users may review the video content. Slavin does not appear to teach that the server processes the stored video and performs image processing operations such as offering pan/tilt/zoom (PTZ) features. Official Notice is taken and Applicant admits that such image processing operations were well known in the video art:
“the processing power of the cloud server is further increased to accommodate extra user control instructions, more notably operations such as pan/tilt/zoom (PTZ), horizontal/vertical flip and rotation. Additionally, the optional image processing unit integrated to the device may be enabled to perform the flip, rotation and PTZ operations locally at the camera. These types of processing are known to those skilled in the art, such as U.S. Pat. No. 7,324,706. Other examples include U.S. Pat. Nos. 7,474,799; 7,576,767 and 8,055,070” (US Patent 9,131,257 at).
It would have been obvious to one of ordinary skill to have provided such image processing capabilities with the system components of Slavin in order to offer more revealing and customized video content for viewing/reviewing. It would have been obvious to one of ordinary skill at the time of the invention that such processing could have been provided at various locations throughout the system with predictable results, including at the server. One of ordinary skill would have recognized the predictable tradeoffs in choosing to locate this processing at the server vs. at other acceptable locations (such as at the video capture device itself).
Slavin does not describe independent user control of the image data in order to independently view portions of the video image. Brannon, however, teaches a video distribution system in which client viewers may each independently control aspects of the received video for customized viewing/display, and it would have been obvious to combine Brannon with Slavin for the reasons given in the rejection of claim 102 above.
Claim 104 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Slavin and Royall in view of Official Notice.
104. The method of claim 101, wherein the video capture device is equipped with an image processing unit, wherein the image processing unit includes at least one of user control operations of geometric distortion correction, optical distortion correction, rotation, flip, pan, tilt, and zoom operations.
Slavin does not appear to teach that the video capture device is equipped with an image processing unit performing operations such as pan/tilt/zoom (PTZ). Official Notice is taken and Applicant admits that such image processing operations were well known in the video art:
“the processing power of the cloud server is further increased to accommodate extra user control instructions, more notably operations such as pan/tilt/zoom (PTZ), horizontal/vertical flip and rotation. Additionally, the optional image processing unit integrated to the device may be enabled to perform the flip, rotation and PTZ operations locally at the camera. These types of processing are known to those skilled in the art, such as U.S. Pat. No. 7,324,706. Other examples include U.S. Pat. Nos. 7,474,799; 7,576,767 and 8,055,070” (US Patent 9,131,257 at).
It would have been obvious to one of ordinary skill to have provided such image processing capabilities with the system components of Slavin in order to offer more revealing and customized video content for viewing. It would have been obvious to one of ordinary skill at the time of the invention that such processing could have been provided at various locations throughout the system with predictable results, including at the video capture device itself. One of ordinary skill would have recognized the predictable tradeoffs in choosing to locate this processing at the video capture device vs. at other acceptable locations (such as at the server).