DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments received December 23, 2025 have been fully considered. With regard to 35 U.S.C. § 103, Applicant argues that the cited prior art does not disclose the claimed features (see Applicant's arguments, pages 7-13). These arguments correspond to claims 1-25, and specifically to the independent claims.
These arguments have been considered but are not persuasive, as addressed below. See the rejections for how the art of record reads on the claimed invention, as well as the examiner's interpretation of the cited art in view of the presented claim set, as outlined below. Furthermore, as outlined previously, Lu teaches:
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2, Joint Encoding Controller 208, emphasis added.
The joint encoding controller 208 determines or considers the storage constraints, including the required quality, the length of time the video is to be archived for, and the available capacity in storage device(s) 108. The joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source; otherwise, if data were encoded beyond the required bit rate, the storage capacity would likely run out. Therefore, Lu teaches the claimed "…determine whether the storage device will run out of storage capacity based on an operating parameter of the camera and a retention policy associated with the video content stored in the storage device," since controller 208 makes its determination using the storage constraints in order to keep the stored data within capacity. Furthermore, see Lu para. 0039: "The input streams are encoded using the appropriate encoding parameters (e.g., bit rate) based on control of encoders 204 by joint encoding controller 208. The N input data streams S.sub.1, S.sub.2, . . . , S.sub.N, are encoded to N outputs O.sub.1, O.sub.2, . . . , O.sub.N. The outputs of the encoders are fed to buffer and output formatter 210 and stored in storage 108. The cost calculations needed to make the encoding decision can be made by controller 208 based upon the statistics from estimator 206 and the storage constraints input at 212." As such, the examiner maintains the rejection, since controller 208 makes a determination according to the storage constraints provided.
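The determination the examiner attributes to controller 208 can be illustrated with a minimal sketch (not code from Lu or Kaushik; all names and values are hypothetical): given a camera's bit rate (an operating parameter) and an archival period (a retention policy), one can compute whether the required storage exceeds the available capacity.

```python
# Hypothetical sketch of a capacity-exhaustion check based on a camera's
# bit rate (operating parameter) and retention period (retention policy).
# Not from the cited references; names and values are illustrative only.

def will_run_out(bit_rate_bps: float, retention_days: float,
                 available_bytes: float) -> bool:
    """Return True if storing `retention_days` of video recorded at
    `bit_rate_bps` would exceed the available capacity."""
    seconds = retention_days * 24 * 60 * 60
    required_bytes = bit_rate_bps / 8 * seconds  # bits/s -> bytes over period
    return required_bytes > available_bytes

# Example: a 4 Mbps camera with 30-day retention needs ~1.3 TB,
# which exceeds 1 TB of free capacity.
print(will_run_out(4e6, 30, 1e12))  # True
```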
With regard to Applicant's argument concerning claim 16, the examiner maintains the rejection. In addition to the response to claim 1 as outlined above, Lu does provide the required storage constraints, such as the required quality, the length of time the video is to be archived for, and the available capacity in the storage device. At least the quality, the length of time the video is to be archived, and the available storage capacity are used in order to process data for storage.
With regard to Applicant's argument concerning claim 19, the examiner maintains the rejection for reasons similar to those provided for claims 1 and 16 as outlined above, and under the § 103 rejections as outlined below.
VI. PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS
A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984) (Claims were directed to a process of producing a porous article by expanding shaped, unsintered, highly crystalline poly(tetrafluoroethylene) (PTFE) by stretching said PTFE at a 10% per second rate to more than five times the original length. The prior art teachings with regard to unsintered PTFE indicated the material does not respond to conventional plastics processing, and the material should be stretched slowly. A reference teaching rapid stretching of conventional plastic polypropylene with reduced crystallinity combined with a reference teaching stretching unsintered PTFE would not suggest rapid stretching of highly crystalline PTFE, in light of the disclosures in the art that teach away from the invention, i.e., that the conventional polypropylene should have reduced crystallinity before stretching, and that PTFE should be stretched slowly.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-14, 16-20 and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 2011/0050895, hereinafter "Lu") in view of Kaushik et al. (US 2021/0034278, hereinafter "Kaushik").
Regarding claim 1, Lu teaches:
1. A video surveillance system, comprising: video content generated by a camera;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 camera 102, emphasis added.
a storage device;
Lu, 0020 and Fig. 1 storage device 108
and a computing system adapted to receive the video content from the camera;
Lu, 0020 and Fig. 1 Encoder Engine 104
the computing system adapted to store the video content in the storage device;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 Encoder Engine 104 and Storage Device 108, emphasis added.
the computing system adapted to determine whether the storage device will run out of storage capacity based on an operating parameter of the camera and a retention policy associated with the video content stored in the storage device;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2, Joint Encoding Controller 208, emphasis added.
However, Lu fails to explicitly teach the following limitation, which Kaushik teaches:
and the computing system adapted to generate an alert in response to determining that the storage device will run out of storage capacity.
[0030] In some embodiments, the storage resource capacity modeling framework 102 generates alerts and notifications that are provided over network 108 to client devices 104, or to a system administrator, information technology (IT) manager, or other authorized personnel via one or more host agents. Such host agents may be implemented via computing or processing devices associated with a system administrator, IT manager or other authorized personnel. Such devices can illustratively comprise mobile telephones, laptop computers, tablet computers, desktop computers, or other types of computers or processing devices configured for communication over network 108 with the storage resource capacity modeling framework 102. For example, a given host agent may comprise a mobile telephone equipped with a mobile application configured to receive alerts from the storage resource capacity modeling framework 102 and to provide an interface for the host agent to select particular remedial measures for responding to the alert or notification. Examples of such remedial measures may include altering the provisioning of storage resources for a particular user. This may include provisioning or allocating additional storage resources to a particular user (e.g., in response to a notification or alert indicating that the currently provisioned storage resource capacity for the user will be exceeded at some designated time, or that the amount of available or free storage resources allocated to the user will fall below some designated threshold, etc.). This may alternatively include removing storage resources from a set of provisioned storage resources of a particular user (e.g., in response to a notification or alert indicating under-utilization of the set of provisioned storage resources). 
In some cases, the remedial measure may include migrating data stored in a set of provisioned storage resources in response to an alert or notification (e.g., from a first set of storage systems to a second set of storage systems, where the first and second sets of storage systems may have different performance characteristics, capacity, etc.).
Kaushik, 0030, 0044 and 0059, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaushik with the system of Lu so that the computing system is adapted to generate an alert in response to determining that the storage device will run out of storage capacity, as doing so would enable efficient allocation and provisioning of storage resources (Kaushik, para. 0003).
Note: The motivation that was applied to claim 1 above applies equally to claims 2-25 as presented below.
Regarding claim 2, Lu and Kaushik teach:
2. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the computing system is further adapted to: determine a remaining storage capacity of the storage device;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, emphasis added.
Furthermore, Kaushik teaches: and determine when the storage device will run out of storage capacity based on the remaining storage capacity of the storage device,
[0044] The FIG. 2 process concludes with modifying a provisioning of storage resources of the one or more storage systems based at least in part on the overall storage resource capacity prediction in step 208. Step 208 may include determining a given one of a plurality of different time ranges when available storage resources of the one or more storage systems is expected to fall below a designated threshold based on the overall storage resource capacity prediction. Modifying the provisioning of the storage resources of the one or more storage systems may be based at least in part on the given time range when the available storage resources of the one or more storage systems is expected to fall below the designated threshold. When the given time range is within a designated time from a current time, modifying the provisioning may comprise increasing storage resources of the one or more storage systems. When the given time range is greater than the designated time from a current time, modifying the provisioning may comprise generating an alert indicating an expected time when the available storage resources of the one or more storage systems is expected to fall below the designated threshold.
Kaushik, 0030, 0044 and 0059, emphasis added.
Furthermore, Lu teaches: the operating parameter of the camera, and the retention policy associated with video content stored in the storage device.
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, emphasis added.
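The prediction-and-remediation logic the examiner cites from Kaushik para. 0044 (determine a time range when available storage is expected to fall below a threshold, then either provision more storage or generate an alert depending on how near that time is) can be sketched as follows. This is an illustrative reading only; the function and parameter names are not Kaushik's.

```python
# Hypothetical sketch of the decision described in Kaushik 0044: estimate
# when free capacity falls below a designated threshold, then provision
# additional storage (near term) or generate an alert (longer term).
# All names and the 7-day cutoff are illustrative assumptions.

def plan_remediation(free_bytes: float, net_growth_bytes_per_day: float,
                     threshold_bytes: float, near_term_days: float = 7) -> str:
    if net_growth_bytes_per_day <= 0:
        return "no-action"  # usage is flat or shrinking
    days_until = (free_bytes - threshold_bytes) / net_growth_bytes_per_day
    if days_until <= near_term_days:
        return "provision-more-storage"
    return f"alert: threshold expected in ~{days_until:.0f} days"

print(plan_remediation(5e11, 1e11, 1e11))  # provision-more-storage (~4 days)
print(plan_remediation(5e12, 1e11, 1e11))  # alert: threshold in ~49 days
```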
Regarding claim 3, Lu and Kaushik teach:
3. The video surveillance system of claim 2, furthermore, Kaushik teaches: wherein the alert indicates when the storage device will run out of storage capacity.
Kaushik, 0030, 0044 and 0059
Regarding claim 4, Lu and Kaushik teach:
4. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the operating parameter of the camera includes a target quality for video content generated by the camera.
Lu, 0026, 0052
Regarding claim 5, Lu and Kaushik teach:
5. The video surveillance system of claim 4, furthermore, Lu teaches: wherein the target quality for video content is derived from one or more proxy values including at least one of a frame rate value for video content generated by the camera, a resolution value for video content generated by the camera, a field of view of the camera, or a position of the camera.
Lu, 0026, 0052
Regarding claim 6, Lu and Kaushik teach:
6. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the operating parameter of the camera includes an amount by which a quality of the video content generated by the camera deviates from a target quality for video content generated by the camera.
Lu, 0026
Regarding claim 7, Lu and Kaushik teach:
7. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the operating parameter of the camera includes at least one of a bit rate or a frame rate at which the camera generates video content.
Lu, 0026, 0052
Regarding claim 8, Lu and Kaushik teach:
8. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the operating parameter of the camera includes a rate at which the camera transmits video content to the computing system.
Lu, 0026, 0052
Regarding claim 9, Lu and Kaushik teach:
9. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the retention policy includes an amount of time for which the video content generated by the camera remains stored in the storage device.
Lu, 0026, 0052
Regarding claim 10, Lu and Kaushik teach:
10. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the camera is a first camera of a plurality of cameras;
Lu, 0020 and Fig. 1 Camera 102
wherein the retention policy comprises a first amount of time for which first video content generated by the first camera remains stored in the storage device;
Lu, 0026
and wherein the retention policy comprises a second amount of time for which second video content generated by a second camera included in the plurality of cameras remains stored in the storage device.
Lu, 0026
Regarding claim 11, Lu and Kaushik teach:
11. The video surveillance system of claim 1, furthermore, Kaushik teaches: wherein the alert indicates the remaining capacity of the storage device.
Kaushik, 0030, 0044 and 0059
Regarding claim 12, Lu and Kaushik teach:
12. The video surveillance system of claim 1, furthermore, Kaushik teaches: wherein the alert includes a recommendation for preserving the remaining capacity of the storage device.
Kaushik, 0030, 0044 and 0059
Regarding claim 13, Lu and Kaushik teach:
13. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the storage device is a local server that is connected to the computing system via a local network connection.
Lu, 0033
Regarding claim 14, Lu and Kaushik teach:
14. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the storage device is a remote server that is connected to the computing system via an Internet connection.
Lu, 0067
Regarding claim 16, Lu teaches:
16. A video surveillance system, comprising: a plurality of cameras adapted to generate video content;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 camera 102, emphasis added.
a storage device;
Lu, 0020 and Fig. 1 storage device 108
and a computing system adapted to receive video content from the plurality of video cameras;
Lu, 0020 and Fig. 1 Encoder Engine 104
the computing system adapted to store the video content in the storage device;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 Encoder Engine 104 and Storage Device 108, emphasis added.
the computing system adapted to determine a first rate at which video content is stored in the storage device based on one or more operating parameters of the plurality of cameras;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2, Joint Encoding Controller 208, emphasis added.
the computing system adapted to determine a second rate at which video content is removed from the storage device based on one or more retention policies associated with video content stored in the storage device;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2, Joint Encoding Controller 208, emphasis added.
the computing system adapted to determine whether the storage device will run out of capacity based in part on a comparison between the first rate and the second rate;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2, Joint Encoding Controller 208, emphasis added.
However, Lu fails to explicitly teach the following limitation, which Kaushik teaches:
and responsive to determining that the storage device will run out of capacity, the computing system adapted to generate an alert that indicates the storage device will run out of storage capacity.
[0030] In some embodiments, the storage resource capacity modeling framework 102 generates alerts and notifications that are provided over network 108 to client devices 104, or to a system administrator, information technology (IT) manager, or other authorized personnel via one or more host agents. Such host agents may be implemented via computing or processing devices associated with a system administrator, IT manager or other authorized personnel. Such devices can illustratively comprise mobile telephones, laptop computers, tablet computers, desktop computers, or other types of computers or processing devices configured for communication over network 108 with the storage resource capacity modeling framework 102. For example, a given host agent may comprise a mobile telephone equipped with a mobile application configured to receive alerts from the storage resource capacity modeling framework 102 and to provide an interface for the host agent to select particular remedial measures for responding to the alert or notification. Examples of such remedial measures may include altering the provisioning of storage resources for a particular user. This may include provisioning or allocating additional storage resources to a particular user (e.g., in response to a notification or alert indicating that the currently provisioned storage resource capacity for the user will be exceeded at some designated time, or that the amount of available or free storage resources allocated to the user will fall below some designated threshold, etc.). This may alternatively include removing storage resources from a set of provisioned storage resources of a particular user (e.g., in response to a notification or alert indicating under-utilization of the set of provisioned storage resources). 
In some cases, the remedial measure may include migrating data stored in a set of provisioned storage resources in response to an alert or notification (e.g., from a first set of storage systems to a second set of storage systems, where the first and second sets of storage systems may have different performance characteristics, capacity, etc.).
Kaushik, 0030, 0044 and 0059, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kaushik with the system of Lu so that the computing system is adapted to generate an alert that indicates the storage device will run out of storage capacity, as doing so would enable efficient allocation and provisioning of storage resources (Kaushik, para. 0003).
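Claim 16's comparison between a first rate (video content being stored) and a second rate (video content being removed under retention policies) can be illustrated with a minimal sketch. This is not code from either reference; the names and figures are hypothetical. The point of the comparison is that when ingest exceeds removal, the net stored volume grows and the device will eventually exhaust its capacity.

```python
# Illustrative sketch only: compare the rate at which video is stored
# against the rate at which it is removed under retention policies, and
# estimate whether (and when) the storage device will run out of capacity.
# Names and numbers are hypothetical, not from Lu or Kaushik.

def capacity_outlook(free_bytes: float, ingest_bytes_per_day: float,
                     removal_bytes_per_day: float):
    """Return days until the device is full, or None if the net stored
    volume is steady or shrinking (the device will not run out)."""
    net = ingest_bytes_per_day - removal_bytes_per_day
    if net <= 0:
        return None
    return free_bytes / net

# Example: 900 GB free, 60 GB/day ingested, 30 GB/day expiring.
print(capacity_outlook(9e11, 6e10, 3e10))  # 30.0 (days until full)
```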
Regarding claim 17, Lu and Kaushik teach:
17. The video surveillance system of claim 16, furthermore, Lu teaches: wherein the computing system comprises a plurality of processors in communication over a network.
[0034] Encoders 204 may be implemented using a variety of techniques, including suitable software or firmware on a general or special purpose hardware processor, which, in general, may be the same or a different processor as compared to that which implements statistics estimator 206. In some instances, a general purpose processor with separate hardware acceleration, or dedicated hardware, can be used. All the encoders could be implemented on the same processor, each could be on its own processor, or one or more could be on a first processor, one or more on another processor, and so on. Video can be provided to the encoders in a manner similar to that described with respect to the statistics estimator.
Lu, 0027, 0034, emphasis added.
Regarding claim 18, Lu and Kaushik teach:
18. The video surveillance system of claim 16, furthermore, Kaushik teaches: wherein the computing system is further adapted to: determine a remaining storage capacity of the storage device;
Kaushik, 0030, 0044 and 0059
and determine when the storage device will run out of storage capacity based on the remaining storage capacity of the storage device,
[0044] The FIG. 2 process concludes with modifying a provisioning of storage resources of the one or more storage systems based at least in part on the overall storage resource capacity prediction in step 208. Step 208 may include determining a given one of a plurality of different time ranges when available storage resources of the one or more storage systems is expected to fall below a designated threshold based on the overall storage resource capacity prediction. Modifying the provisioning of the storage resources of the one or more storage systems may be based at least in part on the given time range when the available storage resources of the one or more storage systems is expected to fall below the designated threshold. When the given time range is within a designated time from a current time, modifying the provisioning may comprise increasing storage resources of the one or more storage systems. When the given time range is greater than the designated time from a current time, modifying the provisioning may comprise generating an alert indicating an expected time when the available storage resources of the one or more storage systems is expected to fall below the designated threshold.
Kaushik, 0030, 0044 and 0059, emphasis added.
Furthermore, Lu teaches: the first rate and the second rate.
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2 Joint Encoding Controller 208, emphasis added.
Regarding claim 19, Lu teaches:
19. A video surveillance system, comprising: first video content generated by a first camera;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 camera 102, emphasis added.
second video content generated by a second camera;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 camera 102, emphasis added.
a storage device;
Lu, 0020 and Fig. 1 storage device 108
and a computing system adapted to receive the first video content from the first camera and the second video content from the second camera;
Lu, 0020 and Figs. 1 and 2, Encoder Engine 104 and encoders 204
the computing system adapted to store the first video content and the second video content in the storage device;
[0020] In one non-limiting exemplary embodiment 100, as shown in FIG. 1, the uncompressed output video sources of the N analog cameras 102(1) through 102(N) are first fed to an encoder engine 104, which employs a state-of-the-art video compression technique to jointly compress the multiple video sources efficiently. Instead of the fixed rate or constant rate encoding as done in the prior art, the encoder engine 104 determines the encoding rate for each video input source based on its statistics and uses statistical multiplexing to jointly reduce or minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The input of source statistics and storage constraints to the engine is shown at 106. The jointly statistical based compressed streams are then archived on the storage device 108.
Lu, 0020 and Fig. 1 Encoder Engine 104 and Storage Device 108, emphasis added.
the computing system adapted to determine whether the storage device will run out of storage capacity based in part on a first operating parameter of the first camera, a second operating parameter of the second camera, and a retention policy associated with the first video content and the second video content stored in the storage device;
[0026] FIG. 2 shows non-limiting exemplary details of the encoder engine 104 of FIG. 1. N uncompressed video sources 202(1) through 202(N) are handled (for example, from cameras 102(1) through 102(N) of FIG. 1). Each video source 202 is provided to both a corresponding encoder 204(1) through 204(N) and a statistics estimator 206. Statistics estimator 206 provides the estimated statistics for each uncompressed video source 202(1) through 202(N) to joint encoding controller 208. As shown at 212, joint encoding controller 208 also takes as input the storage constraints; for example, the required quality, the length of time the video is to be archived for, the available capacity in storage device(s) 108, and so on. Joint encoding controller 208 then provides input to each encoder 204(1) through 204(N) to set the encoding (bit) rate for each video input source based on its (as well as the other video sources') statistics. Statistical multiplexing is used to minimize the overall storage cost for the given quality requirement and the targeted storage time lengths. The jointly statistical based compressed streams are then temporarily stored to buffer/output formatter 210, where optional operations of output stream formatting, such as packetization or multiplexing, may be done and the output is archived on the storage device(s) 108.
Lu, 0026, 0029 and Fig. 2 Joint Encoding Controller 208, emphasis added.
However, Lu fails to explicitly teach the following limitation, which Kaushik teaches:
and the computing system adapted to generate an alert in response to determining that the storage device will run out of storage capacity.
[0030] In some embodiments, the storage resource capacity modeling framework 102 generates alerts and notifications that are provided over network 108 to client devices 104, or to a system administrator, information technology (IT) manager, or other authorized personnel via one or more host agents. Such host agents may be implemented via computing or processing devices associated with a system administrator, IT manager or other authorized personnel. Such devices can illustratively comprise mobile telephones, laptop computers, tablet computers, desktop computers, or other types of computers or processing devices configured for communication over network 108 with the storage resource capacity modeling framework 102. For example, a given host agent may comprise a mobile telephone equipped with a mobile application configured to receive alerts from the storage resource capacity modeling framework 102 and to provide an interface for the host agent to select particular remedial measures for responding to the alert or notification. Examples of such remedial measures may include altering the provisioning of storage resources for a particular user. This may include provisioning or allocating additional storage resources to a particular user (e.g., in response to a notification or alert indicating that the currently provisioned storage resource capacity for the user will be exceeded at some designated time, or that the amount of available or free storage resources allocated to the user will fall below some designated threshold, etc.). This may alternatively include removing storage resources from a set of provisioned storage resources of a particular user (e.g., in response to a notification or alert indicating under-utilization of the set of provisioned storage resources). 
In some cases, the remedial measure may include migrating data stored in a set of provisioned storage resources in response to an alert or notification (e.g., from a first set of storage systems to a second set of storage systems, where the first and second sets of storage systems may have different performance characteristics, capacity, etc.).
Kaushik, 0030, 0044 and 0059, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kaushik with the system of Lu so that the computing system is adapted to generate an alert in response to determining that the storage device will run out of storage capacity, thereby enabling efficient allocation and provisioning of storage resources (Kaushik, 0003).
Regarding claim 20, Lu and Kaushik teach:
20. The video surveillance system of claim 1, furthermore, Lu teaches: wherein the first operating parameter of the first camera includes a first rate at which the first camera transmits video content to the computing system; and wherein the second operating parameter of the second camera includes a second rate at which the second camera transmits video content to the computing system.
Lu, 0026, 0029 and Fig. 2
Regarding claim 22, Lu and Kaushik teach:
22. The video surveillance system of claim 1, furthermore, Kaushik teaches: comprising a second storage device; wherein the computing system is adapted to store new video content generated by the first camera or the second camera in the second storage device in response to determining that the storage device will run out of storage capacity.
Kaushik, 0030, 0044 and 0059
Regarding claim 23, Lu and Kaushik teach:
23. The video surveillance system of claim 1, furthermore, Kaushik teaches: wherein the retention policy comprises a first amount of time for which the first video content generated by the first camera remains stored in the storage device;
Kaushik, 0030, 0044, 0059 and 0061
and wherein the retention policy comprises a second amount of time for which the second video content generated by the second camera remains stored in the storage device, the second amount of time different than the first amount of time.
Kaushik, 0030, 0044, 0059 and 0061
Regarding claim 24, Lu and Kaushik teach:
24. The video surveillance system of claim 23, furthermore, Kaushik teaches: comprising an access control device adapted to generate security data;
Kaushik, 0061
wherein the retention policy comprises a third amount of time for which the security data generated by the access control device remains stored in the storage device.
Kaushik, 0061
Regarding claim 25, Lu and Kaushik teach:
25. The video surveillance system of claim 24, furthermore, Kaushik teaches: wherein the computing system is further adapted to: determine a remaining storage capacity of the storage device; and determine when the storage device will run out of storage capacity based in part on the first amount of time, the second amount of time, and the third amount of time.
Kaushik, 0030, 0044, 0059 and 0061
Claim Rejections - 35 USC § 103
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 2011/0050895) in view of Kaushik et al. (US 2021/0034278), and further in view of Lasko (US 2015/0215586).
Regarding claim 15, Lu and Kaushik teach:
15. The video surveillance system of claim 1; however, Lu and Kaushik fail to explicitly teach, while Lasko teaches: further comprising: a plurality of display devices;
[0041] User devices 106 include fixed and mobile computing devices, such as cellular phones, smart phones, and/or tablet devices communicate with the streaming server 140 over a network cloud 110 that typically include various segments including enterprise networks, service provider networks, and cellular data networks.
[0042] The user devices 106 each include a display 150. Operators interact with the user devices 106 using user input devices 126 such as a keyboard, computer mouse, tablet pens, and touch-enabled display screens, in examples.
Lasko, 0041-0042 and Fig. 1 devices 106, emphasis added
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lasko with the system of Lu and Kaushik to provide a plurality of display devices, such that operators can select a stream from the periphery panes to display as a higher bit rate stream in a focus pane (Lasko, Abstract).
Furthermore, Lu teaches: and a plurality of cameras.
Lu, Fig. 1 cameras 102
Claim Rejections - 35 USC § 103
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 2011/0050895) in view of Kaushik et al. (US 2021/0034278), and further in view of Edpalm et al. (US 2018/0309998).
Regarding claim 21, Lu and Kaushik teach:
21. The video surveillance system of claim 1; however, Lu and Kaushik fail to explicitly teach, while Edpalm teaches: wherein the first operating parameter of the first camera includes at least one of a first historical bitrate at which the first camera generates video content and a first historical amount by which the first camera deviates from a first target bitrate while generating video content;
[0013] According to some variants of the method, determining the first allowable bitrate comprises receiving input on historical variations of output bitrate of previously encoded video sequences. Historical data on bitrate variations may be useful for controlling the output bitrate of the encoder. In many locations, there are regularly occurring variations in activity in the scene. For instance, if a camera is mounted for monitoring outside the staff entrance of a factory building, there may be a lot of activity in the morning, at lunch, and in the afternoon, but less activity during working hours and during the night. With knowledge of such variations, it may be possible to allocate more bits to periods of more expected activity, and to use less bits during periods with less expected activity.
Edpalm, 0013, 0015, 0055, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Edpalm with the system of Lu and Kaushik so that the first operating parameter of the first camera includes at least one of a first historical bitrate at which the first camera generates video content and a first historical amount by which the first camera deviates from a first target bitrate while generating video content, such that the long-term bit budget, the first allowable bitrate, and the second allowable bitrate are complied with (Edpalm, Abstract).
Furthermore, Edpalm teaches: and wherein the second operating parameter of the second camera includes at least one of a second historical bitrate at which the second camera generates video content and a second historical amount by which the second camera deviates from a second target bitrate while generating video content.
Edpalm, 0013, 0015, 0055.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481