DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments, filed 10/1/2025, have been entered and made of record. Claims 1-3 and 5-11 have been amended. Claims 4 and 12-20 have been cancelled. Claims 1-3 and 5-11 are pending.
Response to Arguments
Applicant's arguments filed 10/1/2025 have been fully considered but they are not persuasive.
Regarding page 2, the applicant states, “Claim 1 is amended to recite allowable subject matter from claim 4. Accordingly, Applicant submits this reason for rejection is moot and request allowance of claim 1.”
In response, the examiner respectfully disagrees. Yang teaches a security surveillance system comprising: a machine learning variance analysis server, communicatively coupled to a plurality of content-triggered surveillance sensors including a plurality of cameras, said server comprising a processor executing a set of instructions for (“edge resources 110 collectively include a plurality of visual sensors 120 (e.g., cameras) for capturing visual representations and data associated with their surroundings. In some embodiments, for example, certain end-user devices 112 and/or IoT devices 114 may include one or more cameras and/or other types of visual sensors 120. Visual sensors 120 may include any type of visual or optical sensors, such as cameras, ultraviolet (UV) sensors, laser rangefinders (e.g., light detection and ranging (LIDAR)), infrared (IR) sensors, electro-optical/infrared (EO/IR) sensors, and so forth” in Para. [0051]; “Deep learning neural networks, such as CNNs, are frequently used for image processing, including object/edge detection, segmentation, and classification, among other examples. Images are typically read from disk during both training and inferencing, for example, using background threads to pre-fetch images from disk and overlap the disk fetch and decode times with the other compute threads” in Para. [0218]):
receiving metadata from a camera of the plurality of cameras (“For each instance or stream of visual data (e.g., each stored video), any corresponding visual metadata that has already been generated is stored in a metadata database or cache” in Para. [0169]);
determining a normal range of content from an aggregation of the metadata and training each camera of the plurality of cameras on a region of interest (“The data aggregators 326 may collect data from any number of the sensors 328, and perform the back-end processing function for the analysis.” in Para. [0082]; Para. [0165]; “The analytic image format 1807 provides fast access to image data and regions of interest within an image. Moreover, since the analytic image format 1807 stores image data as an array, the analytic image format 1807 enables visual compute library 1806 to perform computations directly on the array of image data. Visual compute library 1806 can also convert images between the analytic image format 1807 and traditional image formats 1808 (e.g., JPEG and PNG). Similarly, videos may be stored using a machine-friendly video format designed to facilitate machine-based analysis.” in Para. [0216]);
a security event determination rule filter device executing a set of instructions to trigger on an incident in a region of interest; and a physical access control facilitation actuator executing a set of instructions to actuate a portal between a first responder and one or more of a location of an incident and an object interception (Para. [0425]-[0427], [0474], [0507]-[0510]).
Yang is silent about a security event determination rule filter device executing a set of instructions to trigger on one or more of: a current metadata which is outside a machine learned range of normal historical value by a standard deviation; when an object exiting a region of interest is unequal to the object entering said region of interest; when occupants exit a vehicle stopped in a region of interest; when a package is discarded in a region of interest; when a type of object intrudes on a region of interest which is inappropriate for the type of object; on an amplitude of metadata which exceeds a threshold; or on intrusion of an object type into an incompatible region of interest.
Smits teaches a security event determination rule filter device executing a set of instructions to trigger on one or more of the above-recited conditions (“the virtual fence may be watching a scene set in a largely desert area with a small amount of vegetation spread over the ground. In one application, the surveillance system on the fence may be scanning for mobile objects coming into the scene. Although due to wind, various plants 104 may move around periodically, in general their base positions remain stationary. Objects that move through the scene may be of great interest but need to be identified first. If the moving object may be cow 106, the object may be tagged as an object of less interest; it may still be tracked by the system but may be mostly ignored for analysis of intrusion. However, if the moving object may be identified as a person such as person 108 or a ground vehicle, this may trigger an alert for closer surveillance” in Para. [0051]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang with the above teachings of Smits in order to improve safety and quality of life by detecting and responding to unwanted objects.
Therefore, the combination of Yang and Smits discloses a security surveillance system comprising the machine learning variance analysis server receiving metadata, determining a normal range of content from an aggregation of the metadata, and training each camera on a region of interest (Yang); the security event determination rule filter device triggering on one or more of the conditions recited above (Smits); and the physical access control facilitation actuator actuating a portal between a first responder and one or more of a location of an incident and an object interception (Yang), as recited in amended independent claim 1.
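For illustration only, and forming no part of the claims or of the cited references, the following minimal Python sketch shows one plausible reading of the recited variance analysis: a per-camera normal range is learned from aggregated historical metadata, and a current value is flagged when it falls outside that range by a standard deviation. All identifiers (MetadataSample, NormalRangeModel, etc.) are hypothetical.

    # Hypothetical sketch; not from the application or the cited references.
    from dataclasses import dataclass, field
    from statistics import mean, stdev

    @dataclass
    class MetadataSample:
        camera_id: str
        value: float  # e.g., a motion-amplitude figure reported by a camera

    @dataclass
    class NormalRangeModel:
        # Aggregated historical metadata, keyed by camera.
        history: dict[str, list[float]] = field(default_factory=dict)

        def ingest(self, sample: MetadataSample) -> None:
            self.history.setdefault(sample.camera_id, []).append(sample.value)

        def outside_normal(self, sample: MetadataSample, k: float = 1.0) -> bool:
            # Trigger when the current value deviates from the learned mean by
            # more than k standard deviations (k=1 mirrors the claim language
            # "by a standard deviation").
            values = self.history.get(sample.camera_id, [])
            if len(values) < 2:  # not enough history to learn a range yet
                return False
            return abs(sample.value - mean(values)) > k * stdev(values)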
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Yang in view of Smits
Claims 1, 2, 5, 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (USPubN 2019/0043351; hereinafter Yang) in view of Smits et al. (USPubN 2024/0329248; hereinafter Smits).
As per claim 1, Yang teaches a security surveillance system comprising:
a machine learning variance analysis server, communicatively coupled to a plurality of content-triggered surveillance sensors including a plurality of cameras, said server comprising a processor executing a set of instructions for (“edge resources 110 collectively include a plurality of visual sensors 120 (e.g., cameras) for capturing visual representations and data associated with their surroundings. In some embodiments, for example, certain end-user devices 112 and/or IoT devices 114 may include one or more cameras and/or other types of visual sensors 120. Visual sensors 120 may include any type of visual or optical sensors, such as cameras, ultraviolet (UV) sensors, laser rangefinders (e.g., light detection and ranging (LIDAR)), infrared (IR) sensors, electro-optical/infrared (EO/IR) sensors, and so forth” in Para. [0051]; “Deep learning neural networks, such as CNNs, are frequently used for image processing, including object/edge detection, segmentation, and classification, among other examples. Images are typically read from disk during both training and inferencing, for example, using background threads to pre-fetch images from disk and overlap the disk fetch and decode times with the other compute threads” in Para. [0218]):
receiving metadata from a camera of the plurality of cameras (“For each instance or stream of visual data (e.g., each stored video), any corresponding visual metadata that has already been generated is stored in a metadata database or cache” in Para. [0169]);
determining a normal range of content from an aggregation of the metadata and training each camera of the plurality of cameras on a region of interest (“The data aggregators 326 may collect data from any number of the sensors 328, and perform the back-end processing function for the analysis.” in Para. [0082]; Para. [0165]; “The analytic image format 1807 provides fast access to image data and regions of interest within an image. Moreover, since the analytic image format 1807 stores image data as an array, the analytic image format 1807 enables visual compute library 1806 to perform computations directly on the array of image data. Visual compute library 1806 can also convert images between the analytic image format 1807 and traditional image formats 1808 (e.g., JPEG and PNG). Similarly, videos may be stored using a machine-friendly video format designed to facilitate machine-based analysis.” in Para. [0216]);
a security event determination rule filter device executing a set of instructions to trigger on an incident in a region of interest; and a physical access control facilitation actuator executing a set of instructions to actuate a portal between a first responder and one or more of a location of an incident and an object interception (Para. [0425]-[0427], [0474], [0507]-[0510]).
Yang is silent about a security event determination rule filter device executing a set of instructions to trigger on one or more of: a current metadata which is outside a machine learned range of normal historical value by a standard deviation; when an object exiting a region of interest is unequal to the object entering said region of interest; when occupants exit a vehicle stopped in a region of interest; when a package is discarded in a region of interest; when a type of object intrudes on a region of interest which is inappropriate for the type of object; on an amplitude of metadata which exceeds a threshold; or on intrusion of an object type into an incompatible region of interest.
Smits teaches a security event determination rule filter device executing a set of instructions to trigger on one or more of: a current metadata which is outside a machine learned range of normal historical value by a standard deviation; when an object exiting a region of interest is unequal to the object entering said region of interest; when occupants exit a vehicle stopped in a region of interest; when a package is discarded in a region of interest; when a type of object intrudes on a region of interest which is inappropriate for the type of object; on an amplitude of metadata which exceeds a threshold; or on intrusion of an object type into an incompatible region of interest (“the virtual fence may be watching a scene set in a largely desert area with a small amount of vegetation spread over the ground. In one application, the surveillance system on the fence may be scanning for mobile objects coming into the scene. Although due to wind, various plants 104 may move around periodically, in general their base positions remain stationary. Objects that move through the scene may be of great interest but need to be identified first. If the moving object may be cow 106, the object may be tagged as an object of less interest; it may still be tracked by the system but may be mostly ignored for analysis of intrusion. However, if the moving object may be identified as a person such as person 108 or a ground vehicle, this may trigger an alert for closer surveillance” in Para. [0051]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang with the above teachings of Smits in order to improve safety and quality of life by detecting and responding to unwanted objects.
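For illustration only, and again forming no part of the record, a minimal sketch of the rule-filter style of trigger recited above might treat each region of interest as a set of simple counters and type constraints. All identifiers (RegionOfInterest, allowed_types, etc.) are hypothetical.

    # Hypothetical sketch; not the cited references' implementation.
    class RegionOfInterest:
        def __init__(self, name: str, allowed_types: set[str]) -> None:
            self.name = name
            self.allowed_types = allowed_types  # object types compatible here
            self.entered = 0
            self.exited = 0

        def record_entry(self, object_type: str) -> list[str]:
            # Trigger on intrusion of an incompatible object type.
            self.entered += 1
            if object_type not in self.allowed_types:
                return [f"ALERT: {object_type} intruded on {self.name}"]
            return []

        def record_exit(self) -> None:
            self.exited += 1

        def check_balance(self) -> list[str]:
            # Trigger when objects exiting are unequal to objects entering
            # (e.g., a package discarded in the region of interest).
            if self.exited != self.entered:
                return [f"ALERT: entry/exit mismatch in {self.name} "
                        f"({self.entered} in, {self.exited} out)"]
            return []

    # Usage: a vehicle-only region flags a person entering it.
    dock = RegionOfInterest("loading dock", allowed_types={"vehicle"})
    alerts = dock.record_entry("person")  # incompatible type -> alert
    dock.record_exit()                    # the person leaves; counts balance
    print(alerts + dock.check_balance())  # -> one intrusion alert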
As per claim 2, Yang and Smits teach all of the limitations of claim 1.
Yang teaches wherein the content-triggered mesh network of surveillance sensors comprises at least one of: an optical sensor; a chemical sensor; a vibration sensor; a combustion sensor; an audio sensor; an acceleration sensor; an infrared sensor; a temperature sensor; a three dimensional image sensor; an electro-magnetic sensor; a microphone and speaker; a pressure sensor; and a radar transceiver (Para. [0051], [0072], [0140], [0214]).
As per claim 5, Yang and Smits teach all of the limitations of claim 1.
Yang teaches wherein the physical access control facilitation actuator comprises one or more of a mobile security sensor elevator, an airborne security sensor launcher, a portal actuator, and a barrier actuator (Para. [0425]-[0427], [0474], [0507]-[0510]).
As per claim 10, Yang and Smits teach all of the limitations of claim 1.
Yang teaches wherein the physical access control facilitation actuator executes a set of instructions to signal at least one of: an alarm, an announcement, and an illumination device (Para. [0087]).
As per claim 11, Yang and Smits teach all of the limitations of claim 1.
Yang teaches wherein the physical access control facilitation actuator executes a set of instructions to control one or more devices for environmental adjustments (Para. [0087]).
Yang in view of Smits and Allen
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (USPubN 2019/0043351; hereinafter Yang) in view of Smits et al. (USPubN 2024/0329248; hereinafter Smits), further in view of Allen et al. (USPubN 2021/0357091; hereinafter Allen).
As per claim 6, Yang and Smits teach all of the limitations of claim 1.
Yang and Smits are silent about wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map guiding a first responder to one of an incident and an intercept location.
Allen teaches wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map guiding a first responder to one of an incident and an intercept location (Para. [0202]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang and Smits with the above teachings of Allen in order to improve the user experience of the surveillance system.
As per claim 7, Yang and Smits teach all of the limitations of claim 1.
Yang and Smits are silent about wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a location of a security event.
Allen teaches wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a location of a security event (Para. [0202]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang and Smits with the above teachings of Allen in order to improve the user experience of the surveillance system.
Yang in view of Smits and Gersten
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (USPubN 2019/0043351; hereinafter Yang) in view of Smits et al. (USPubN 2024/0329248; hereinafter Smits), further in view of Gersten (USPubN 2018/0053394).
As per claim 8, Yang and Smits teach all of the limitations of claim 1.
Yang and Smits are silent about wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a video stream of the most likely paths available to an object subsequent to a security event.
Gersten teaches wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a video stream of the most likely paths available to an object subsequent to a security event (Para. [0092]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang and Smits with the above teachings of Gersten in order to improve the user experience of the surveillance system.
As per claim 9, Yang and Smits teach all of the limitations of claim 1.
Yang and Smits are silent about wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a set of video streams of a path taken by an object preceding a security event.
Gersten teaches wherein the physical access control facilitation actuator executes a set of instructions to display a projection of a map and a set of video streams of a path taken by an object preceding a security event (Para. [0092]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yang and Smits with the above teachings of Gersten in order to improve the user experience of the surveillance system.
Allowable Subject Matter
Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNGHYOUN PARK whose telephone number is (571)270-1333. The examiner can normally be reached M - Thur 6:00 am - 4 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI Q TRAN can be reached at (571)272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUNGHYOUN PARK/Examiner, Art Unit 2484