Prosecution Insights
Last updated: April 19, 2026
Application No. 18/035,272

AUTONOMOUS SAFETY VIOLATION DETECTION SYSTEM THROUGH VIRTUAL FENCING

Non-Final OA (§102 / §103)
Filed: May 03, 2023
Examiner: KHALID, OMER
Art Unit: 2422
Tech Center: 2400 — Computer Networks
Assignee: Astoria Solutions Pte Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 66% (324 granted / 488 resolved; +8.4% vs TC avg, above average)
Interview Lift: +23.2% for resolved cases with interview (strong)
Avg Prosecution: 2y 10m typical; 25 applications currently pending
Total Applications: 513, across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center average figures are estimates. Based on career data from 488 resolved cases.
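The headline examiner figures can be cross-checked from the raw counts. A minimal sketch, assuming the career allow rate is simply granted over resolved cases and that the Tech Center average is implied by the stated +8.4% delta (it is not given directly):

```python
# Cross-check of the reported examiner statistics.
# Assumptions: career allow rate = granted / resolved;
# the Tech Center average is inferred from the reported +8.4% delta.

granted, resolved = 324, 488

allow_rate = 100 * granted / resolved   # percent
implied_tc_avg = allow_rate - 8.4       # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1f}%")      # 66.4%, shown above as 66%
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The 66% headline figure is a rounded form of 324/488 ≈ 66.4%.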

Office Action

§102 / §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/12/2026 has been entered.

Response to Amendment

1. This Office action is in response to communications filed 2/12/2026. Claims 1, 4, 6, 8, 11, 12, 34 are amended. Claims 2, 5, 9, 10, 13 are original. Claims 15, 16, 35, 36, 38, 39 are previously presented. Claims 3, 7, 14, 17-33, 37 are canceled.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 2, 4-6, 8-13, 15-16, 34-36, 38-39 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claim(s) 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over KR 101858396 B1, Kim [English translation provided], in view of U.S. Patent 8,199,009, Brunetti.

2. Regarding Claim 1, Kim discloses a method for configuring a virtual fencing system comprising an image sensor located in a region (Abstract, "a camera module 110 photographing an image of a surveillance region including a security facility and monitoring an object entering the surveillance region; and a control module setting a virtual fence on the image of the photographed surveillance region"), the method comprising:

presenting an image on a user interface of a computing device (Fig. 1; control module 121 interface), the image having been captured using the image sensor (Fig. 3; page 4 para 8, "the camera module 110 captures an image of the surveillance area according to step S210, and transmits the captured image according to step S220. When the control module 120 receives the image, the control module 120 identifies the object in the received image according to step S230.");

receiving through the user interface a boundary (Figs. 1, 2, 5; page 4 para 2, "control module 120 receives the image of the surveillance area before the camera module 110 monitors the object. It is also possible to set a virtual fence by receiving a plurality of coordinates from the manager on the previously received image. the control module 120 receives information on a plurality of coordinates. In one embodiment, when an administrator inputs a specific coordinate through the interface 121, the user can receive this value and use it as information for generating a virtual fence."); and

storing, in connection with the image sensor, a plurality of boundaries, including the boundary (Fig. 1; page 3 para 4, "the camera module 110 combines the boundary line estimated as an object in the image photographed by the thermal imaging camera 111"; page 3 para 5, "The database 123 stores a plurality of information about a specific object"), the storing comprising: the first level comprising an audio and/or visual alert output by a physical output device in the region (Figs. 4-5; page 5 para 1, "an audible alarm may be output to a sensor or speaker disposed on site to allow an intruder to access the security facility"); and the second level comprising transmission of a notification to a device remote from the region (Fig. 2: 130; page 2 para 9, "The control module 120 transmits an alarm to the administrator terminal 130" [i.e., remote from the site]).

However, Kim does not explicitly disclose storing a first boundary of the plurality of boundaries in connection with a first level of a plurality of levels of an alert system, and storing a second boundary of the plurality of boundaries in connection with a second level of the plurality of levels of the alert system.

Brunetti teaches storing a first boundary (col. 2 lines 19-20, "The warning zone is monitored against inadvertent incursion into a protected space") of the plurality of boundaries in connection with a first level of a plurality of levels of an alert system (col. 2 lines 47-48, "The workstation may store image information associated with alarm or warning events"), and storing a second boundary (col. 2 lines 20-21, "the alarm zone is monitored against intentional intrusions thereinto") of the plurality of boundaries in connection with a second level of the plurality of levels of the alert system (col. 2 lines 47-48, "The workstation may store image information associated with alarm or warning events").

It would have been obvious to one of ordinary skill in the art to modify the monitoring system of Kim to incorporate the multizone monitoring approach of Brunetti in order to provide tiered alerts based on proximity to a protected boundary, thereby allowing a system to generate a warning when a person approaches a boundary and a higher-level alarm when the boundary is breached, improving security response time.

3. Regarding Claim 2, Kim in view of Brunetti discloses the method of claim 1. Kim discloses wherein the virtual fencing system comprises a device (Abstract, "a control module setting a virtual fence on the image of the photographed surveillance region"), the device (camera module 110) comprising: the image sensor (camera module 110); non-volatile memory (inherent); and a processor (inherent), coupled to the image sensor (camera module 110) and the non-volatile memory, the processor configured to detect a human in an image captured using the image sensor (page 2 para 1, "detect a human body, and the sensor detects heat corresponding to the human body"), and wherein the method further comprises storing the boundary in the non-volatile memory (page 3 para 5, "database 123 stores a plurality of information about a specific object in advance").

Kim may not explicitly disclose storing the boundary in the non-volatile memory. Brunetti teaches wherein the method further comprises storing the boundary in the non-volatile memory (Fig. 5; col. 2 lines 47-48, "The workstation may store image information associated with alarm or warning events"; see col. 4 lines 55-64, DVR).

It would have been obvious to one of ordinary skill in the art to modify the monitoring system of Kim to incorporate the multizone monitoring approach of Brunetti in order to provide tiered alerts based on proximity to a protected boundary, thereby allowing a system to generate a warning when a person approaches a boundary and a higher-level alarm when the boundary is breached, improving security response time.

4. Regarding Claim 4, Kim in view of Brunetti discloses the method of claim 1, wherein: Kim discloses the first level of the alert system (Fig. 1: 501; Abstract, "identifying the object as an intruder [i.e., human, see page 2 para 2] to generate an alarm according to an identification result"; page 4 para 7, "FIG. 5, the pre-monitoring area 1 501 and the pre-monitoring area 2 502 can be further set based on the virtual fence. When either or both of the two are inputted into the system 100 as the advance warning area and an object is entered in this area, the camera module 110 can be controlled to track the movement path and movement of the object."); and the second level of the alert system (page 4 para 7, "FIG. 5, the pre-monitoring area 1 501 and the pre-monitoring area 2 502 can be further set based on the virtual fence. When either or both of the two are inputted into the system 100 as the advance warning area and an object is entered in this area").

Kim may not explicitly disclose that the first level of the alert system comprises an alert indicating presence of a human in proximity of the boundary, or that the second level of the alert system comprises an alert indicating a breach of the second boundary. Brunetti teaches an alert indicating presence of a human in proximity of the boundary (col. 2 lines 19-20, "The warning zone is monitored against inadvertent incursion into a protected space"), and that the second level of the alert system comprises an alert indicating a breach of the second boundary (col. 2 lines 20-21, "the alarm zone is monitored against intentional intrusions thereinto").

It would have been obvious to one of ordinary skill in the art to modify the monitoring system of Kim to incorporate the multizone monitoring approach of Brunetti in order to provide tiered alerts based on proximity to a protected boundary, thereby allowing a system to generate a warning when a person approaches a boundary and a higher-level alarm when the boundary is breached, improving security response time.

5. Regarding Claim 5, Kim in view of Brunetti discloses the method of claim 2, wherein: Brunetti discloses the device comprises a network interface (Abstract, "The system may also include workstation configured to display and store the image information."); and the method further comprising configuring the device to communicate through the network interface to a server (Abstract, "A remote area monitoring system"; communication with a remote monitoring system implied, see col. 2 line 53, "data can be transmitted to other sites"; also col. 10 lines 51-53).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claim(s) 6, 8, 9, 10, 11, 12, 13, 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by WO 2016199495A1, Yamashita [copy provided].

7. Regarding Claim 6, Yamashita discloses a method of operating a device in a region (Fig. 1, Fig. 3, page 7 para 6, PO1 camera = sensor SU), the device comprising a sensor (see Fig. 1: SU-1, sensor; Fig. 2: SU (sensor device)) to implement a virtual fence (Figs. 4, 11; see page 8 para 4, boundary lines BL and BL2) in the region, memory, and a physical output device (page 7 para 1, a fixed terminal device SP requesting the video; includes a streaming processing program for streaming to the mobile terminal device TA), wherein the sensor is configured to output a stream of image frames (Fig. 1: SU-1; Fig. 2: SU (sensor) includes a streaming processing unit 38), the method comprising:

processing output of the sensor (Fig. 2: a streaming processing unit 38 and control processing unit 3) to: identify a human in the region (identifying the monitored person Ob in which the predetermined action is detected (identifier information) for identifying and identifying the sensor device SU detecting the monitored person Ob, see page 16 para 1); and compute at least one position parameter of the human in the region (Figs. 10-11: whether or not the distance Wa (position parameter) between the behavior detection line AL and the toe position FP of the monitored person Ob on the image has exceeded a preset third threshold value, page 10 para 3; the behavior detection processing unit (processing) detects a predetermined behavior of the monitored person (human) based on images acquired by the camera, page 16);

selectively storing, in the memory of the device in the region and based on the at least one position parameter (page 7 para 5, the behavior detection algorithm storage unit 53 stores a plurality of different behavior detection algorithms in association with the positional relationship of the camera 1 that captured the image with respect to the setting area AR), a plurality of image frames of the stream of image frames output by the sensor of the device in the region (page 7 para 1, a fixed terminal device SP requesting the video, see Fig. 1); and

selectively, based on a comparison of the at least one position parameter to a boundary within the region, outputting, via the physical output device, an audio and/or visual alert configured to alert the human in the region (when the distance Wa is equal to or greater than the third threshold Th3 (Wa ≥ Th3) [i.e., comparison], it is determined that the monitored person Ob has awakened [i.e., indication of an event]; on the other hand, otherwise person Ob is not waking up, page 10 para 3).

8. Regarding Claim 8, Yamashita discloses the method of claim 6, wherein: storing the plurality of image frames comprises recording the plurality of image frames as a video (page 7 para 1, a fixed terminal device SP requesting the video, see Fig. 1; storage unit 5 includes ROM, EEPROM [nonvolatile storage element], see page 13; the action detection line setting processing unit stores the action detection lines AL received by input in the action detection line storage unit of the storage unit [i.e., storing the boundary in memory]), and selectively storing the plurality of image frames (page 7 para 1, a fixed terminal device SP requesting the video, see Fig. 1; the control processing unit 3 uses the algorithm selection unit 35 to select a positional relationship from among the plurality of behavior detection algorithms stored in the behavior detection algorithm storage unit 53 of the storage unit 5, page 17 para 4) further comprises: repetitively processing image frames of the stream of image frames to determine whether the human is represented in the image frames (the control processing unit 3 executes the action detection process for each frame by repeatedly executing the processes S44 and S45, page 17); and based on determining that the human is no longer represented in the image frames, ending recording of the stream of image frames (based on the image acquired by the camera 1 to detect a predetermined action in the monitored person Ob (in this embodiment, presence/absence of waking up and presence/absence) and notify the notification processing unit 37 of the detection result, and this action detection process S45 is finished, page 17 para 6).

9. Regarding Claim 9, Yamashita discloses the method of claim 6, wherein computing the at least one position parameter of the human comprises detecting a direction of motion of the human (detection can be performed by a horizontal behavior detection algorithm that uses differences in each of the horizontal and vertical movement amounts of the head and each of the horizontal and vertical movement amounts of the torso (direction of motion) as to accompany wakeup and off, page 21 para 1).

10. Regarding Claim 10, Yamashita discloses the method of claim 6, wherein the comparison of the at least one position parameter to the boundary comprises determining whether the human breached the boundary (Fig. 8; when it is determined that the monitored person Ob is after leaving (second level, breach of boundary) the bed, the leaving state flag is set to "1", page 9 para 2-3).

11.
Regarding Claim 11, Yamashita discloses the method of claim 6, further comprising transmitting an indication of a safety violation in conjunction with at least one or more of the stored plurality of image frames (page 2 para 2, for example, falls from the bed [i.e., safety violation] or falls while walking, or gets out of bed and hesitates; page 7 para 3, the setting area storage unit 51 stores a predetermined area in the image as a setting area; page 20 para 7 and page 21 para 1, the algorithm selection unit 35 selects a behavior detection algorithm based on the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 from a plurality of behavior detection algorithms for each predetermined period…. The fall can be detected based on the head and trunk extracted from the human body region by pattern matching using, for example, a head pattern and a trunk pattern. For example, when the positional relationship of the camera 1 is directly above PO1).

12. Regarding Claim 12, Yamashita discloses the method of claim 11, wherein the one or more of the stored plurality of image frames comprise a snapshot representing the human (page 7 para 3, "The setting area storage unit 51 stores a predetermined area in the image as a setting area. The setting area is an area used for at least one of the plurality of action detection algorithms to detect a predetermined action in the monitored person Ob.").

13. Regarding Claim 13, Yamashita discloses the method of claim 6, wherein the comparison of the at least one position parameter to the boundary comprises determining whether the human is in proximity to the boundary (when it is determined that the monitored person Ob is after wake-up (in proximity but not yet in breach of the boundary), the wake-up status flag is set to "1" and the notification processing unit is notified of the wakeup (first level alert), page 10).

14. Regarding Claim 15, Yamashita discloses the method of claim 6, further comprising electronically transmitting a message comprising a warning of a safety violation (a notification processing program that notifies the outside of the predetermined behavior detected by the behavior detection program, page 15 para 4).

Claim Rejections - 35 USC § 103

15. Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over WO 2016199495A1, Yamashita [copy provided], in view of WO 2020/240274, Malach et al. (hereafter Malach).

16. Regarding Claim 16, Yamashita discloses the method of claim 6. However, Yamashita does not explicitly disclose wherein: the region comprises a facility entrance comprising a pedestrian lane and a vehicle lane; and the boundary is positioned between the pedestrian lane and the vehicle lane, the method further comprising: outputting an indication that the human has breached the boundary and/or entered the vehicle lane.

Malach teaches the region comprises a facility entrance ([0317], lane marks may also include freeway entrance) comprising a pedestrian lane ([0317], bike lane) and a vehicle lane ([0317], HOV lane); the boundary is positioned between the pedestrian lane and the vehicle lane ([0362], a barrier associated with a road boundary, a turn lane, a merge lane, an exit ramp, a crosswalk, an intersection, a lane split, a directional arrow, or any similar features that may indicate direction of travel for a vehicle); and outputting the indication of the event comprises outputting an indication that the human has breached the boundary and/or entered the vehicle lane ([0126], system 100 may analyze the collected data and issue warnings and/or alerts to vehicle occupants based on the analysis of the collected data [i.e., human entering vehicle lane]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to capture a human using the camera system as taught in Yamashita to detect a predetermined behavior of a monitored person, combining it with the invention of Malach to notify a user by providing a notification/alert when the person is in the vehicle lane, hence improving safety for drivers.

Claim Rejections - 35 USC § 103

17. Claim(s) 34 is rejected under 35 U.S.C. 103 as being unpatentable over KR 101858396 B1, Kim [English translation provided], in view of U.S. Patent Application 2014/0240455, Subbian et al. (hereinafter Subbian).

18. Regarding Claim 34, Kim discloses a device configured for implementing a virtual fence (Abstract, "a camera module 110 photographing an image of a surveillance region including a security facility and monitoring an object entering the surveillance region; and a control module setting a virtual fence on the image of the photographed surveillance region"), the device comprising:

a physical structure (page 1 para 1, "physical structures is not so high, and it has been changed to be equipped with other kinds of alarm systems such as a lowering of the height or a ccTV due to a bad effect on aesthetics. In accordance with this change, the physical structure has not changed at all, and ccTV and crime prevention systems are being deployed everywhere");

one or more image sensors mounted to the physical structure and configured to output image frames (page 1 para 1, quoted above);

a housing mounted to the physical structure (camera module inherently includes camera housing, electronics enclosure);

at least one physical output device at least partially disposed in the housing (Fig. 4, camera module capturing images of surveillance area, see page 3 para 1);

at least one processor (control module/processing unit performing image analysis) at least partially disposed in the housing (camera module includes a processor, which inherently includes camera housing, electronics enclosure) and coupled to the one or more image sensors (camera module 110) and the at least one physical output device (terminal 130); and

a non-transitory computer-readable storage medium (inherent) at least partially disposed in the housing and storing (page 6 claim 3, "pre-stored object information in a database"):

a boundary (page 4 para 2, "control module 120 receives the image of the surveillance area before the camera module 110 monitors the object. It is also possible to set a virtual fence by receiving a plurality of coordinates from the manager on the previously received image. the control module 120 receives information on a plurality of coordinates. In one embodiment, when an administrator inputs a specific coordinate through the interface 121, the user can receive this value and use it as information for generating a virtual fence.");

processor-executable instructions (control module/processing unit performing image analysis) that, when executed by the at least one processor, cause the at least one processor to perform: identifying a human in an image frame output by the one or more image sensors (page 2 para 1, "detect a human body, and the sensor detects heat corresponding to the human body"); computing at least one position parameter of the human based on a result of the identifying (page 4 para 6, "the control module 120 identifies an intruder as an intruder, the control module 120 transmits a control signal to the camera module 110 to track the object in order to check the movement or direction of the object"); and selectively causing the at least one physical output device to output an audio and/or visual alert based on a comparison of the at least one position parameter to the boundary (Figs. 4-5; page 4 para 7, "the control module 120 can set a part of the area adjacent to the virtual fence in the surveillance area as the advance alarm area. For example, as shown in FIG. 5, the pre-monitoring area 1 501 and the pre-monitoring area 2 502 can be further set based on the virtual fence. When either or both of the two are inputted into the system 100 as the advance warning area and an object is entered in this area"; page 5 para 1, "the control module 120 generates an alarm to the intruder and the terminal 130 of the administrator to access the security facility. In one embodiment, an audible alarm may be output to a sensor or speaker disposed on site to allow an intruder to access the security facility. The administrator terminal 130 may transmit an alarm message indicating that an intruder has been detected." Hence, generating an alarm when an object/person crosses a virtual fence, which inherently requires comparing the object's position to the stored boundary.).

However, Kim does not explicitly disclose selectively storing a plurality of image frames output by at least one image sensor of the one or more image sensors based at least in part on the at least one position parameter.

Subbian teaches selectively storing a plurality of image frames output by at least one image sensor of the one or more image sensors based at least in part on the at least one position parameter ([0009], "the security system 10 may be one or more sensors 14, 16 that detect events within a secured area 12. The sensors 14, 16 may be door or window switches used to detect intruders entering the secured area 12." [0010], "the security system 10 may be one or more cameras 20, 22. Video frames from the cameras 20, 22 may be saved continuously or intermittently." [0014], "Upon detection of the activation of one of the sensors, the processor may select a camera with a field of view covering the sensor and record video from that camera.").

It would have been obvious to one of ordinary skill in the art to store the captured image frames when the intrusion event is detected in order to preserve video evidence of the event.

19. Regarding Claim 35, Kim in view of Subbian discloses the device of claim 34. Kim discloses wherein: the at least one processor comprises a single-board processor (most surveillance cameras use an embedded board, processor, and memory); and the non-transitory computer-readable storage medium comprises non-volatile memory on the single-board processor (implementing the processor and memory of the surveillance device on a single-board computer is a well-known design choice to reduce system size and cost).

20. Regarding Claim 36, Kim in view of Subbian discloses the device of claim 34. Kim discloses wherein the one or more image sensors comprise at least one of a visible light sensor (page 3 para 2, "a plurality of cameras may be provided, some of which may be a thermal camera 111 and others may be an optical camera 112"; optical sensors can detect and measure light, which includes visible light), an infrared radiation sensor, an ultrasonic sensor (claimed in the alternative), a LIDAR sensor (claimed in the alternative), a RADAR sensor (claimed in the alternative), and a laser sensor (claimed in the alternative).

Claim Rejections - 35 USC § 103

21. Claim(s) 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Subbian as applied to claim 34 above, and further in view of CN 109214316A, Wang [English translation provided].

22. Regarding Claim 38, Kim in view of Subbian discloses the device of claim 34, wherein the processor-executable instructions further cause the at least one processor to perform: Kim teaches boundary comparison (determines whether object enters region, see page 4 para 6 and page 5 para 1). Kim in view of Subbian does not explicitly disclose selectively transmitting the stored plurality of image frames based on the comparison of the at least one position parameter to the boundary.

Wang teaches selectively transmitting the stored plurality of image frames based on the comparison of the at least one position parameter to the boundary (page 6 para 5, "selecting the image with the highest score as the target of interest in the time sequence of the target picture transmitted to the server end"; page 8 para 11, "determining the current location of the object of interest according to the image position"; page 9 para 1-2, "the attribute type identification module 13 is further used for determining the predetermined perimeter protection area"; page 7 para 9, "can also be stored target images containing the object of interest, the object of interest attribute category, a current position of the object of interest, the device identifier of the image collecting device, for later investigation and evidence use").

It would have been obvious to a person having ordinary skill in the art at the time of the invention to modify the intrusion detection system of Kim to incorporate the event-based video storage and transmission techniques taught by Wang and Subbian in order to provide recorded visual evidence of detected intrusions and enable remote monitoring systems to receive video associated with the detected event, hence improving security monitoring by allowing remote video of detected boundary violations.

23. Regarding Claim 39, Kim in view of Subbian discloses the device of claim 34, wherein the processor-executable instructions further cause the at least one processor to perform: Kim teaches boundary comparison (determines whether object enters region, see page 4 para 6 and page 5 para 1). Subbian teaches a plurality of image frames output ([0010], "the security system 10 may be one or more cameras 20, 22. Video frames from the cameras 20, 22 may be saved continuously or intermittently." [0014], "Upon detection of the activation of one of the sensors, the processor may select a camera with a field of view covering the sensor and record video from that camera."). Kim in view of Subbian does not explicitly disclose selectively transmitting a plurality of image frames output by at least one image sensor of the one or more image sensors as a video stream based on the comparison of the at least one position parameter to the boundary.
Wang teaches selectively transmitting a plurality of image frames output by at least one image sensor of the one or more image sensors as a video stream based on the comparison of the at least one position parameter to the boundary.(page 6 para 5, “selecting the image with the highest score as the target of interest in the time sequence of the target picture transmitted to the server end”. Page 8 para 11, “determining the current location of the object of interest according to the image position.” Page 9 para 1-2 “the attribute type identification module 13 is further used for determining the predetermined perimeter protection area”, Page 7 para 9, “can also be stored target images containing the object of interest, the object of interest attribute category, a current position of the object of interest, the device identifier of the image collecting device, for later investigation and evidence use”). It would have been obvious to a person having ordinary skill in the art at the time of the invention to modify the intrusion detection system of Kim to incorporate the event-based video storage and transmission techniques taught by Wang and Subbian in order to provide recorded visual evidence of detected intrusions and enable remote monitoring systems to receive video associated with the detected event. Hence, improving security monitoring by allowing remote video of detected boundary violations. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMER KHALID whose telephone number is (571)270-5997. The examiner can normally be reached Monday- Friday 9am-7pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Miller can be reached at (571) 272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /OMER KHALID/Examiner, Art Unit 2422 /BRIAN P YENKE/Primary Examiner, Art Unit 2422

Prosecution Timeline

May 03, 2023
Application Filed
Aug 13, 2024
Response after Non-Final Action
Nov 25, 2024
Non-Final Rejection — §102, §103
May 05, 2025
Response Filed
Aug 13, 2025
Final Rejection — §102, §103
Nov 11, 2025
Applicant Interview (Telephonic)
Nov 11, 2025
Examiner Interview Summary
Feb 12, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12598399
IMAGE SYNCHRONIZATION FOR MULTIPLE IMAGE SENSORS
2y 5m to grant · Granted Apr 07, 2026
Patent 12576814
Method for Determining a Cleaning Information, Method for Training of a Neural Network Algorithm, Control Unit, Camera Sensor System, Vehicle, Computer Program and Storage Medium
2y 5m to grant · Granted Mar 17, 2026
Patent 12563165
INSTALLATION INFORMATION ACQUISITION METHOD, CORRECTION METHOD, PROGRAM, AND INSTALLATION INFORMATION ACQUISITION SYSTEM
2y 5m to grant · Granted Feb 24, 2026
Patent 12549690
VIDEO TRANSMISSION SYSTEM, VIDEO TRANSMISSION APPARATUS, VIDEO TRANSMISSION METHOD, AND RECORDING MEDIUM
2y 5m to grant · Granted Feb 10, 2026
Patent 12548344
VIDEO PROCESSING DEVICE AND VIDEO PROCESSING SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
90%
With Interview (+23.2%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
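The note above says the grant probability is derived from the career allow rate; a minimal sketch of that arithmetic, assuming the displayed figures are simply the rounded career rate and an additive interview lift (an assumption about the tool's methodology, not documented behavior):

```python
# Hypothetical derivation of the displayed projections from the
# examiner statistics shown on this page. Rounding and the additive
# interview lift are assumptions, not the tool's stated methodology.

granted = 324          # career grants by this examiner
resolved = 488         # career resolved cases
interview_lift = 23.2  # percentage-point lift with an interview

base_rate = 100 * granted / resolved          # 66.39... percent
with_interview = base_rate + interview_lift   # 89.59... percent

print(round(base_rate))       # 66  -> matches "66% Grant Probability"
print(round(with_interview))  # 90  -> matches "90% With Interview"
```

Under this assumption the two headline numbers are consistent with each other: 324/488 rounds to 66%, and 66.4% + 23.2% rounds to 90%.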
