Prosecution Insights
Last updated: April 19, 2026
Application No. 18/291,374

MONITORING SYSTEM, MONITORING APPARATUS, AND MONITORING METHOD

Final Rejection — §102 / §103
Filed: Jan 23, 2024
Examiner: NIRJHAR, NASIM NAZRUL
Art Unit: 2896
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 6m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (above average) — 379 granted / 512 resolved; +6.0% vs TC avg
Interview Lift: +18.7% (strong) — among resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline); 37 applications currently pending
Total Applications: 549 across all art units (career history)

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 75.4% (+35.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 512 resolved cases.
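The headline allow-rate figures above are simple ratios over the reported counts; a minimal sketch of the arithmetic follows. Note that the 68.0% Tech Center baseline is inferred from the "+6.0% vs TC avg" delta, not a number reported on this page:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

career = allow_rate(379, 512)   # ~74.0, matching the 74% career allow rate shown
tc_avg = career - 6.0           # baseline inferred from the "+6.0% vs TC avg" delta
print(f"Career allow rate: {career:.1f}% ({career - tc_avg:+.1f} vs TC avg)")
```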

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the correspondence filed on 11/25/25. Claims 1-22 are presented for examination. Applicant is requested to file the foreign priority document.

IDS Considerations

The information disclosure statement (IDS) submitted on 1/23/24 is being considered by the examiner, as the submission is in compliance with the provisions of 37 CFR 1.97.

Response to Arguments

Applicant's arguments filed 11/25/25 with respect to claims 1-20 have been considered but are not persuasive. Applicant argued on pages 12-13 that the prior art does not teach the last two limitations of the independent claims. The examiner disagrees, because Larrew teaches manage, (Larrew [0036] The VMS 100 also includes at least one master node 140. The master node 140 may be operative to manage the operation and/or configuration of the camera nodes 120 to receive and/or process video data from the cameras 110, coordinate storage resources of the VMS 100, generate and maintain a database related to captured video data of the VMS 100, and/or facilitate communication with a client 130 for access to video data of the system 100.) regarding each of a plurality of surveillants who monitor the sensor data (Larrew [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor [sensor] an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings.) of at least one of the plurality of monitoring targets, (Larrew Fig. 8 [0033] FIG. 2, a VMS 100 for management of edge surveillance devices in a surveillance system according to the present disclosure is depicted schematically.
The VMS 100 includes a plurality of cameras 110 that are each in operative communication with a network 115. For example, as shown in FIG. 2, cameras 110a through 110g are shown. However, it should be understood that additional or fewer cameras may be provided in a VMS 100 according to the present disclosure without limitation. [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings. In this regard, changes in the VMS 100 may be monitored, and the camera allocator 144 may be responsive to modify a camera allocation from a first camera allocation configuration to a second camera allocation configuration to improve or maintain system performance. The allocation parameter may be any one or more of a plurality of parameters that are monitored and used in determining camera allocations. Thus, the allocation parameter may change in response to a number of events that may occur in the VMS 100 as described in greater detail below. only data from specific cameras 110 may be retained beyond an initial retainer period. [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) may be maintained without a reduction in size) a load index indicating labor of monitoring work; and (Larrew [0032] Given the abstraction between the video cameras 110 and the camera nodes 120 of the VMS 100, the configuration of the processing of the video data may be flexible and adaptable, which may allow for the application of even relatively complex analytical models to some or all of the video data with dynamic provisioning in response to peak analytical loads [load index indicating labor].) 
determine, in a case where it is determined that the predetermined event has occurred in one or more of the plurality of monitoring targets, (Larrew Fig. 2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) which of the plurality of surveillants are responsible for work of monitoring the one or more of the plurality of monitoring targets based on the predetermined event that has occurred (Larrew [0045] The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. 
Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable [predetermined event] camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. In this regard, the system 100 provides increased redundancy and flexibility in relation to processing video data from the cameras 100. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) and the load index. (Larrew [0044] the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. [0045] even in the absence of a camera node 120 failure, the video data feeds of the cameras 110 may be load balanced to the camera nodes 120 to allow for different analytical models or the like to be applied.)

Because of the claim amendments and the change in claim scope, the Shayne reference is no longer relied upon to reject the independent claims.

Claim Rejections - 35 USC § 102

The following is a quotation of 35 U.S.C. 102(a)(1)/(a)(2), which forms the basis for the anticipation rejections set forth in this Office action: (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5-6, 8, 12-13, 15, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Larrew (U.S. Pub. No. 20210409792 A1).

Regarding claims 1, 8, and 15: [Claim 1] Larrew teaches a monitoring apparatus comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to: receive sensor data from each of a plurality of monitoring targets; (Larrew [0086] FIG. 13 illustrates an example schematic of a processing device 1300 suitable for implementing aspects of the disclosed technology. For instance, the processing device 1300 may generally describe the architecture of a camera node 130, a master node 140, and/or a client 130. The processing device 1300 includes one or more processor unit(s) 1302, memory 1304, a display 1306, and other interfaces 1308 (e.g., buttons). The memory 1304 generally includes both volatile memory (e.g., RAM) and nonvolatile memory (e.g., flash memory). An operating system 1310, such as the Microsoft Windows® operating system, the Apple macOS operating system, or the Linux operating system, resides in the memory 1304 and is executed by the processor unit(s) 1302, although it should be understood that other operating systems may be employed.) analyze a state of each of the plurality of monitoring targets based on the sensor data (Larrew [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor [sensor] an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings.) and determine, based on the state of each of the plurality of monitoring targets, (Larrew Fig. 2, FIG. 9 [0038] individual ones of the management functions may be individually allocated to one or more camera nodes 120 using leader election.
This provides a robust system in which even the unavailability of a master node 140 or a camera node 120 executing some management functions can be readily corrected by applying leader election to elect a new master node 140 in the system or to reallocate [based on analyze a state] management functionality to a new camera node 120. [0028] enterprise server-based systems 20 include static camera-to-server mappings such that in the event of a server unavailability or failure, all cameras 10 mapped to the server 16 that fails become unavailable for live video streams or storage of video data, thus rendering the system 20 ineffective in the event of such a failure. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) whether or not a predetermined event has occurred; (Larrew [0017] FIG. 9 depicts an example of a second camera allocation configuration of a plurality of video cameras and camera nodes of a distributed video management system in response to the detection of a camera node being unavailable [predetermined event has occurred].) manage, (Larrew [0036] The VMS 100 also includes at least one master node 140. The master node 140 may be operative to manage the operation and/or configuration of the camera nodes 120 to receive and/or process video data from the cameras 110, coordinate storage resources of the VMS 100, generate and maintain a database related to captured video data of the VMS 100, and/or facilitate communication with a client 130 for access to video data of the system 100.) regarding each of a plurality of surveillants who monitor the sensor data (Larrew [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. 
In this regard, the camera allocator 144 may monitor [sensor] an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings.) of at least one of the plurality of monitoring targets, (Larrew Fig. 8 [0033] FIG. 2, a VMS 100 for management of edge surveillance devices in a surveillance system according to the present disclosure is depicted schematically. The VMS 100 includes a plurality of cameras 110 that are each in operative communication with a network 115. For example, as shown in FIG. 2, cameras 110a through 110g are shown. However, it should be understood that additional or fewer cameras may be provided in a VMS 100 according to the present disclosure without limitation. [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings. In this regard, changes in the VMS 100 may be monitored, and the camera allocator 144 may be responsive to modify a camera allocation from a first camera allocation configuration to a second camera allocation configuration to improve or maintain system performance. The allocation parameter may be any one or more of a plurality of parameters that are monitored and used in determining camera allocations. Thus, the allocation parameter may change in response to a number of events that may occur in the VMS 100 as described in greater detail below. only data from specific cameras 110 may be retained beyond an initial retainer period. 
[0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) may be maintained without a reduction in size) a load index indicating labor of monitoring work; and (Larrew [0032] Given the abstraction between the video cameras 110 and the camera nodes 120 of the VMS 100, the configuration of the processing of the video data may be flexible and adaptable, which may allow for the application of even relatively complex analytical models to some or all of the video data with dynamic provisioning in response to peak analytical loads [load index indicating labor].) determine, in a case where it is determined that the predetermined event has occurred in one or more of the plurality of monitoring targets, (Larrew Fig. 2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. 
Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) which of the plurality of surveillants are responsible for work of monitoring the one or more of the plurality of monitoring targets based on the predetermined event that has occurred (Larrew [0045] The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable [predetermined event] camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. In this regard, the system 100 provides increased redundancy and flexibility in relation to processing video data from the cameras 100. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) and the load index. (Larrew [0044] the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. [0045] even in the absence of a camera node 120 failure, the video data feeds of the cameras 110 may be load balanced to the camera nodes 120 to allow for different analytical models or the like to be applied.) 
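The mechanism the rejection repeatedly cites from Larrew [0045] (detect an unavailable camera node and reassign its cameras to surviving nodes in a load-balanced way) can be sketched as follows. This is an illustrative reconstruction, not Larrew's code: the class name, the camera/node identifiers, and the use of camera count as the load measure are all assumptions.

```python
class CameraAllocator:
    """Hypothetical sketch of Larrew's camera allocator 144 behavior."""

    def __init__(self, mapping: dict[str, str]):
        # camera id -> node id (the "camera-to-node mapping")
        self.mapping = dict(mapping)

    def node_unavailable(self, failed: str) -> None:
        """On node failure, reassign its cameras to the least-loaded survivor."""
        survivors = set(self.mapping.values()) - {failed}
        for cam, node in self.mapping.items():
            if node == failed:
                # pick the surviving node currently handling the fewest cameras
                target = min(survivors,
                             key=lambda n: list(self.mapping.values()).count(n))
                self.mapping[cam] = target

# e.g. node 120b fails; its cameras are moved to the remaining node 120a
alloc = CameraAllocator({"110a": "120a", "110b": "120a",
                         "110c": "120b", "110d": "120b"})
alloc.node_unavailable("120b")
```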
Regarding claims 5, 12, and 19: [Claim 5] Larrew teaches the monitoring apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to: manage monitoring work for which the plurality of respective surveillants can be responsible, and specify one or more of the plurality of surveillants who can be responsible for work of monitoring the monitoring target where it has been determined that the predetermined event has occurred, and determines a surveillant who is responsible for the monitoring work from among the one or more specified surveillants based on the predetermined event that has occurred and the load index. (Claim 5 is rejected for the same reason as claim 1. Larrew Fig. 9 [0052] FIG. 10 the camera allocator 144 of the master node 140 may detect this change and modify the first camera allocation configuration to the second camera allocation configuration such that camera 110d is associated with camera node 120a. [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings. In this regard, changes in the VMS 100 may be monitored, and the camera allocator 144 may be responsive to modify a camera allocation from a first camera allocation configuration to a second camera allocation configuration to improve or maintain system performance [load index]. The allocation parameter may be any one or more of a plurality of parameters that are monitored and used in determining camera allocations. Thus, the allocation parameter may change in response to a number of events that may occur in the VMS 100 as described in greater detail below.
[0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 [who is responsible for the monitoring work] that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. In this regard, the system 100 provides increased redundancy and flexibility in relation to processing video data from the cameras 100. Further still, even in the absence of a camera node 120 failure, the video data feeds of the cameras 110 may be load balanced to the camera nodes 120 to allow for different analytical models or the like to be applied. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size)

Regarding claims 6, 13, and 20: [Claim 6] Larrew teaches the monitoring apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to: manage handling abilities of the plurality of respective surveillants of handling the predetermined event in the monitoring work, and determine a surveillant who is responsible for the monitoring work further based on the handling abilities.
(Claim 6 is rejected for the same reason as claims 1 and 5. Larrew Fig. 9 [0052] FIG. 10 the camera allocator 144 of the master node 140 may detect this change and modify the first camera allocation configuration to the second camera allocation configuration such that camera 110d is associated with camera node 120a. Larrew [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings. In this regard, changes in the VMS 100 may be monitored, and the camera allocator 144 may be responsive to modify a camera allocation from a first camera allocation configuration to a second camera allocation configuration to improve or maintain system performance [handling abilities]. The allocation parameter may be any one or more of a plurality of parameters that are monitored and used in determining camera allocations. Thus, the allocation parameter may change in response to a number of events that may occur in the VMS 100 as described in greater detail below. Larrew Fig. 2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120.
Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. [0053] FIG. 11 one class of cameras may be given priority over the other class based on a particular scenario occurring which may either relate to the VMS 100 (e.g., a computational capacity/load of the VMS 100) or an external occurrence (e.g., an alarm at the facility, shift change at a facility, etc.).)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 9-10, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Larrew (U.S. Pub. No. 20210409792 A1).

Regarding claims 2, 9, and 16: [Claim 2] Larrew teaches the monitoring apparatus according to claim 1, calculate importance of monitoring a predetermined event that has occurred in the monitoring target, and (Larrew [0019] FIG. 11 depicts an example of a second camera allocation configuration of a plurality of video cameras and camera nodes of a distributed video management system in which a video camera is disconnected from any camera node based on a priority for the video camera. Larrew Fig.
2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. [0053] FIG. 11 further illustrates an example in which a total computational capacity of the VMS 100 based on the available camera nodes 120 is exceeded. In the scenario depicted in FIG. 11, a camera 110d may be disconnected from any camera node 120 such that the camera 110d may not have its video data processed by the VMS 100. That is, cameras may be selectively “dropped” if the overall VMS 100 capacity is exceeded. The cameras may have a priority value assigned, which may in part be based on an allocation parameter as described above. For instance, if two cameras are provided that have overlapping spatial coverage (e.g., one camera monitors an area from a first direction and another camera monitors the same area but from a different direction), one of the cameras having overlapping spatial coverage may have a relatively low priority. In turn, upon disconnection of one of the cameras, continuity of monitoring of the area covered by the cameras may be maintained, while reducing the computational load of the system. 
Upon restoration of available computational load (e.g., due to a change in the computational load of other cameras or by adding another node to the system), the disconnected camera may be reallocated to a camera node using a load-balanced approach. In other contexts, other allocation parameters may be used to determine priority, including establishing classes of cameras. For instance, cameras may be allocated to an “internal camera” class or a “periphery camera” class based on a location/field of view of cameras being internal to a facility or external to a facility. In this case, one class of cameras may be given priority [importance] over the other class based on a particular scenario occurring which may either relate to the VMS 100 (e.g., a computational capacity/load of the VMS 100) or an external occurrence (e.g., an alarm at the facility, shift change at a facility, etc.). Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) update, when the determined surveillant has performed the monitoring work, the load index of this surveillant (Larrew [0044] the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. [0045] even in the absence of a camera node 120 failure, the video data feeds of the cameras 110 may be load balanced to the camera nodes 120 to allow for different analytical models or the like to be applied.) in accordance with the importance of monitoring. (Larrew [0065] As can be appreciated, the currency of such data is not as important as in the context of real-time data. A different one or more of the encoded video format, container format, and communication protocol may be selected. 
For example, in such a context in which the currency of the data is of less importance, a more resilient or more bandwidth-efficient encoded video format, container format, and communication protocol may be selected that has a higher latency for providing video to the client 130. [0051] As such, the allocation parameter may relate to the video data of the camera nodes 110 being allocated. The allocation parameter may, for example, relate to a time-based parameter, the spatial coverage of the cameras, a computational load of processing the video data of a camera, an assigned class of camera, an assigned priority [importance] of a camera. [0053] FIG. 11 if two cameras are provided that have overlapping spatial coverage (e.g., one camera monitors an area from a first direction and another camera monitors the same area but from a different direction), one of the cameras having overlapping spatial coverage may have a relatively low priority. cameras may be allocated to an “internal camera” class or a “periphery camera” class based on a location/field of view of cameras being internal to a facility or external to a facility. In this case, one class of cameras may be given priority over the other class based on a particular scenario occurring which may either relate to the VMS 100 (e.g., a computational capacity/load of the VMS 100) or an external occurrence (e.g., an alarm at the facility, shift change at a facility, etc. [0065] Upon selection of an appropriate communication protocol, the network interface 126 may communicate the encoded video packets to a standard web browser at a client device using the communication protocol. In one example, a client 130 may request to view video data from a given video camera 110 in real-time.) Larrew teaches both calculating the importance of monitoring and the load index of this surveillant.
However, Larrew does not explicitly state updating, when the determined surveillant has performed the monitoring work, the load index of this surveillant in accordance with the importance of monitoring. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, because it would bring efficiency, and such functions are combinable because Larrew [0108] states that certain embodiments described hereinabove may be combinable with other described embodiments and/or arranged in other ways (e.g., process elements may be performed in other sequences).

Regarding claims 3, 10, and 17: [Claim 3] Larrew teaches the monitoring apparatus according to claim 2, wherein the at least one processor is configured to execute the instructions to determine, when it has been determined that predetermined events have occurred in the plurality of monitoring targets, (Larrew [0032] The abstracted architecture of the VMS 100 may also allow for flexibility in processing data. For instance, the camera nodes 120 of the VMS 100 may apply analytical models to the video data processed at the camera node 120 to perform video analysis on the video data. The analytical model may generate analytical metadata regarding the video data. Non-limiting examples of analytical approaches include object detection, object tracking, facial recognition, pattern recognition/detection, or any other appropriate video analysis technique. Given the abstraction between the video cameras 110 and the camera nodes 120 of the VMS 100, the configuration of the processing of the video data may be flexible and adaptable, which may allow for the application of even relatively complex analytical models to some or all of the video data with dynamic provisioning in response to peak analytical loads. Larrew Fig.
2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120. The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144. Larrew [0017] FIG. 9 depicts an example of a second camera allocation configuration of a plurality of video cameras and camera nodes of a distributed video management system in response to the detection of a camera node being unavailable. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) surveillants who are responsible for the monitoring work in a descending order of the importance of monitoring. (Larrew [0051] As such, the allocation parameter may relate to the video data of the camera nodes 110 being allocated. The allocation parameter may, for example, relate to a time-based parameter, the spatial coverage of the cameras, a computational load of processing the video data of a camera, an assigned class of camera, an assigned priority of a camera. The allocation parameter may be at least in part affected by the nature of the video data of a given camera. 
For instance, a given camera may present video data that is more computationally demanding than another camera. For instance, a first camera may be directed at a main entrance of a building. A second camera may be located in an internal hallway that is not heavily trafficked. Video analysis may be applied to both sets of video data from the first camera and the second camera to perform facial recognition. The video data from the first camera may be more computationally demanding on a camera node than the video data from the second camera simply by virtue of the nature/location of the first camera being at the main entrance and including many faces compared to the second camera. In this regard, the camera allocation parameter may be at least in part based on the video data of the particular cameras to be allocated to the camera nodes. [0065] in such a context in which the currency of the data is of less importance, a more resilient or more bandwidth-efficient encoded video format, container format, and communication protocol may be selected that has a higher latency for providing video to the client 130.)

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Larrew (U.S. Pub. No. 20210409792 A1), in view of Shayne (U.S. Pub. No. 20210232824 A1). Regarding claims 4, 11, and 18: [Claim 4] Larrew teaches the monitoring apparatus according to claim 2. Larrew does not explicitly teach wherein the monitoring target is a mobile body, and the at least one processor is configured to execute the instructions to calculate the importance of monitoring in accordance with a number of passengers in the mobile body, a situation of the road on which the mobile body is traveling, or a combination thereof.
However, Shayne teaches wherein the monitoring target is a mobile body, and the at least one processor is configured to execute the instructions to calculate the importance of monitoring in accordance with a number of passengers in the mobile body, a situation of the road on which the mobile body is traveling, or a combination thereof. (Shayne [0040] In the example of FIG. 1, the camera 102 includes a rule that an action is triggered when a vehicle crosses a virtual line crossing in the driveway 103. The action can include, for example, the server 122 sending a notification to a resident, or the server 122 sending a command to activate driveway lights. A user can evaluate the video clip 104 to determine the ground truth 136. The ground truth 136 can include that an object is present, that the object is the vehicle 105, and that the vehicle 105 moves into the driveway 103 and parks. The ground truth 136 can also include that the vehicle 105 crosses the virtual line crossing at a time 3.0 seconds of the video clip 104. The ground truth 136 can also include the color, size, make, and model of the vehicle 105. [0033] The evaluating server 130 can receive additional video clips from additional servers. For example, the evaluating server 130 can receive video clip 108 captured by camera 106 and sent by server 124. The video clip 108 includes an image of a person walking.) It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Larrew, further incorporating Shayne in video/camera technology. One would be motivated to do so to incorporate calculating the importance of monitoring in accordance with a number of passengers in the mobile body and a situation of the road on which the mobile body is traveling. This would accommodate enhanced capabilities with predictable results.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Larrew (U.S. Pub. No. 20210409792 A1), in view of Dedeoglu (U.S. Pub. No. 20140333776 A1). Regarding claims 7 and 14: [Claim 7] Larrew teaches the monitoring apparatus according to claim 1, wherein: the at least one processor is further configured to execute the instructions to display, on a display device used by the determined surveillant, (Larrew Fig. 6 [0086] FIG. 13 illustrates an example schematic of a processing device 1300 suitable for implementing aspects of the disclosed technology. For instance, the processing device 1300 may generally describe the architecture of a camera node 130, a master node 140, and/or a client 130. The processing device 1300 includes one or more processor unit(s) 1302, memory 1304, a display 1306, ) one or more pieces of sensor data received from the monitoring target (Larrew [0044] Furthermore, the camera allocator 144 may be operative to dynamically reconfigure the camera-to-node mappings in a load balancing process. In this regard, the camera allocator 144 may monitor [sensor] an allocation parameter at each camera node 120 to determine whether to modify the camera-to-node mappings. Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) where it has been determined that the predetermined event has occurred, (Larrew Fig. 2 [0045] For example, in the event of a malfunction, power loss, or another event that results in the unavailability of a camera node 120, the camera allocator 144 may detect or otherwise be notified of the unavailability [predetermined event] of the camera node. In turn, the camera allocator 144 may reassign video cameras previously associated with the unavailable node to another node 120.
The camera allocator 144 may communicate with the reassigned cameras 110 to update the instructions for communication with the new camera node 120. Alternatively, the newly assigned camera node may assume the role of establishing contact with and processing video data from the video cameras 110 that were previously in communication with the unavailable camera node 120 to update the instructions and establish the new camera-to-node assignment based on the new assignment provided by the camera allocator 144.) and the at least one processor is configured to execute the instructions to display, (Larrew [0076] The video data provided to the client 130 for rendering in the video display 402 may include metadata such as analytics metadata. As described above, such analytics metadata may relate to any appropriate video analysis applied to the video data and may include, for example, highlighting of detected objects, identification of objects, identification of individuals, object tracks, etc. Thus, the video data may be annotated to include some analytics metadata. The analytics metadata may be embodied in the video data or may be provided via a separate data channel. In the example in which the analytics metadata is provided via a separate channel, the client 130 may receive the analytics metadata and annotate the video data in the video display 402 when rendered in the user interface 400.) when it is not determined that the predetermined event has occurred in one or more of the monitoring targets, (Larrew [0017] FIG. 9 depicts an example of a second camera allocation configuration of a plurality of video cameras and camera nodes of a distributed video management system in response to the detection of a camera node being unavailable [not determined that the predetermined event has occurred]. 
Larrew [0058] Thus, a highly valuable video data feed (e.g., video data related to a critical location such as a building entrance or a highly secure area of a facility) [plurality of monitoring targets] may be maintained without a reduction in size) Larrew does not explicitly teach a first monitoring screen for monitoring the plurality of monitoring targets on the display device to cause the surveillant to monitor the plurality of monitoring targets using the first monitoring screen, and display, when surveillant who is responsible for work of monitoring the monitoring target where it has been determined that the predetermined event has occurred is determined, second monitoring screen for monitoring the monitoring target where it has been determined that the predetermined event has occurred on a display device used by the determined surveillant to cause the surveillant to monitor the monitoring target where it has been determined that the predetermined event has occurred. However, Dedeoglu teaches a first monitoring screen for monitoring the plurality of monitoring targets on the display device to cause the surveillant to monitor the plurality of monitoring targets using the first monitoring screen, and display, when surveillant who is responsible for work of monitoring the monitoring target where it has been determined that the predetermined event has occurred is determined, (Dedeoglu Fig. 2 [0033] the physical composition may be dynamically determined based on the number of streams selected for display, i.e., the fewer the number of streams, the larger the display area for each stream. In another example, in some embodiments, surveillance video streams may be selected based on priority of the events detected in the streams. In another example, in some embodiments, surveillance video streams may be selected based on the types of the events detected in the streams, e.g., video streams with "bicycle detected" events are selected.)
a second monitoring screen for monitoring the monitoring target where it has been determined that the predetermined event has occurred on a display device used by the determined surveillant to cause the surveillant to monitor the monitoring target where it has been determined that the predetermined event has occurred. (Dedeoglu Fig. 2 [0058] As shown in FIG. 7, surveillance video streams generated by multiple surveillance video cameras are received 700 and are displayed 702 on multiple monitors. The surveillance video streams and accompanying metadata streams, if any, are analyzed for event detection. Analysis of surveillance video streams and/or metadata streams to detect events is previously described herein. When events are present 704, a summary view of selected 706 video streams with events is composed and relevant portions of frames of the selected video streams are displayed 708 in the summary view. As previously mentioned, the relevant part of a frame may be, for example, the part of the frame corresponding to the zone in which the event was detected or the part of the frame corresponding to the bounding box of an object that triggered the event. [0059] Further, as previously mentioned, selection of surveillance video streams to be included a summary view and the physical composition of the summary view, i.e., where each video stream is to be displayed and how much display area is allocated to each stream, is implementation dependent. For example, in some embodiments, surveillance streams may be selected on a first in first out (FIFO) basis. In another example, in some embodiments, a fixed physical composition may be used in which the display area is divided into some number of fixed size windows. In another example, in some embodiments, the physical composition may be dynamically determined based on the number of streams selected for display, i.e., the fewer the number of streams, the larger the display area for each stream. 
In another example, in some embodiments, surveillance video streams may be selected based on priority of the events detected in the streams. In another example, in some embodiments, surveillance video streams may be selected based on the types of the events detected in the streams, e.g., video streams with "bicycle detected" events are selected) It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Larrew, further incorporating Dedeoglu in video/camera technology. One would be motivated to do so to incorporate a second monitoring screen for monitoring the monitoring target where it has been determined that the predetermined event has occurred on a display device used by the determined surveillant to cause the surveillant to monitor the monitoring target where it has been determined that the predetermined event has occurred. This functionality will improve user experience with predictable results.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Larrew (U.S. Pub. No. 20210409792 A1), in view of Gupte (U.S. Pub. No. 20220035684 A1). Regarding claim 22: Larrew teaches the monitoring apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions to manage the load index indicating labor of monitoring work and determine, when it is determined that the predetermined event has occurred in one or more of the monitoring targets, one of the plurality of surveillants who is responsible for work of monitoring the monitoring target where it has been determined that the predetermined event has occurred based on the predetermined event that has occurred and the load index.
(Claim 22 is rejected for the same reason as claim 1.) Larrew does not explicitly teach and handling abilities, the handling abilities indicating time required to issue instructions for the predetermined event for each of the plurality of surveillants; and the handling abilities. However, Gupte teaches and handling abilities, the handling abilities indicating time required to issue instructions for the predetermined event for each of the plurality of surveillants; and the handling abilities. (Gupte [0057] the load balancer maintains a table of clients including a hardware accelerator assigned to perform the work submitted by the client and an average time taken to perform the work for various hardware accelerators. In one example, all of the clients are assigned to the VIC, after an interval of time, the average time taken to perform the work is determined. In such examples, if the average time taken to perform the work exceeds a threshold, the client is assigned to another hardware accelerator such as the GPU. In various embodiments, this process is repeated until the average time for all clients is below the threshold. As described in greater detail below, the threshold can be determined based at least in part on various factors such as a frame rate of the video and/or a frame processing deadline, in accordance with at least one embodiment. In various embodiments, the load balancer, after an interval of time, determines if one or more clients can be reassigned to the VIC. In one example, if the average time of the clients assigned to the VIC are below the threshold, the load balancer determines one or more clients assigned to other hardware accelerators to reassign to the VIC.) It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Larrew, further incorporating Gupte in video/camera technology.
One would be motivated to do so to incorporate handling abilities, the handling abilities indicating time required to issue instructions for the predetermined event for each of the plurality of surveillants. This functionality will improve quality with predictable results.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR whose telephone number is (571) 272-3792. The examiner can normally be reached on Monday - Friday, 8 am to 5 pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William F Kraig, can be reached at (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NASIM N NIRJHAR/Primary Examiner, Art Unit 2896
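As a technical footnote to the claim-22 rejection, the threshold-based reassignment described in the Gupte passage quoted above (clients moved off an accelerator once their average processing time exceeds a threshold) can be sketched roughly as follows. All identifiers, values, and the two-accelerator setup are hypothetical, not drawn from Gupte's actual disclosure:

```python
# Rough sketch of the threshold-based reassignment described in the quoted
# Gupte passage: clients whose average processing time on the current
# accelerator exceeds a threshold are moved to a fallback accelerator.
# All identifiers and values here are hypothetical.

def rebalance(avg_time, assignment, threshold, fallback="GPU"):
    """Reassign any client whose average time exceeds the threshold."""
    for client, elapsed in avg_time.items():
        if elapsed > threshold:
            assignment[client] = fallback
    return assignment

assignment = {"cam1": "VIC", "cam2": "VIC", "cam3": "VIC"}
avg_time = {"cam1": 12.0, "cam2": 40.0, "cam3": 8.0}
rebalance(avg_time, assignment, threshold=33.0)  # only cam2 is moved
```

The analogy the examiner relies on is visible in the sketch: the per-client average time plays the role of the claimed "handling ability" (time required to respond), and the reassignment plays the role of choosing who handles the work.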

Prosecution Timeline

Jan 23, 2024
Application Filed
Aug 24, 2025
Non-Final Rejection — §102, §103
Nov 25, 2025
Response Filed
Jan 25, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598324
DEPTH DIFFERENCES IN PLACE OF MOTION VECTORS
2y 5m to grant Granted Apr 07, 2026
Patent 12593131
VELOCITY MATCHING IMAGING OF A TARGET ELEMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12593074
SYSTEMS AND METHODS OF BUFFERING IMAGE DATA BETWEEN A PIXEL PROCESSOR AND AN ENTROPY CODER
2y 5m to grant Granted Mar 31, 2026
Patent 12587662
METHOD, APPARATUS AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587628
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+18.7%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.
