Prosecution Insights
Last updated: April 19, 2026
Application No. 18/394,202

DEVICE, METHOD AND SYSTEM FOR ANALYZING VIDEO FROM CAMERAS FOR TRACKING AND ACCESS AUTHORIZATION

Non-Final OA — §103
Filed: Dec 22, 2023
Examiner: DWYER, MATTHEW JAMES
Art Unit: 2649
Tech Center: 2600 — Communications
Assignee: Motorola Solutions Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 19 (currently pending, across all art units)

Statute-Specific Performance

§103: 62.8% (+22.8% vs TC avg)
§102: 30.2% (-9.8% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 0 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/22/2023 has been considered by the examiner.

Claim Objections

Applicant is advised that should claim 4 be found allowable, claim 14 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-8, 11-13, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (US 2019/0087662 A1, hereinafter Zhao) in view of Ho et al. (US 11436827 B1, hereinafter Ho).

Regarding claim 1, Zhao teaches a method comprising: receiving, at one or more computing devices ([Figure 1, 107] and [0020] the tablet 107 may be any wireless computing device, capable of [0047] receiving information), an indication that a quick access camera mode (QAC) has been enabled at a communication device ([0035] the mobile computing device may periodically or on-demand discover the availability of such cameras via a local direct-mode wireless broadcast of a discovery packet, i.e. on-demand discovery read as a "quick access camera mode (QAC)" being used via communication device 107), the one or more computing devices communicatively coupled with a plurality of cameras ([Figure 3, step 302] and [0051] a plurality of available cameras were identified in step 302, i.e. the computing device 107 communicating with a plurality of cameras) from different camera systems ([0033] In method 300, the one or more available cameras may be, for example, a vehicular camera such as vehicular video camera 134 of FIG. 1, a body camera affixed to another user, a fixed camera coupled to a nearby utility pole, a camera equipped to a nearby ATM machine, a camera-equipped drone, or some other camera, i.e. a plurality of cameras from different camera systems); determining, at the one or more computing devices ([Figure 1, 107]), a location of the communication device ([0033] describes incorporating a current location of the mobile communication device 107 and/or the user 102 of the device 107); establishing, via the one or more computing devices ([Figure 1, 107]), a geofence around the location of the communication device, the geofence encompassing two or more cameras of the plurality of cameras ([0033 and 0037] polling available cameras in a vicinity of the device or user, vicinity read as geofence, i.e. a geofence/boundary has been determined for said computing device 107, as well as the plurality of cameras within said geofence/boundary); configuring, via the one or more computing devices, a first camera within the geofence to be accessible by the communication device, and first current video from the first camera ([0047] at step 308 in the example set forth in FIG. 3, the mobile computing device detects an indication that a particular one of the available cameras has recorded an unexpected movement or object in the area surrounding the user, in which the mobile computing device is receiving video streams from one or more of the available cameras, and [0020] allowing the user 102 to interact with content provided on the display screen, i.e. the device 107 allows the user 102 to watch current video from the first camera, and therefore a first camera within the geofence is accessible by the communication device); and first historical video from the first camera stored at one or more video databases ([0025] Mobile computing device 200 may be the device 107 of FIG.
1, the mobile computing device 200 may also include an input unit (e.g., keypad, pointing device, touch-sensitive surface, etc.) 206 and a display screen 205, capable of [0020] running an operating system for the user, and the ability to have a type of video history database, i.e. the computing device 107 may have access to one or more video databases); providing, via the one or more computing devices, the communication device with access to: the second current video from the second camera ([0047] the mobile computing device 107 is receiving video streams from one or more of the available cameras, and [0020] allowing the user 102 to interact with content provided on the display screen, i.e. a second current video from the second camera); and second historical video from the second camera stored at the one or more video databases, the second camera associated with a second camera system ([Figure 3, step 306] identifies that a second camera may be associated, and may have historical video stored in [0020] a type of video history database).

Zhao differs from the claimed invention in not specifically teaching: in response to a predetermined user gesture detected in first images from the first camera, the first camera associated with a first camera system; in response to detecting the predetermined user gesture, generating, from the first current video from the first camera, a feature identifier of a user of the communication device; and in response to detecting the feature identifier in second current video from a second camera, of the plurality of cameras, within the geofence.

However, Ho teaches [abstract and col. 5 lines 5-10] a tracking/security system for tracking movements of an object or person in an area using a plurality of cameras. Ho further teaches: in response to a predetermined user gesture detected in first images from the first camera, the first camera associated with a first camera system; in response to detecting the predetermined user gesture ([col. 2 lines 0-5] the first recognized object, recognized object read as user, is detected in a first image captured by a first camera of the plurality of cameras, and recorded information includes a first location, first location read as geofence, of the first recognized object. The user may be originally detected by [col. 5 lines 40-50] a gesture sensor within the computing device 510/551 within any camera 550 as depicted in FIG. 2 and 3, or other sensors that are usable by a user to provide input to computing device 510, i.e. a first camera associated with a first camera system within the geofence location may detect a predetermined user gesture, allowing it to be accessible), generating, from the first current video from the first camera ([col. 2 lines 0-5 and col. 5 lines 40-50], as described in the previous element), a feature identifier of a user of the communication device ([col. 4 lines 60-67] tracker 401 is connected to an object identification system (not shown) which has a database of object features and identities, such as facial features and personal identifications, license plates and car registrations, name tag labels and name registration, or other feature and identification records, and [col. 7 lines 60-67] recognizer 331, through image analysis, recognizes one or more features 418 of object 121, object 121 read as user of the communication device, i.e. the ability to generate, from current video from the first camera, a feature identifier of a user of the communication device); and in response to detecting the feature identifier in second current video from a second camera, of the plurality of cameras, within the geofence ([Figure 4b] depicts the first camera 321, and the second camera 323, both including computing device 510/551 (not shown) which is connected to recognizer 331 and is therefore capable of (col. 5 lines 40-50) detecting the predetermined user gestures or [col.
4 lines 60-65] features 418, with both cameras via first image 771 and second image 773).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include a predetermined user gesture and a feature identifier of a user, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.

Regarding claim 2, Zhao teaches the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication).

Zhao differs from the claimed invention in not specifically teaching: in response to detecting the predetermined user gesture in both the first images from the first camera and second images from the second camera, providing, to the communication device, video from, or respective indications of, both the first camera and the second camera, to enable selection of the first camera or the second camera as an initial camera to which the communication device is provided access to respective current video and respective historical video.

However, Ho teaches in response to detecting the predetermined user gesture in both the first images from the first camera and second images from the second camera ([Figure 4b] depicts the first camera 321, and the second camera 323, both including computing device 510/551 (not shown) which is connected to recognizer 331 and is therefore capable of (col. 5 lines 40-50) detecting the predetermined user gestures, with both cameras via first image 771 and second image 773, i.e. the detection of the predetermined user gesture in the first and second camera images), providing, to the communication device, video from, or respective indications of, both the first camera and the second camera ([Figure 2] Computing Device 510, read as the communication device, which is capable of receiving videos from both the first and second cameras, and displaying them in video form to a user via Output Module 515), to enable selection of the first camera or the second camera as an initial camera to which the communication device is provided access to respective current video and respective historical video ([col. 12 lines 40-44] the output device 403 may include one or more of a speaker for audio output, a display for visual output or video output, or a remote device for alert notification, and [col. 14 lines 19-24] output device 403 to display a video or animation using images captured by camera 323 of location 632 and camera 321 of location 612, i.e. the output device 403 may provide access to the first camera and the second camera).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include a predetermined user gesture and provide respective historical video, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.
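To make the mapped claim-1 limitation concrete, the geofence step at the heart of the rejection can be sketched as a radius test around the device's reported location. This is an editor's illustrative sketch only, assuming a planar coordinate model; the camera names, coordinates, and radius are hypothetical and are not drawn from Zhao, Ho, or the application.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Camera:
    cam_id: str
    system: str   # e.g. vehicular, body-worn, fixed (cf. Zhao [0033])
    x: float      # planar coordinates; a real system would use lat/lon
    y: float

def cameras_in_geofence(cameras, device_xy, radius):
    """Return the cameras lying inside a circular geofence centered on the device."""
    dx, dy = device_xy
    return [c for c in cameras if hypot(c.x - dx, c.y - dy) <= radius]

cams = [
    Camera("cam-1", "vehicular", 1.0, 1.0),
    Camera("cam-2", "fixed", 8.0, 0.0),
    Camera("cam-3", "body-worn", 2.0, -1.5),
]
# Geofence of radius 3 around the device at the origin:
inside = cameras_in_geofence(cams, device_xy=(0.0, 0.0), radius=3.0)
# cam-1 and cam-3 fall inside; cam-2 is beyond the fence
```

Only the cameras returned by such a test would then be candidates for the "configuring ... to be accessible" step recited in claim 1.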
Regarding claim 3, Zhao teaches automatically authorizing ([Figure 3] step 308 automatically authorizes the user due to the authorization in step 306), via the one or more computing devices, access by the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication) to respective video associated with all of the plurality of cameras located within the geofence ([0012] for unexpected movements or objects; responsive to receiving the instruction, enabling the imaging device and capturing images or video of an area surrounding the user; and analyzing the captured images or video for unexpected movements or objects and, responsive to detecting a first unexpected movement or object in the captured images or video, transmitting, via the second wireless transceiver, the indication to the mobile computing device, i.e. the ability to detect an unexpected movement, inside of the area surrounding the user, read as geofence, and allow access to respective video associated with all of the plurality of cameras).

Zhao differs from the claimed invention in not specifically teaching in response to detecting the predetermined user gesture in the first images from the first camera. However, Ho teaches as such ([Figure 4b] depicts the first camera 321, connected to recognizer 331 capable of [col. 5 lines 40-50] detecting the predetermined user gesture, including first image 771, i.e. the detection of the predetermined user gesture in the first camera images).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include a predetermined user gesture, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.
Regarding claim 5, Zhao teaches the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication).

Zhao differs from the claimed invention in not specifically teaching: determining a path of the user relative to the geofence using locations of the communication device, the locations one or more of: received from the communication device; and determined from respective video from the first camera and the second camera; and extending the geofence, based on the path, to encompass one or more further cameras of the plurality of cameras.

However, Ho teaches determining a path of the user relative to the geofence using locations of the communication device ([col. 3 lines 44-50] tracker 401 creates path 630 of the recognized object, recognized object read as user), the locations one or more of: received from the communication device; and determined from respective video from the first camera and the second camera; and extending the geofence, based on the path, to encompass one or more further cameras of the plurality of cameras ([col. 3 lines 44-50 and col. 4 lines 1-10] the tracker 401 creates and updates a path 630 to store locations of tracking object 601 as captured by the plurality of cameras, and the tracker 401 has the ability to [col. 4 lines 40-50 and col. 11 lines 20-40] detect the tracking object 601 in new locations and then add said locations to a combined location, i.e. the ability to extend the geofence).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include the path of a user and the ability to extend the geofence, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.
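The claim-5 limitation of extending the geofence based on the user's path can be sketched as growing a circular fence until it covers every tracked location. This is an illustrative editor's sketch under the same planar-coordinate assumption as above; the function name, path points, and margin are hypothetical, not taken from Ho's tracker 401.

```python
from math import hypot

def extend_geofence(center, radius, path, margin=1.0):
    """Grow a circular geofence so it covers every point on the user's path.

    `path` is a sequence of (x, y) locations, e.g. reported by the
    communication device or derived from camera video; `margin` adds
    slack so cameras just beyond the latest fix are also encompassed.
    """
    cx, cy = center
    for px, py in path:
        dist = hypot(px - cx, py - cy)
        radius = max(radius, dist + margin)
    return radius

# The user walks away from the original fence center along the x axis:
new_radius = extend_geofence(center=(0.0, 0.0), radius=3.0,
                             path=[(1.0, 0.0), (4.0, 0.0), (6.0, 0.0)])
# farthest fix at distance 6.0 plus the 1.0 margin -> radius 7.0
```

Cameras that fall inside the enlarged radius would then become the "one or more further cameras" recited in claims 5-8.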
Regarding claim 6, Zhao teaches the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication) and to further provide the communication device with access to: the respective current video from the one or more further cameras of the plurality of cameras ([0047] at step 308 in the example set forth in FIG. 3, the mobile computing device detects an indication that a particular one of the available cameras has recorded an unexpected movement or object in the area surrounding the user, in which the mobile computing device is receiving video streams from one or more of the available cameras, and [0020] allowing the user 102 to interact with content provided on the display screen, i.e. the device 107 allows the user 102 to watch current video from the first camera); and respective historical video from the one or more further cameras, of the plurality of cameras, stored at one or more video databases ([Figure 3, step 306] identifies a plurality of cameras associated with said communication device, which may have historical video stored in [0020] a type of video history database).

Zhao differs from the claimed invention in not specifically teaching: determining a path of the user relative to the geofence; extending the geofence, based on the path, to encompass one or more further cameras of the plurality of cameras; and searching for the feature identifier in respective current video from the one or more further cameras, of the plurality of cameras.

However, Ho teaches determining a path of the user relative to the geofence (Figure 6, path 630); extending the geofence (the tracker 401 has the ability to [col. 4 lines 40-50 and col. 11 lines 20-40] detect the tracking object 601 in new locations and then add said locations to a combined location, i.e. the ability to extend the geofence), based on the path, to encompass one or more further cameras of the plurality of cameras (FIG. 6 shows object 601 is captured by the plurality of cameras, based on the path 630, i.e. the path may encompass a further camera); and searching for the feature identifier in respective current video from the one or more further cameras, of the plurality of cameras (the current video from a plurality of cameras may use [col. 4 lines 60-67 and col. 7 lines 1-9] the identification system to search for said feature identifier).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include the path of a user, the ability to extend the geofence, and a feature identifier, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.

Regarding claim 7, Zhao teaches providing the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication) with access to: the third current video from the third camera; and third historical video from the third camera stored at the one or more video databases ([Figure 3, step 306] identifies that a third camera may be associated with current video, and may have historical video stored in [0020] a type of video history database).

Zhao differs from the claimed invention in not specifically teaching: determining a path of the user relative to the geofence; extending the geofence, based on the path, to encompass at least a third camera of the plurality of cameras; and in response to detecting the feature identifier in third current video from the third camera.
However, Ho teaches determining a path of the user relative to the geofence (Figure 6, path 630); extending the geofence (the tracker 401 has the ability to [col. 4 lines 40-50 and col. 11 lines 20-40] detect the tracking object 601 in new locations and then add said locations to a combined location, i.e. the ability to extend the geofence), based on the path, to encompass at least a third camera of the plurality of cameras (FIG. 6 shows object 601 is captured by the plurality of cameras, which may include a third camera, based on the path 630); and in response to detecting the feature identifier in third current video from the third camera (the third current video from the third camera may use [col. 4 lines 60-67 and col. 7 lines 1-9] the identification system).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include the path of a user, the ability to extend the geofence, and a feature identifier, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.

Regarding claim 8, Zhao teaches automatically authorizing ([Figure 3] step 308 automatically authorizes the user due to the authorization in step 306), via the one or more computing devices ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication), access by the communication device to respective video associated with all of the plurality of cameras located within the geofence as extended ([0047] at step 308 in the example set forth in FIG. 3, the mobile computing device detects an indication that a particular one of the available cameras has recorded an unexpected movement or object in the area surrounding the user, in which the mobile computing device is receiving video streams from one or more of the available cameras, and [0020] allowing the user 102 to interact with content provided on the display screen, i.e. the device 107 allows the user 102 to watch current video from the plurality of cameras within the geofence).

Zhao differs from the claimed invention in not specifically teaching determining a path of the user relative to the geofence and extending the geofence. However, Ho teaches determining a path of the user relative to the geofence (Figure 6, path 630); and extending the geofence (the tracker 401 has the ability to [col. 4 lines 40-50 and col. 11 lines 20-40] detect the tracking object 601 in new locations and then add said locations to a combined location, i.e. the ability to extend the geofence).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include the path of a user and the ability to extend the geofence, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.
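The feature-identifier search that claims 6-8 recite, and that the rejection maps to Ho's recognizer 331 and identification system, can be sketched as comparing a stored feature vector against detections from each camera newly encompassed by the geofence. The vectors, threshold, and helper names below are hypothetical illustrations by the editor, not Ho's actual implementation.

```python
from math import sqrt

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_user(feature_id, detections_by_camera, threshold=0.5):
    """Return IDs of cameras whose current video contains the user.

    `feature_id` is the feature vector generated from the first camera's
    video; `detections_by_camera` maps camera ID -> list of feature
    vectors extracted from that camera's current frames.
    """
    hits = []
    for cam_id, feats in detections_by_camera.items():
        if any(euclidean(feature_id, f) < threshold for f in feats):
            hits.append(cam_id)
    return hits

user_features = [0.2, 0.9, 0.4]
detections = {
    "cam-2": [[0.8, 0.1, 0.3]],     # a different person
    "cam-3": [[0.21, 0.88, 0.41]],  # a close match to the user
}
matches = find_user(user_features, detections)
# only cam-3 passes the similarity test
```

A match in a further camera's current video would then trigger the access grant the claims recite for that camera's current and historical video.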
Regarding claim 11, the claimed limitations are rejected for the same reasons as set forth in claim 1; Zhao further teaches a communication interface ([Figure 1, 107] and [0020] device 107 may contain a display screen for displaying a user interface to an operating system and one or more applications running on the operating system); and a computer-readable storage medium having stored thereon program instructions ([0058] an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor)).

Regarding claim 12, the claimed limitations are rejected for the same reasons as set forth in claim 2. Regarding claim 13, the claimed limitations are rejected for the same reasons as set forth in claim 3. Regarding claim 15, the claimed limitations are rejected for the same reasons as set forth in claim 5. Regarding claim 16, the claimed limitations are rejected for the same reasons as set forth in claim 6. Regarding claim 17, the claimed limitations are rejected for the same reasons as set forth in claim 7. Regarding claim 18, the claimed limitations are rejected for the same reasons as set forth in claim 8.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (US 2019/0087662 A1, hereinafter Zhao) and Ho et al. (US 11436827 B1, hereinafter Ho) as applied in the claims above, and further in view of Dromerhauser et al. (US 2020/0169834 A1, hereinafter Dromerhauser).

Regarding claim 4, Zhao teaches the first camera within the geofence to be accessible ([Figure 3, step 306]) by the communication device ([Figure 1, 107] and [0014 and 0020] 107 may be any wireless computing device and may have a user interface for communication).
Zhao differs from the claimed invention in not specifically teaching in response to the predetermined user gesture detected in the images from the first camera. However, Ho teaches as such ([col. 2 lines 0-5] the first recognized object, recognized object read as user, is detected in a first image captured by a first camera of the plurality of cameras via [col. 5 lines 40-50] a gesture sensor, or other sensors that are usable by a user to provide input to computing device 510).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include a predetermined user gesture, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.

The combination of Zhao and Ho differs from the claimed invention in not specifically teaching receiving, from the communication device, authorization credentials. However, Dromerhauser teaches [abstract] a security system, which includes a network storage device operatively connected to the one or more communications networks for storing feeds [0005] retrieved from security devices, such as a camera. Dromerhauser further teaches receiving, from the communication device, authorization credentials ([0024] the video stored on the server may be requested for review, playback, enhancement (e.g., adding time stamps, location information, etc.), or on-demand review at any time with authorized credentials (e.g., username, password, granted access rights, etc.), and [0054] FIG. 2 shows digital video recorder 200 may communicate with communications network 220, and wireless devices 222, 224, and 226, i.e. with proper authorized credentials, a wireless device 222, 224, and 226 may access cameras).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhao and Ho to include receiving authorization credentials, as taught by Dromerhauser, in order to improve camera security and [abstract] help aggregate or combine available information for the security purposes of a user.

Regarding claim 14, the claimed limitations are rejected for the same reasons as set forth in claim 4, further in view of the claim objection and examiner note described above.

Claims 9, 10, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (US 2019/0087662 A1, hereinafter Zhao) and Ho et al. (US 11436827 B1, hereinafter Ho) as applied in the claims above, and further in view of Allen (US 2023/0199462 A1, hereinafter Allen).

Regarding claim 9, Zhao teaches providing, to the communication device ([Figure 1, 107] and [0020] device 107 may contain a display screen for displaying/providing a user interface to an operating system and one or more applications running on the operating system).

The combination of Zhao and Ho differs from the claimed invention in not specifically teaching an electronic map showing: respective locations of the two or more cameras of the plurality of cameras; and a floorplan of at least a portion of a premises associated with the two or more cameras, wherein the respective locations of the two or more cameras are provided as selectable icons that, when selected at the communication device, cause an indication of selection to be received at the one or more computing devices, which responsively provides access to respective historical video of an associated camera. However, Allen teaches a method for [abstract] supplying surveillance video corresponding to an enhanced geospatial location for an emergency service call request.
Allen further teaches an electronic map ([Figure 5]) showing: respective locations of the two or more cameras of the plurality of cameras; and a floorplan of at least a portion of a premises associated with the two or more cameras ([0027 and 0041-0042] FIG. 5 depicts a floorplan with respective locations of the two or more cameras of the plurality of cameras), wherein the respective locations of the two or more cameras are provided as selectable icons ([0042] in FIG. 5 an interactive version 500 of the floorplan 400 of FIG. 4 is provided to the user 224 registering the site 204. Using the interactive version 500, the user 224 can place camera icons 502 or links representing cameras located at the actual physical site 204 geospatial location, i.e. selectable icons) that, when selected at the communication device, cause an indication of selection to be received at the one or more computing devices, which responsively provides access to respective historical video of an associated camera ([0013] access to the surveillance video is provided as selectable icons or links on the map and are [0042] remotely accessible surveillance video 206 as indicated by the camera icon 502).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhao and Ho to include an electronic map, a floorplan, and selectable icons that provide access to the respective cameras, as taught by Allen, in order to improve the ability to [abstract] enable the supply of surveillance video corresponding to an enhanced geospatial location and [0006] decrease delays and unnecessary danger for victims and first responders.

Regarding claim 10, Zhao teaches providing, to the communication device ([Figure 1, 107] and [0020] device 107 may contain a display screen for displaying/providing a user interface to an operating system and one or more applications running on the operating system).
Zhao differs from the claimed invention in not specifically teaching the geofence being extended to include one or more further cameras, of the plurality of cameras. However, Ho teaches this limitation ([Figure 1, camera 329]: another camera, which may enter the geofence via the object 123 entering location 369, i.e., having tracker 401 [col. 4 lines 10-50 and col. 11 lines 20-40] update the geofence location to include location 369).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhao to include the ability to expand the geofence, as taught by Ho, in order to [abstract and col. 1 lines 15-17] allow the location of a device/object/user to be tracked within a geofence and help satisfy [col. 1 lines 53-54] the need for a non-intrusive location tracking system.

The combination of Zhao and Ho differs from the claimed invention in not specifically teaching an electronic map showing: respective locations of one or more cameras of the plurality of cameras; and a floorplan of at least a portion of a premises associated with the one or more cameras; and an updated electronic map showing: the respective locations of the one or more cameras and the one or more further cameras, of the plurality of cameras; and an updated floorplan of at least an updated portion of the premises associated with the one or more cameras and the one or more further cameras.

However, Allen teaches an electronic map ([Figure 5]) showing: respective locations of one or more cameras of the plurality of cameras; and a floorplan of at least a portion of a premises associated with the one or more cameras ([0027 and 0041-0042]: FIG. 5 depicts a floorplan with respective locations of the two or more cameras of the plurality of cameras); and an updated electronic map showing: the respective locations of the one or more cameras and the one or more further cameras, of the plurality of cameras; and an updated floorplan of at least an updated portion of the premises associated with the one or more cameras and the one or more further cameras ([0042]: using the interactive version 500 depicted in FIG. 5, the user 224 can place camera icons 502 or links representing cameras located at the actual physical site 204 geospatial location, i.e., updating the electronic map to show the locations of cameras added to the geofence location, providing an updated portion of the premises associated with current and further cameras).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhao and Ho to include an electronic map, a floorplan, selectable icons that provide access to the respective cameras, and an updated electronic map, as taught by Allen, in order to improve the ability to [abstract] enable the supply of surveillance video corresponding to an enhanced geospatial location and [0006] decrease delays and unnecessary danger for victims and first responders.

Regarding claim 19, the claimed limitations are rejected for the same reasons as set forth in claim 9.

Regarding claim 20, the claimed limitations are rejected for the same reasons as set forth in claim 10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Li, Zenjie et al. (2024). Computer-implemented method, non-transitory computer readable storage medium storing a computer program, and system for video surveillance (US 2024/0119737 A1). Filed 2023-10-09.
Discloses a video surveillance system relating to optimizing the use of computer resources and utilizing an analytics application fed by video streams of a plurality of video cameras. (abstract and [0002])

Seidman, Glenn R. et al. (2021). System and method for identity discovery (US 2021/0099677 A1). Filed 2020-09-29. Discloses utilizing a plurality of cameras to create trackable data structures including a movement path, relevant image streams, time and location. (abstract)

Teichman, Alexander William et al. (2018). Speech interface for vision-based monitoring system (US 2018/0246964 A1). Filed 2017-02-28. Discloses a vision-based monitoring system that communicates with the user. The system includes a plurality of sites, an identifier, a database, and user input. ([0002-0003])

Stewart, James Edward et al. (2018). Automatic detection of zones of interest in a video (US 2018/0232592 A1). Filed 2017-02-13. Discloses remote video surveillance which includes automatic detection and definition of zones of interest in live and/or saved video. (abstract and [0003])

Renkis, Martin A. (2015). Systems and methods for an automated cloud-based video surveillance system (US 2015/0381949 A1). Filed 2015-09-04. Discloses a cloud-based video surveillance system accessible via a computing device. (abstract)

Ganesh, Jaikumar et al. (2015). Sending geofence-related heuristics to multiple separate hardware components of mobile devices (US 2015/0065161 A1). Filed 2013-09-05. Discloses a technique for geofencing-related heuristics for computing devices with a plurality of sensors based on one or more heuristic inputs. (abstract and [0004])

Bocking, Andrew Douglas et al. (2009). Disabling operation of features on a handheld mobile communication device based upon location (US 2009/0322890 A1). Filed 2006-09-01. Discloses implementing subsystem or functional aspect restrictions on a wireless handheld communication device, for controlling a camera module on the device using restrictions within defined geographical boundaries. (abstract and [0001])

Nishimura, Shoji (2023). Image processing apparatus, image processing method, and non-transitory computer-readable medium (US 2023/0214024 A1). Filed 2020-05-29. Discloses a processing unit for processing a video generated by a surveillance camera, and determines whether a person included in the video performs a series of gestures. (abstract)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW JAMES DWYER, whose telephone number is (571) 272-5121. The examiner can normally be reached M-Th, 6:15 a.m. to 5:30 p.m. EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuwen Pan, can be reached at (571) 272-7855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MATTHEW JAMES DWYER/
Examiner, Art Unit 2649

/GEORGE ENG/
Supervisory Patent Examiner, Art Unit 2699

Prosecution Timeline

Dec 22, 2023: Application Filed
Jan 22, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
