DETAILED ACTION
This Office Action is in response to the Amendment filed on 10/28/2025.
In the filed response, claims 1, 5, 11, 12, 13, and 16 have been amended, where claims 1, 12, and 16 are independent claims. Further, claim 15 has been canceled, and new claims 19-20 have been added.
Accordingly, Claims 1-14 and 16-20 have been examined and are pending. This Action is made FINAL.
Response to Arguments
1. Applicant’s arguments with respect to the instant claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Please see examiner’s responses below.
2. Applicant argues (pgs. 9-10) that the cited prior art (notably Wada and Andersson) does not disclose the amended features of claim 1, and that a skilled person would not consider it obvious to combine the two references, especially as neither discloses “a plurality of spaced apart first trail markers... each first trail marker of the plurality of first trail markers indicates a discrete time instant and corresponding geo-position of the movable camera along the trail” nor “each first trail marker of the plurality of first trail markers other than the first trail marker associated with the result of the search query represent either one of future geo-positions and future discrete time instants or past geo-positions and past discrete time instants” as now required.
3. Applicant’s arguments are acknowledged and the examiner agrees that neither Wada nor Andersson teaches and/or suggests the newly amended features. However, based on updated searches, new prior art references were identified that are deemed relevant, in particular the works of Siracusano, Jr. US 8,887,050 B1, Hesterman US 2018/0322197 A1, and Feigh et al. US 2010/0286859 A1, hereinafter referred to as Siracusano, Hesterman, and Feigh, respectively. Siracusano, in particular, discloses a map display of positions of a moving subject (i.e. target) and a moving video surveillance device (e.g. airborne or vehicle-borne camera) during a surveillance operation (col. 1 lines 54-58). Indicia are positioned on said map display to indicate the location of the surveillance device at given successive points in time, indicated by triangles 209 (i.e. markers), to form a path/trail (fig. 13). Also shown is a path/trail for a selected target defined by circles 211 and 231 (fig. 15). Said map display further includes telemetry data available for all map locations (e.g. col. 2 lines 23-35). Further, Siracusano’s system can search for different objects (e.g. vehicles, etc.) in the video streams (col. 5 lines 41-60 and figs. 2 and 4) which can be selected (e.g. col. 12 lines 34-48). Although support for “at least one of the first trail markers is associated with the result of the search query” is not ‘explicit’, it is believed that
given the search and display capability of Siracusano, along with the ability to follow the selected target (figs. 13 and 15), it would be reasonable to conclude that a marker (e.g. circle 231) corresponds with the search for a given instant in time. Nonetheless, to show explicit support for this feature, Hesterman is relied on, where ¶0011, in particular, shows that existing location-based searches use a single position coordinate to put a marker on a map representing the start of a video. Hesterman also discloses user interactions with features on a map (e.g. ¶0078), including polyline paths that define the trajectories traversed by a videographer/UAV while recording video. Lastly, regarding support for “and each first trail marker of the plurality of first trail markers other than the first trail marker associated with the result of the search query represent either one of future geo-positions and future discrete time instants or past geo-positions and past discrete time instants”, Feigh is relied on, where a camera path is predicted for an unmanned aerial vehicle (UAV) to overlap a desired target in a surveillance area. Unlike Siracusano and Hesterman, Feigh does not explicitly refer to performing a search query; however, a user can indicate/identify desired surveillance targets or desired camera targets as a spatial constraint for a UAV flight plan (e.g. ¶0025). For e.g., in ¶0032, a user can select a target object from among a plurality of targets. Even though the term ‘search’ is not used, identifying one target from multiple targets can be construed as a search process, from which a flight plan can be determined. For e.g., a predicted camera path 600 overlying the map is shown in fig. 6, where waypoints (i.e. markers) are identified along the path. Point 302 is associated with the UAV’s current position, while subsequent points denote the predicted positions, i.e. “future geo-positions and future discrete time instants”. 
For these reasons, which are further elaborated on below, the works of Siracusano, Hesterman, and Feigh are deemed relevant. The examiner therefore respectfully submits that these prior art references, either alone or in combination, reasonably teach and/or suggest all of the disclosed features of amended claim 1 given their BRI. The same also applies to amended claims 12 and 16. Please refer to the office action below for details. Other prior art references worth noting include Waniguchi et al. US 2018/0101970 A1, Takahashi et al. US 2020/0413001 A1, Chu et al. US 2024/0212224 A1, and Anderson et al. US 2010/0205203 A1 (PTO-892).
4. Applicant’s response to the objections to claims 5, 11, and 13 is acknowledged. As such, the objections are withdrawn. Applicant’s response to the rejection of claim 15 under 35 U.S.C. 112(b) is also acknowledged. As such, the rejection is withdrawn.
5. The Examiner is available to discuss the matters of this office action to help move the Instant Application forward. Please refer to the conclusion to this office action regarding scheduling interviews.
6. Accordingly, Claims 1-14 and 16-20 have been examined and are pending.
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Siracusano, Jr. US 8,887,050 B1, in view of Hesterman US 2018/0322197 A1, and in further view of Feigh et al. US 2010/0286859 A1, hereinafter referred to as Siracusano, Hesterman, and Feigh, respectively.
Regarding claim 1, (Currently Amended) Given the BRI of the following limitations, Siracusano teaches and/or suggests “A video management system for use in a video surveillance system [See the video surveillance, storage, and retrieval system shown in fig. 1], comprising: a processing unit configured to receive a first video stream [See fig. 1 and for e.g. col. 3 lines 15-34 where video signals may be received and encoded (col. 3 lines 35-62)] and an associated first metadata stream [Metadata may be stored for the incoming video (e.g. col. 5 lines 1-5)], the first video stream being supplied by a first movable video camera configured for travelling through a surveillance area [A surveillance sensor such as a camera on a UAV or mobile land vehicle may be moved to follow a target (i.e. movable video camera). See for e.g. col. 10 lines 26-35 with reference to fig. 13. Video streams may be viewed as shown in for e.g. figs. 2 and 4], and a user interface window configured to present the first video stream [See the user interfaces in figs. 2 and 4 that enable a user to control and present the captured video images]; wherein said processing unit [Please note the hardware layout in figs. 1 and 9] is further configured to: - receive a search query for identifying at least one of a target object, a target activity and a target incident of the first video stream within the surveillance area [A search query can be performed according to for e.g. col. 5 lines 45-60. For e.g., searching for a type of vehicle. Also note col. 12 lines 34-48 regarding using the map viewer to conduct at least 3 different types of searches (e.g. raw search). Results from said searches can be superimposed on the map for viewing], - determine a first trail of the first movable video camera within the surveillance area [See fig. 13 with respect to a marked path of travel (multiple triangles 209) corresponding to the moving surveillance camera (e.g. 
UAV)] based on geographical position data from the first metadata stream [The path of said camera is determined by sampling the telemetry data of both the target and camera at regularly spaced intervals in time (e.g. 3 seconds). Col. 10 lines 46-56.] and corresponding time data of the first metadata stream [See above with respect to regularly spaced time intervals. Also please note the timelines found in for e.g. fig. 4 associated with telemetry (and attribute) data], - generate and display a geo-map of the surveillance area via the user interface window [See fig. 9 with respect to added mapping capability utilizing a map data server and a shared geo-reference server (col. 8 lines 45-67). Said mapping can be used directly upon live video feeds (col. 9 lines 15-17). Also refer to live map viewer in col. 10 with reference to fig. 13], - map the first trail onto the geo-map by providing a plurality of spaced apart first trail markers represented by a first type of visual symbols [See fig. 13. Refer to the multiple triangles 209 (i.e. plurality of spaced apart first trail markers with a visual symbol) connected by a line which correspond to the path of the moving surveillance camera. A similar path can also be found for the target (solid circles 211)], wherein each first trail marker of the plurality of first trail markers indicates a discrete time instant and corresponding geo-position of the moveable camera along the trail [Each triangle 209 along the path indicates a given instant in time at a particular position determined by the telemetry data. Also note claims 2 and 10 of Siracusano], wherein at least one of the first trail markers is associated with the result of the search query [The web viewer (fig. 15) includes a search section alongside a map that displays usual contour features and a series of markers (i.e. circles) defining the path of the selected target. Results of a search can be selected and superimposed on the map (col. 
12 lines 34-54) which suggests one of the markers is associated with the search performed. For explicit support, see Hesterman below], and each first trail marker of the plurality of first trail markers other than the first trail marker associated with the result of the search query represent either one of future geo-positions and future discrete time instants or past geo-positions and past discrete time instants, [The triangles or circles shown in fig. 13 depict the position of the camera or target, respectively, as a function of time. As such, positions earlier than the current position can be construed as past geo-positions or past discrete time instants. For further support, please see Feigh below] - monitor the plurality of first trail markers for user selection of one of the plurality of first trail markers, [Given the BRI of the limitation, see col. 11 lines 43-67 through col. 12 lines 1-30 with respect to a user locating the cursor at a desired point on the map. Since selecting different points on the map is possible, this suggests a marker(s) can also be selected. For explicit support, see Hesterman below] - respond to the selection of the one of the plurality of first trail markers by displaying the first video stream at a time instant that corresponds to a geographical position and discrete time instant of the selected first trail marker.” [The displayed map of Siracusano will respond/update according to the user’s selections noted above. Also see Hesterman below] Although Siracusano suggests “wherein at least one of the first trail markers is associated with the result of the search query”, the work of Hesterman from the same or similar field of endeavor is brought in to provide explicit support for this feature. [For e.g., ¶0011 shows that existing location-based searches use a single position coordinate to put a marker on a map representing the start of a video]. 
As also noted, Siracusano suggests “monitor the plurality of first trail markers for user selection of one of the plurality of first trail markers”; however, the work of Hesterman more explicitly addresses this feature. [¶0077 shows a user can click/touch a marker icon to zoom to a path along which the video was recorded. ¶0078 further shows the user can select a different part of a highlighted mapped polyline path, i.e. a trajectory traversed by a videographer or UAV while recording video (¶0048). Said polyline can have markers (¶0133)]. Further, Hesterman teaches “ - respond to the selection of the one of the plurality of first trail markers by displaying the first video stream at a time instant that corresponds to a geographical position and discrete time instant of the selected first trail marker.” [See ¶0077-¶0078 above. ¶0132 also shows that a user can hover over a polyline to get a video preview by referencing time, and that a point may be picked to change the video time to that geographic location within the video] Unlike Siracusano above, Hesterman does not teach surveillance; however, Hesterman does disclose maintaining a representation of a spatial region containing a plurality of trajectory records (i.e. video camera path polylines or camera routes) with each record comprising a sequence of time points and corresponding spatial coordinates (abstract). Further, these trajectory records can be searched by a user. For these reasons, Hesterman’s work is deemed relevant. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video surveillance system of Siracusano to add the teachings of Hesterman as above for developing more efficient and advantageous ways of finding video footage and creating data, video, or image collection missions with geospatial data (e.g. ¶0007). 
Lastly, to provide further support for “and each first trail marker of the plurality of first trail markers other than the first trail marker associated with the result of the search query represent either one of future geo-positions and future discrete time instants or past geo-positions and past discrete time instants”, the work of Feigh from the same or similar field of endeavor is brought in to provide further support [In ¶0032, a user can select a target 304 from among a plurality of targets. Although the term ‘search’ is not used, identifying one target from multiple targets can be construed as a search process. Based on the selection, a flight plan can be determined. For e.g. in fig. 6, ‘predicted’ camera path 600 overlying the map has a plurality of waypoints (i.e. markers) along the path. Point 302 is associated with the UAV’s current position, while subsequent points denote the predicted positions, i.e. future geo-positions and future discrete time instants] Recognizing Feigh’s teachings above in the context of video surveillance, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video surveillance system of Siracusano and the video management system of Hesterman (abstract), to add the teachings of Feigh as above for enabling a user to select a target for constructing a predicted camera path of a surveillance module such that it overlaps the desired target so as to garner intelligence about said target (e.g. ¶0003).
Regarding claim 2, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein the processing unit [See figs. 1 and 9] is further configured to: - receive a second video stream and an associated second metadata stream [Please refer to for e.g. fig. 4 which can show a second channel (e.g. channel 2) corresponding to a second received video, where said channel has corresponding timelines for telemetry, attributes, etc. (i.e. metadata). The number of available channels of live feeds depends on the logger (col. 10 lines 10-17)] Although Siracusano identifies a second trail of a plurality of markers, this trail does not correspond to a second movable camera as required, but instead is associated with a view of the selected target under surveillance (e.g. fig. 13). As such, the work of Hesterman from the same or similar field of endeavor is relied on to teach and/or suggest “the second video stream being from a second movable video camera configured for travelling through the surveillance area [See for e.g. ¶0030, with respect to multiple polylines (i.e. trajectory records) on a map corresponding to multiple cameras during an event. A polyline path depicts a trajectory/route taken by, for e.g., a camera mounted to a UAV/mobile device (e.g. ¶0048-¶0049). Although the term “surveillance” is not found as in Siracusano, the foregoing is in the context of surveying an area/event (e.g. ¶0031, ¶0070)], - determine a second trail of the second movable video camera within the surveillance area based on geographical position data of the second metadata stream and corresponding time data of the second metadata stream [Each trajectory record comprises a series of time points and corresponding spatial coordinates (e.g. 
abstract)], - map the second trail onto the geo-map using a plurality of spaced apart second trail markers represented by a second type of visual symbols [A visual representation of each mapped polyline path (e.g. figs. 1-2) can have markers (¶0133)], - monitor the plurality of second trail markers for user selection, [See ¶0077-¶0078 regarding user selection. ¶0132 also shows a user can hover over a polyline to get a video preview by referencing time, and that a point may be picked (i.e. selected) to change the video time to that geographic location within the video] - respond to the selection of second trail marker by displaying the second video stream that corresponds to a geographical position and time instant of the selected second trail marker.” [Same as above. Also note ¶0077-¶0078 where a user can click/touch a marker icon to zoom to a path along which the video was recorded and for selecting a different part of a highlighted mapped polyline path] The motivation for combining Siracusano and Hesterman has been discussed in connection with claim 1, above.
Regarding claim 4, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 2, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein each of first type of visual symbols comprise at least one of a rectangle, a triangle and a circle and each of second type of visual symbols comprise at least one of a rectangle, a triangle and a circle.” [Given the BRI of the foregoing limitation, Siracusano shows two paths in fig. 13, where a first path and a second path use triangles and circles, respectively.]
Regarding claim 5, (Currently Amended) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 2, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein each of the first type of visual symbols and each of the second type of visual symbols differ at least by respective colors.” [See col. 7 lines 48-51 regarding colors. Also note col. 10 lines 46-56 where the colors red and blue may be used to indicate the respective paths of travel]
Regarding claim 6, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein said processing unit is further configured to: - display the first video stream that corresponds to the at least one of a target object, a target activity and a target incident in the user interface window.” [See the video streams in the user interface windows shown in for e.g., figs. 2-4]
Regarding claim 7, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein the search query is a user defined search query or an automatically generated search query.” [See the user-defined searches described in for e.g. col. 5 lines 41-60]
Regarding claim 8, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein at least one trail marker of the plurality of trail markers corresponds to a geographical position determined from the first metadata stream.” [The path of the camera is determined by sampling telemetry data (i.e. metadata) of both the target and the camera at regularly spaced intervals in time (e.g. 3 seconds). See col. 10 lines 46-56.]
Regarding claim 9, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. However, Siracusano does not appear to address the feature of claim 9. Hesterman on the other hand from the same or similar field of endeavor is relied on to teach and/or suggest “wherein at least some of the plurality of trail markers are interpolated based on a geographical position determined from the first metadata stream.” [See Hesterman at ¶0079, for example, where an interpolated time may be calculated in the video at the map coordinates for the specific location] The motivation for combining Siracusano and Hesterman has been discussed in connection with claim 1, above.
Regarding claim 10, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “A video surveillance system comprising at least a first movable video camera [col. 10 lines 26-37 regarding a camera on a UAV or mobile land vehicle] and a video management system according to claim 1.” [See Siracusano’s system in for e.g. fig. 1]
Regarding claim 11, (Currently Amended) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 10, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “comprising one or more stationary video cameras [col. 10 lines 26-37 show that the camera can be stationary and aimed at the target by control of a remote operator] arranged at respective geo-positions of the surveillance area [Although not explicit, a stationary camera must be positioned at a designated location of the surveillance area in order to image said target under surveillance] to generate respective one or more video streams and associated metadata streams [Said stationary camera can generate a video stream of the target that can be displayed and viewed, along with associated metadata streams such as timelines, attributes, etc. (e.g. fig. 4)]; and wherein said video management system [Refer to figs. 1 and 9] is configured to receive the one or more video streams and associated metadata streams supplied by respective ones of the one or more stationary video cameras. [Refer to figs. 1 and 9, where data can be received, stored, and displayed]
Regarding claim 12, claim 12 is rejected under the same art and evidentiary limitations as determined for the system of Claim 1.
Regarding claim 13, (Currently Amended) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 12, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “further comprising the step of: h) display the first video stream that corresponds to the identified at least one of a target object, a target activity and a target incident in the user interface window.” [See the displays in a user interface window showing a target object as in for e.g. figs. 2 and 3]
Regarding claim 14, (Original) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 13, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “further comprising the steps of: - store the first video stream and the associated first metadata streams in a video data repository [Storing encoded video signals (col. 3 lines 22-34) in a server (col. 3 lines 22-34 and lines 45-50). Also reference the system layout in figs. 1 and 9] and in a metadata repository [See for e.g. data core server (col. 3 lines 54-56) for storing data associated with the video signals above], respectively, subsequent to receiving, at the video management system [Figs. 1 and 9], the first video stream and the associated first metadata stream, the first video stream being supplied by the first movable video camera [A surveillance sensor such as a camera on a UAV or mobile land vehicle may be moved to follow a target (i.e. movable video camera). See for e.g. col. 10 lines 26-35 with reference to fig. 13. Video streams may be viewed as shown in for e.g. figs. 2 and 4]; - retrieve the first video stream and associated first metadata stream [See col. 1 lines 34-39 and lines 49-67 regarding a video surveillance storage and ‘retrieval’ system that can be used for selecting a target object as further noted below] to identify the at least one of a target object, a target activity and a target incident according to searching the at least one of the first video stream and the first metadata stream to identify the at least one of the target object, the target activity and the target incident at the geo-position and the time instant within the surveillance area [Siracusano’s system can search for different objects (e.g. vehicles, etc.) in the video streams (col. 5 lines 41-60 and figs. 2 and 4) which can be selected (e.g. col. 12 lines 34-48). Timelines (fig. 4), which include metadata, facilitate searches. Also note col. 
12 lines 34-48 (fig. 15) regarding conducting searches from which results can be superimposed on the map for viewing], wherein the search is performed based on the search query [Please see above citations regarding search queries], - map the first trail onto the geo-map corresponding to the retrieved first video stream and the associated first metadata stream [Please refer to the paths depicted in figs. 13 and 15 which are shown displayed on the display map. Said map display further includes telemetry data available for all map locations (e.g. col. 2 lines 23-35)] wherein the map comprises one or more first trail markers [Trail markers are shown as triangles representing the video surveillance device/camera and circles representing the target subject under surveillance] representing future time instants or time periods relative to the time instant of the identified at least one of a target object, a target activity and a target incident.” [Regarding future time instants or time periods, please see Feigh below for support] Although Siracusano’s teachings are deemed relevant given the BRI of the above features, Siracusano (and Hesterman) does not appear to explicitly address trail markers “representing future time instants or time periods relative to the time instant of the identified at least one of a target object, a target activity and a target incident.” As such, the work of Feigh from the same or similar field of endeavor is brought in to teach and/or suggest these features.
[In ¶0032, a user can select a target 304 from among a plurality of targets. Although the term ‘search’ is not used, identifying one target from multiple targets can be construed as a search process. Based on the selection, a flight plan can be determined. For e.g. in fig. 6, ‘predicted’ camera path 600 overlying the map has a plurality of waypoints (i.e. markers) along the path. Point 302 is associated with the UAV’s current position, while subsequent points denote the predicted positions, i.e. future geo-positions and future discrete time instants] The motivation for combining Siracusano, Hesterman, and Feigh has been discussed in connection with claim 1, above.
Regarding claim 16, claim 16 is rejected under the same art and evidentiary limitations as determined for the system of Claim 1.
Regarding claim 17, claim 17 is rejected under the same art and evidentiary limitations as determined for the system of Claim 8.
Regarding claim 18, claim 18 is rejected under the same art and evidentiary limitations as determined for the system of Claim 9.
Regarding claim 19, (New) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano and Hesterman however do not appear to address the features of claim 19. Feigh on the other hand from the same or similar field of endeavor teaches and/or suggests “wherein the at least one of the first trail markers associated with the result of the search query is visually differentiated from the trail markers not associated with the result of the search query.” [Marker 302 (e.g. figs. 3, 5, and 6) is a graphical representation of the UAV at a location on map 300 (e.g. ¶0029). This appears to differ from the markers used for identifying the series of waypoints along the flight path] The motivation for combining Siracusano, Hesterman, and Feigh has been discussed in connection with claim 1, above.
Regarding claim 20, (New) Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Siracusano further teaches and/or suggests “wherein a direction of travel of the movable camera along the trail is indicated by the plurality of first trail markers.” [See fig. 13 in Siracusano. The camera path denoted by markers 209 indicates or at least suggests a direction of travel in the map view shown. Feigh also provides support in for e.g. fig. 6, where the waypoints along camera path 600 denote the direction of the UAV sensor]
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Siracusano, in view of Hesterman, further in view of Feigh, and further in view of Takahashi et al. US 2020/0413001 A1, hereinafter referred to as Takahashi.
Regarding claim 3, Siracusano, Hesterman, and Feigh teach and/or suggest all the limitations of claim 2, and are analyzed as previously discussed with respect to that claim. Siracusano, Hesterman, and Feigh, however, do not address the features of claim 3. Takahashi, on the other hand, from the same or similar field of endeavor, teaches and/or suggests “wherein said processing unit is further configured to: present the first video stream in a first tile of the user interface window, present the second video stream in a second tile of the user interface window [Fig. 16, for example, shows a tiled screen display depicting moving camera data corresponding to moving cameras M3 and M4], and optionally present the geo-map in a third tile of the user interface window.” [The term ‘optionally’ is taken to mean the geo-map does not have to be displayed in a third tile of the user interface window. If not displayed, then Takahashi’s teachings are deemed relevant. Please note, fig. 16 does have a third tile; however, this corresponds to fixed camera (F5) data. Fig. 10 shows a tiled arrangement that includes a map display region; however, it appears that data from only ‘one’ moving camera is shown rather than from both moving cameras. Lastly, fig. 15 depicts data from both moving cameras (M3 and M4) on a map; however, the map does not appear to be tiled. Takahashi does not limit the display modes to those disclosed (e.g. ¶0139); however, it is not clear whether these teachings would reasonably disclose the claimed tiled arrangement that includes the geo-map in a third tile] Recognizing the term ‘optionally’, Takahashi’s teachings above are deemed relevant. As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the systems of Siracusano, Hesterman, and Feigh to add the video surveillance system of Takahashi as above in order to help improve managing captured image data (e.g. ¶0009).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR., whose telephone number is (571)270-0615. The examiner can normally be reached Mon - Fri 10 am - 7 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD A HANSELL JR./Primary Examiner, Art Unit 2486