DETAILED ACTION
Claims 1-14 and 21-26 are pending in this application. Claims 1, 3, 5, 9-11, 14, and 21 are amended. Claims 15-20 are canceled. Claims 22-26 are added.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Information Disclosure Statements submitted on 8/19/2022, 5/11/2023, 5/12/2023, and 4/3/2024 have been considered by the examiner.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7/31/2025 has been entered.
Response to Arguments
Claim interpretation under 35 U.S.C. 112(f)
Applicant's arguments filed 11/12/2025 have been fully considered. With respect to the arguments made regarding the 112(f) invocation for “interface generator” of claims 1 and 3, the arguments are persuasive in view of the amended claims and therefore the claim interpretations have been withdrawn.
Further, the arguments made regarding the claim interpretations of “Reacquisition portion” and “Reacquisition module” in claim 7 have been fully considered, but they are not persuasive. Taking “Reacquisition module” as an example, the “Reacquisition module” is configured to determine the cameras that have identifiers matching a tracked object, which constitutes a generic placeholder modified by functional language, meeting Prongs A and B of the analysis for invoking 35 U.S.C. 112(f) (see MPEP § 2181, section I, subsections (A) and (B)). Further, the “Reacquisition module” is not modified by sufficient structure to perform the claimed acts, which meets Prong C of the analysis. Additionally, “reacquisition portion” of claim 7 follows the same logic as described above for meeting the three prongs of the analysis for invoking 35 U.S.C. 112(f). Therefore, for at least the reasons above, the examiner respectfully maintains the claim interpretations under 35 U.S.C. 112(f).
35 U.S.C. 103
Applicant's arguments filed 11/12/2025 have been fully considered, but they are not persuasive. Applicant argues (see pages 9 and 10) that Liu and Higgins, independently or in combination, fail to teach dynamic re-determination of a camera in the camera set as claimed in amended claims 1 and 9. The examiner respectfully disagrees. The claim language of amended claims 1 and 9 recites “wherein the interface generator responds to user input selecting one of the first icons by: designating the one camera in the peripheral camera set associated with the selected one of the first icons as a second active camera; modifying the peripheral camera set to include a subset of the cameras within the predefined distance from a physical location of the second active camera; and updating the peripheral region to include icons associated with the modified peripheral camera set.” The steps of amended claim 1 broadly recite a method in which, after an icon is selected, a camera in the set is designated as the second active camera, and the interface and the subset of cameras are then updated. The claim does not recite that these steps are performed continuously to dynamically update the interface while following the object. Further, Liu teaches in [0004] that the user selects a camera and a second camera is then designated as part of that grouping or subset; then, in [0056], cameras within a select distance from the first and second cameras can be grouped together, and these cameras have their icons on the map modified. Given the teachings of Liu, one of ordinary skill in the art could reasonably have combined the system of Liu with the system of Higgins to arrive at an invention with the capabilities claimed in amended claims 1 and 9. Therefore, for at least the reasons above, the examiner maintains the rejections under 35 U.S.C. 103.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“Interface generator” in claims 1, 3, 15 and 16.
“Reacquisition portion” and “Reacquisition module” in claim 7.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
1. Claims 1-5, 7-12, 14, and 22-26 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20210223922 A1) in view of Higgins (US 20230064675 A1).
Regarding claim 1, Liu discloses:
A crime center system, comprising:
a processing hub (Liu, Figure 1, video management system; as stated in the specification, the applicant defines a dispatch processing hub as a cloud-based server, and per [0022] the video management system of Liu is server based) communicatively linked via a communications network (Liu, [0022] the system may be connected to a network) with a client device (Liu, [0027] the system may include more than one client device) and a plurality of cameras (Liu, Figure 1, Cameras 12);
[Image excerpts media_image1.png and media_image2.png] (Liu [0022], emphasis added)
[and on the processing hub, an interface generator comprising at least one processor and memory storing instructions that when executed cause the interface generator to generate an object-tracking interface for display upon a display device of each of the client devices, wherein the object-tracking interface is configured to include:]
an active camera view window displaying video from a first one of the cameras designated as a first active camera configured to capture a geographic space, the video including an image of a tracked object (Liu, [0037]-[0038] a pop-up window appears with feed corresponding to the selected camera; Figure 4, cameras are configured to capture a geographic space (36, 48));
[Image excerpt media_image3.png] (Liu [0037]-[0038], emphasis added)
and a peripheral region extending about an outer edge of the active camera view window and enclosing the video, the peripheral region including first icons, each first icon being associated with one camera of cameras in a peripheral camera set (Liu, Figure 4, shows an active camera view window (48a-b) as well as a peripheral region enclosing the camera view window which shows the physical locations of additional cameras (36a-d) using icons (first icons); Figure 5 of applicant’s specification shows a similar configuration where the active camera view window pops up and a peripheral region which encloses/surrounds the active camera has a map of the camera icons)
[Image media_image4.png] (Liu, Figure 4)
(Liu, [0032] camera locations are predetermined and have a predetermined area of coverage; Liu Figure 4 shows spatially where camera icons (first icons) are positioned on the display; further, [0004] of Liu states that the cameras are shown at the map location corresponding to where they are physically positioned; the cameras are placed in a predetermined area, meaning the cameras would have a placement with a predetermined distance from one another),
[Image excerpt media_image5.png] (Liu [0032], emphasis added)
[wherein the cameras in the peripheral camera set each have an orientation determined by the interface generator to provide a field of view that captures images of the tracked object when the tracked object moves out along a direction of travel of the geographic space captured by the first active camera,]
wherein the interface generator (Liu, [0024] the workstation generates the interface; the applicant defines an interface generator as responding to a user input to create an interface and modifying the interface based on user selection of icons; in Liu, the workstation takes a user input or selection and displays video feed accordingly) responds to user input selecting one of the first icons by:
designating the one camera in the peripheral camera set associated with the selected one of the first icons as a second active camera (Liu, [0004] the method may include the user selecting two or more cameras, where in response to the selection the two or more icons will have their active video feed displayed (designated as the active camera));
modifying the peripheral camera set to include a subset of the cameras within the predefined distance from a physical location of the second active camera (Liu, [0056] selected video streams may have cameras within a select distance from one another grouped together as one set or one combined video stream);
and updating the peripheral region to include icons associated with the modified peripheral camera set (Liu, [0056] the popup windows and icon locations will be updated following this re-grouping in response to the selection); and
updating the object-tracking interface to display video from the second active camera (Liu, [0032] when the user selects a camera, the interface is updated with feed from that camera; Figure 3 shows multiple camera icons, including a first and a second camera with icons).
[Image excerpt media_image6.png] (Liu [0032], emphasis added)
[Image media_image7.png] (Liu, Figure 3)
Liu does not disclose:
and on the processing hub, an interface generator comprising at least one processor and memory storing instructions that when executed cause the interface generator to generate an object-tracking interface for display upon a display device of each of the client devices, wherein the object-tracking interface is configured to include:
wherein the cameras in the peripheral camera set each have an orientation determined by the interface generator to provide a field of view that captures images of the tracked object when the tracked object moves out along a direction of travel of the geographic space captured by the first active camera,
However, in the same field of endeavor of object tracking using a video system, Higgins teaches:
and on the processing hub, an interface generator comprising at least one processor and memory storing instructions that when executed cause the interface generator to generate an object-tracking interface for display upon a display device of each of the client devices, wherein the object-tracking interface is configured to include (Higgins, [0032] the interface may display a map where locations of cameras and objects of interest may be displayed; [0004]-[0005] the system captures, stores, and processes video data and displays it using software, which further indicates the system must inherently have the capacity to store data using a memory of some kind and process the data using a processor of some kind):
wherein the cameras in the peripheral camera set each have an orientation determined by the interface generator to provide a field of view that captures images of the tracked object when the tracked object moves out along a direction of travel of the geographic space captured by the first active camera (Higgins, [0032] AI or machine learning may detect objects of interest, and the camera icons on the display may be organized, selected, or deselected based on the determined locations of objects of interest; [0033] camera directions or orientations may be updated by the user or AI, indicating that camera fields of view can be updated automatically; [0056] cameras may be updated to track objects such as aircraft; if an object, such as an aircraft, comes within range of a camera, the camera may be automatically initiated to take a video of the aircraft for tracking purposes; given the ability to automatically update the camera orientation and track objects of interest, as well as the ability of the system to initiate video feeds from cameras based upon object proximity to the camera, the examiner is interpreting this as analogous to a system being able to set a camera orientation such that the field of view is able to capture an object based on its direction of travel),
Liu discloses a surveillance system capable of displaying different camera views via user selection of different camera icons; however, Liu does not disclose an interface for tracking objects or a method thereof. In the same field of endeavor, Higgins teaches a system and method of tracking objects using multiple cameras as well as machine learning or AI object detection. Given the disclosure of Liu and the teaching of Higgins, one of ordinary skill in the art would have been motivated to combine the two systems because the system of Liu, as disclosed in paragraph [0038], does not provide an optimal setup for tracking an object, and combining the object tracking method and components of Higgins would improve this system. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention to combine the systems of Liu and Higgins.
Regarding claim 2, the combination of Liu and Higgins teaches: wherein the first icons are positioned in the peripheral region to provide an indication of the physical locations of the cameras associated with the first icons relative to the physical location of the first active camera (Liu, [0032] camera icons appear on a map based upon camera location and, when selected, display the feed for that camera).
[Image excerpt media_image5.png] (Liu [0032], emphasis added)
Regarding claim 3, the combination of Liu and Higgins teaches: wherein the subset of the cameras of the modified peripheral camera set are located and properly oriented to capture geographic areas located 360 degrees about the (Higgins, [0038] the system is set up such that the selected camera can display a 360-degree view of the selected area using the selected camera or the cameras around the area (subset of cameras)).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The addition of the 360-degree area surveillance as disclosed by Higgins would allow the area being surveyed to have fewer blind spots so that information is not missed during surveillance (Higgins, [0038] and Abstract).
Regarding claim 4, the combination of Liu and Higgins teaches: wherein the cameras in the peripheral camera set comprise a first set of cameras with orientations for capturing streets if the tracked object is a vehicle (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest such as a vehicle), or a second set of cameras oriented to capture inside spaces if the tracked object is a pedestrian, and wherein the first set of cameras is different from the second set of cameras.
[Image excerpt media_image8.png] (Higgins [0065])
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking methods of Higgins to the surveillance system of Liu. An object tracking method as taught in Higgins would have improved tracking of a suspect or object in video of an incident. (Higgins [0065])
Regarding claim 5, the combination of Liu and Higgins teaches: wherein the system further includes one or more software mechanisms for determining the direction of travel of the tracked object (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest such as a vehicle, where the camera orientation changing to turn towards the object or incident indicates that the direction of travel may be determined)
and wherein the object- tracking interface includes a map-based portion centered at the physical location of the first active camera (Liu, figure 6, map view is centered around camera 36, and other camera icons are around it)
and displaying user-selectable second icons associated with a subset of the cameras determined to have physical locations along a route defined by the direction of travel (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest such as a vehicle, where the camera orientation changing to turn towards the object or incident indicates that the direction of travel may be determined; Higgins Figure 7 shows the flow of determining which cameras to use in a set where multiple camera icons are shown, such that selected cameras are turned towards the direction of the object or incident).
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking and direction of travel prediction methods of Higgins to the surveillance system of Liu. The inventions of Liu and Higgins both lie in the same field of endeavor as that of the presently claimed invention, and the motivation to add the methods of identifying the direction of travel of an object, and subsequently displaying camera icons in that direction, would be advantageous for easier tracking of a suspect. (Higgins [0065]) (Liu [0046]-[0049])
Regarding claim 7, the combination of Liu and Higgins teaches: wherein the object-tracking interface includes a reacquisition portion displaying identifiers of one or more of the cameras that have feeds with an object providing a potential match to the tracked object (Higgins, [0005] the system may utilize AI or machine learning to detect and track objects of interest; an object's current and historical position information is used to predict the object's movement over time; further, the system can sort, select, or deselect cameras based on an object of interest)
and wherein the one or more of the cameras are determined by an AI-based tracked object reacquisition module in the system that processes the feeds using AI-driven meta-attributes associated with the tracked object to determine potential matches (Higgins, [0063] AI may be used to track and classify objects of interest; the AI model may classify objects as well as determine size, shape, position, and future direction; since the system is capable of acquiring object traits from tracked objects and predicting future movements to track the object across camera feeds, the system would reasonably be able to reacquire an object when it moves from one camera view to another, as well as store attributes (color, shape, size, position) about the object).
The combination of Liu and Higgins would be obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The camera selection and display method of Higgins would allow for tracking of the object by showing a user which cameras are currently detecting an object, improving the user’s awareness of the object position in an automated way. (Higgins, [0005])
Regarding claim 8, the combination of Liu and Higgins teaches: wherein the AI-based tracked object reacquisition module only processes the feeds from cameras in the peripheral camera set (Higgins, [0005] the system may utilize AI or machine learning to detect and track objects of interest; an object's current and historical position information is used to predict the object's movement over time; further, the system can sort, select, or deselect cameras based on an object of interest; [0065] the display of camera feeds and video may be specific to the area of interest defined by the user; the system may alert the user that an object of interest has been detected outside of the specified region of interest on the map, provide its location, and then use cameras to dynamically track the object. Given that the system can selectively show cameras and camera feeds based upon an object of interest, and can detect the object in additional peripheral cameras and then capture data from those cameras dynamically, this would be analogous to using an AI object reacquisition unit to process peripheral camera data which contains the object of interest).
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add AI video processing, which selects video feeds for processing based upon relevance to an incident or object of choice as taught by Higgins, to the system of Liu. Further, improving upon the object/subject tracking method by processing and displaying the camera feeds of interest would help the user to track objects more effectively. (Higgins [0005]) (Liu [0023], [0024], [0032] and [0033])
Regarding claim 9, the combination of Liu and Higgins teaches: An object-tracking method, comprising:
identifying an object in a video feed from a first camera to be tracked (Higgins, [0005] objects of interest to be tracked can be displayed on the map for the user to see based upon the object being detected using video or image data);
determining a peripheral camera set each having a physical location within a predefined distance about a physical location of the first camera (Higgins, [0005] cameras may be selected, deselected, or sorted based upon a tracked object, further, [0038] cameras in an area may be used together to generate a 360-degree panorama of an area, which indicates they are within a defined distance of one another);
[Image excerpt media_image9.png] (Higgins [0005])
generating, using an interface generator comprising at least one processor and memory storing instructions, a graphical user interface including an active camera view window displaying a feed from the first camera, the active camera view window further including (Liu, [0005] a processor in communication with a memory to save, process, and store video feeds; Figure 4 shows an active camera view window (48a-b) as well as a peripheral region enclosing the camera view window which shows the physical locations of additional cameras (36a-d) using icons (first icons); Figure 5 of applicant’s specification shows a similar configuration where the active camera view window pops up and a peripheral region which encloses/surrounds the active camera has a map of the camera icons):
[Image media_image4.png] (Liu, Figure 4)
a peripheral region extending about an outer edge of the active camera view window (Liu, Figure 4, shows an active camera view window (48a-b) as well as a peripheral region enclosing the camera view window which shows the physical locations of additional cameras (36a-d) using icons (first icons); Figure 5 of applicant’s specification shows a similar configuration where the active camera view window pops up and a peripheral region which encloses/surrounds the active camera has a map of the camera icons);
a first icon associated with each camera of cameras in the peripheral camera set; (Liu, [0024] the workstation has an interface to facilitate interaction with the cameras and camera feeds; [0032] icons for each camera are shown on the interface; [0034] multiple icons may appear on a selection bar at the edge of the screen, as well as multiple camera icons appearing on the interface which encompasses the camera view window)
the first icons associated with the cameras in the peripheral camera set being positioned in the peripheral region of the active camera view window to provide an indication of physical positioning of the peripheral camera set relative to the first camera (Liu, [0032] camera locations are predetermined and have a predetermined area of coverage; Liu Figure 4 shows spatially where camera icons (first icons) are positioned on the display; further, [0004] of Liu states that the cameras are shown at the map location corresponding to where they are physically positioned; the cameras are placed in a predetermined area, meaning the cameras would have a placement with a predetermined distance from one another);
wherein the cameras in the peripheral camera set each have an orientation determined by the interface generator to provide a field of view that captures images of the tracked object when the tracked object moves along a direction of travel out of a geographic space captured by the first active camera (Higgins, [0032] AI or machine learning may detect objects of interest, and the camera icons on the display may be organized, selected, or deselected based on the determined locations of objects of interest; [0033] camera directions or orientations may be updated by the user or AI, indicating that camera fields of view can be updated automatically; [0056] cameras may be updated to track objects such as aircraft; if an object, such as an aircraft, comes within range of a camera, the camera may be automatically initiated to take a video of the aircraft for tracking purposes; given the ability to automatically update the camera orientation and track objects of interest, as well as the ability of the system to initiate video feeds from cameras based upon object proximity to the camera, the examiner is interpreting this as analogous to a system being able to set a camera orientation such that the field of view is able to capture an object based on its direction of travel);
monitoring for user input selecting one of the first icons in the peripheral region (Liu, [0020] selection of multiple icons results in a response of the interface, there is at least a first and second icon, [0024] facilitation of the user inputting a selection to the system), and when the user input is identified based on the monitoring,
designating the one camera in the peripheral camera set associated with the selected one of the first icons as a second active camera (Liu, [0004] the method may include the user selecting two or more cameras, where in response to the selection the two or more icons will have their active video feed displayed (designated as the active camera));
modifying the peripheral camera set to include a subset of the cameras within the predefined distance from a physical location of the second active camera (Liu, [0056] selected video streams may have cameras within a select distance from one another grouped together as one set or one combined video stream);
and updating the peripheral region to include icons associated with the modified peripheral camera set (Liu, [0056] the popup windows and icon locations will be updated following this re-grouping in response to the selection); and
the graphical user interface to provide a video feed from a second camera associated with the selected one of the first icons in the active camera view window (Liu, [0024] the interface responds with a pop-up of video feed when the user selects a camera icon).
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking and camera prediction methods of Higgins to the surveillance system of Liu. A camera prediction method as taught in Higgins would have improved tracking of a suspect or object in video of an incident by allowing only camera views relevant to an incident to be displayed. (Higgins [0005]) (Liu [0023], [0024], [0032] and [0033])
Regarding claim 10, the combination of Liu and Higgins teaches: wherein the determining of the peripheral camera set further comprises processing orientations of the cameras in the peripheral set for matches with one or more orientations providing fields of view for capturing images of the tracked object (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest such as a vehicle, where the camera orientation changing to turn towards the object or incident indicates that the direction of travel may be determined).
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking and direction of travel prediction methods of Higgins to the surveillance system of Liu. The inventions of Liu and Higgins both lie in the same field of endeavor as that of the presently claimed invention, and the motivation to add the methods of identifying the direction of travel of an object, and subsequently displaying camera icons in that direction, would be advantageous for easier tracking of a suspect. (Higgins [0065]) (Liu [0046]-[0049])
Regarding claim 11 the combination of Liu and Higgins teaches; wherein the generating the graphical user interface includes determining the direction of travel of the object (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest, such as a vehicle, where a camera orientation change turning the camera toward the object or incident indicates that the direction of travel may be determined),
wherein the graphical user interface includes a map-based portion centered at the physical location of the first camera (Liu, figure 6, map view is centered around camera 36, and other camera icons are around it),
and wherein the map-based portion includes at least one second icon associated with one of the cameras in the peripheral camera set with a physical position along a route of the object predicted based on the direction of travel (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest, such as a vehicle, where a camera orientation change turning the camera toward the object or incident indicates that the direction of travel may be determined; Higgins Figure 7 shows the flow of determining which cameras to use in a set where multiple camera icons are shown, such that selected cameras are turned toward the direction of the object or incident).
(Liu, Figure 6)
(Higgins, Figure 7)
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking and camera prediction methods of Higgins to the surveillance system of Liu. A user interface displaying the camera locations and showing additional camera icons in the direction of the tracked object eases the burden of tracking the object on the user. (Higgins [0005] and [0065]) (Liu Figure 6)
Regarding claim 12 the combination of Liu and Higgins teaches; further comprising monitoring the map-based portion for user selection of the at least one second icon and, in response, modifying the graphical user interface to provide a video feed, in the active camera view window, from a camera associated with the at least one second icon in the map-based portion (Liu, [0040] upon the user clicking one of the multiple camera icons (at least a first and a second icon) on the map, a corresponding camera view popup will appear on the interface showing the footage from that camera).
Regarding claim 14 the combination of Liu and Higgins teaches; via the graphical user interface receiving a reacquisition request (Higgins, [0064] objects of interest may be identified by the user to be tracked; the system then goes through the cameras and detects whether the object of interest has been identified and, if not, another camera feed will be searched; [0065] alerts will be sent to the user to indicate that the object of interest has been identified in a different camera feed) and, in response, processing video feeds from each camera in the peripheral camera set for a potential match to the object based on AI-driven meta attributes associated with the object (Higgins, [0065] alerts will be sent to the user to indicate that the object of interest has been identified in a different camera feed).
It would have been obvious before the effective filing date of the presently claimed invention to one of ordinary skill in the art to add the object tracking and direction-of-travel prediction methods of Higgins to the surveillance system of Liu. The inventions of Liu and Higgins lie in the same field of endeavor as that of the presently claimed invention, and adding the methods of identifying the direction of travel of an object, and subsequently displaying camera icons in that direction, would be advantageous for easier tracking of a suspect. (Higgins [0064] and [0065]) (Liu [0046]-[0049])
Regarding claim 22 the combination of Liu and Higgins teaches; The system of claim 1, wherein the cameras in the peripheral camera set are determined based at least on the cameras in the peripheral camera set each having the physical location within the predefined distance of the first active camera (Higgins, [0032] AI or machine learning may detect objects of interest, and the camera icons on the display may be organized, selected, or deselected based on the determined locations of the objects of interest; [0033] camera directions or orientations may be updated by the user or by AI, indicating that camera fields of view can be updated automatically; [0056] cameras may be updated to track objects such as aircraft, and if an object such as an aircraft comes within range of a camera, the camera may be automatically initiated to take a video of the aircraft for tracking purposes. Given the ability to automatically update camera orientations and track objects of interest, as well as the ability of the system to initiate video feeds from cameras based upon object proximity to the camera, the examiner interprets this as analogous to a system being able to set a camera orientation such that the field of view is able to capture an object based on its direction of travel) and having the orientation to capture images of the tracked object when the tracked object moves along the direction of travel (Higgins, [0065] the camera locations or orientations may be changed to capture an incident or object of interest, such as a vehicle, where a camera orientation change turning the camera toward the object or incident indicates that the direction of travel may be determined).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Higgins allows the user to have an object of interest tracked across multiple camera feeds, and then cameras/camera icons/camera feeds will be sorted, selected or deselected on the map interface based upon whether the object of interest is tracked (Higgins [0005]). The addition of this capability to the system of Liu would create a more efficient method of tracking an object given that the peripheral camera set (additional cameras where the object is tracked) would be determined for the user automatically. (Higgins [0005], [0032], [0033], [0056] and [0065])
Regarding claim 23 the combination of Liu and Higgins teaches; The system of claim 22, wherein the direction of travel comprises a current direction of travel of the tracked object and one or more changed directions of travel (Higgins, [0063] AI may be used to track and classify objects of interest; the AI model may classify objects as well as determine size, shape, position, and future direction; the system is capable of generating a current position, tracking the object's current direction (tracking the object across cameras in real time), and then predicting its future direction (AI prediction)).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Higgins allows the user to have an object of interest tracked across multiple camera feeds, and then cameras/camera icons/camera feeds will be sorted, selected or deselected on the map interface based upon whether the object of interest is tracked (Higgins [0005]). Further, the system of Higgins allows the object’s future position to be predicted, which would aid the user in real time monitoring of a situation. The addition of these capabilities to the system of Liu would allow more effective tracking of incidents and suspects in real time. (Higgins [0005], [0032], [0033], [0056] and [0065])
Regarding claim 24 the combination of Liu and Higgins teaches; The system of claim 22, wherein the peripheral camera set omits an additional camera having an additional physical location within the predefined distance of the first active camera and an additional orientation directed away from an area of interest associated with the tracked object (Higgins, [0005] cameras may be prioritized, selected, deselected, or sorted based upon their locations and AI object detection; [0034] the system may detect an object automatically when multiple cameras have the object in view; [0063] cameras that have an object of interest detected in their view may have a system alert sent to prioritize recording from those cameras. The ability to automatically select and deselect the camera view being displayed based upon object detection is being interpreted as omitting a camera that is not at a location or orientation to view an object or incident of interest. Further, the system's ability to automatically re-orient cameras to capture an object of interest indicates that the system can determine an object of interest's direction and position and determine the orientation needed for capturing the object; since the system automatically prioritizes cameras that are capable of showing an object, those whose orientations are unable to display the object would also be deselected/omitted).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Higgins allows the user to have an object of interest tracked across multiple camera feeds, and then cameras/camera icons/camera feeds will be sorted, selected or deselected on the map interface based upon whether the object of interest is tracked (Higgins [0005]). The addition of this capability to the system of Liu would create a more efficient method of tracking an object given that the peripheral camera set (additional cameras where the object is tracked) would be determined for the user automatically. (Higgins [0005], [0032], [0033], [0056] and [0065])
Regarding claim 25 the combination of Liu and Higgins teaches; The system of claim 22, wherein the cameras in the peripheral camera set are further determined based at least on an object type of the tracked object (Higgins, [0026] tasks of the system, such as scanning for an object of interest, may be set in the system, and the system may prioritize the cameras assigned to a task based upon the task, where the tasks include scanning for objects, tracking animals, or tracking environmental changes. Given that these are three examples of tracking different object types (a generic object, an animal, or an environmental change such as a fire or storm), the cameras would be prioritized based upon location/proximity to these objects/events of different types, which is analogous to setting a camera set based upon the type of object/event being detected. Further, [0059] notes that thermal cameras may be employed by the system for certain types of object/event tracking, such as tracking a natural disaster, which further indicates camera selection being based on object type in certain cases).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The addition of the camera selection method of Higgins would allow the system of Liu to implement different cameras or different camera types depending on the desired object or event of interest to be tracked. This would be advantageous to the system because it would allow the user to track specific objects and events more effectively using camera selection. (Higgins [0026] and [0059])
Regarding claim 26 the combination of Liu and Higgins teaches; The system of claim 1, wherein the predefined distance comprises at least one of:
an inner diameter set by an edge of the geographic space captured by the first active camera (Higgins, [0045] and [0046] as well as Figure 5 show a panoramic view using multiple cameras proximate to each other, indicating an overlap in field of view, so an active camera in that set would have peripheral cameras within a predefined distance of the field of view of the first camera; further, Figure 9 shows multiple cameras, where a first active camera has a field of view and multiple peripheral cameras are within this camera's field of view),
(Higgins, Figure 5)
(Higgins, Figure 9, emphasis added)
or an outer diameter set to define a ring thickness.
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The method of creating a set of cameras within a distance from one another corresponding to the field of view of an active camera allows for maximum capturing capacity and situational awareness for a target incident or object by allowing the fields of view of the cameras to be combined to capture more footage of a desired location. This would have improved the system of Liu by allowing more data to be captured at a desired location. (Higgins Figures 5 and 9, and [0044]-[0047])
2. Claim 6 is rejected under 35 U.S.C. 103 as being obvious over Liu (US 20210223922 A1) in view of Higgins (US 20230064675 A1) and further in view of DeCharms (US 20140368601 A1).
Regarding claim 6 the combination of Liu and Higgins teaches; wherein the system further includes one or more software mechanisms for determining the direction of travel and a speed of the tracked object (Higgins, [0065] the system may predict how an object of interest may change over time; for example, if the system detects a fire, it may predict where the fire will next spread, which indicates the system is able to predict a next location of any object of interest; further, [0065] states the system may predict the rate at which a fire spreads, indicating it can assess the speed at which the object (in this case a fire being remotely monitored) is moving)
and for determining a time when the tracked object will move into view of one or more cameras positioned nearby to the first active camera based on the direction of travel (Higgins, [0063] an object's future change of direction and position can be determined; [0065] the system may predict how an object of interest may change over time, for example predicting where a detected fire will next spread, which indicates the system is able to predict a next location of any object of interest) and the speed of the tracked object (Higgins, [0065] the system may predict the rate at which a fire spreads, indicating it can assess the speed at which the object (in this case a fire being remotely monitored) is moving).
The combination of Liu and Higgins would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Higgins allows an object's path, speed, and future locations to be predicted, which would improve the user's ability to track objects or incidents of interest. (Higgins [0005] and [0063]-[0066])
Liu and Higgins fail to teach; and wherein the object-tracking interface includes a countdown clock providing an indicator of the time.
However, DeCharms teaches; an interface that includes a countdown clock providing an indicator of the time ([0202]-[0204] the interface includes an idle timer and a "safe timer" providing indicators of incident duration as well as the time until a responder arrives).
(DeCharms, Figure 9a, emphasis added)
(DeCharms, [0202], emphasis added)
DeCharms teaches a timer, displayed on a surveillance system GUI, that shows the time to incident resolution and the time until a responder arrives. One of ordinary skill in the art would have been motivated to combine the methods and systems of Liu and Higgins with the teachings of DeCharms because the ability to predict which cameras an object will move into view of, using the speed and direction of the object's travel, and then to display a timer marking the time of the incident and counting down until responders arrive (as taught in DeCharms) would be advantageous in documenting an incident more thoroughly, as well as increasing the likelihood of identifying a suspect. Therefore, it would have been obvious to one of ordinary skill in the art to combine the methods and systems of Liu and Higgins with the teachings of DeCharms to create a system with the same functional capabilities as taught in claim 6. (DeCharms [0202]-[0204], Higgins [0005] and [0061]-[0065])
3. Claims 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20210223922 A1) in view of Higgins (US 20230064675 A1) and further in view of Ishigami (EP 1768412 A1).
Regarding claim 13 the combination of Liu and Higgins does not teach; further including, in response, modifying the map-based portion to be centered at the physical location of the camera associated with the at least one second icon.
However, in the same field of endeavor, Ishigami teaches; further including, in response, modifying the map-based portion to be centered at the physical location of the camera associated with the at least one second icon (Ishigami, [0041] the operation management section (140) (interface generator) transmits a signal controlling the map so that the camera icon representing the target camera is positioned at the center of the window; Figures 7A and 7B show the target camera being centered on the map (C1, target camera centered on the map)).
(Ishigami, [0041])
(Ishigami, Figure 7, emphasis added)
The combination of Liu, Higgins, and Ishigami would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the recentering operation of Ishigami would make switching between cameras easier for the user by automatically centering the map view around the camera. (Ishigami [0009]-[0010] and [0042])
Regarding claim 21 the combination of Liu, Higgins and Ishigami teaches; The system of claim 5, wherein each second icon of the user-selectable second icons, when selected (Ishigami, [0049]-[0050] the user can select icons to adjust the map view), causes the interface generator to make a camera associated with the selected second icon the second active camera and to modify the map-based portion to be re-centered at a physical location of the camera associated with the selected second icon (Ishigami, [0041] the operation management section (140) (interface generator) transmits a signal controlling the map so that the camera icon representing the target camera is positioned at the center of the window; Figures 7A and 7B show the target camera being centered on the map (C1, target camera centered on the map)).
The combination of Liu, Higgins, and Ishigami would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the recentering operation of Ishigami would make switching between cameras easier for the user by automatically centering the map view around the camera. (Ishigami [0009]-[0010] and [0042])
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous art, please see the attached PTO-892 Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached on (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666