Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed February 10, 2026 have been fully considered but they are not persuasive.
Regarding claim 1,
applicant states that “Wan relates to a method for target positioning and tracking using capture images from multiple cameras.” Examiner disagrees with this statement. Although Wan discloses using multiple cameras for target positioning and tracking, each camera in Wan’s system can perform the target positioning and tracking claimed in claim 1.
applicant states that “Although Wan uses an electronic map for target positioning and tracking, the map involved therein mainly provides a spatial framework that connects the positions of multiple cameras in the geographic area to construct a search graph. The map information disclosed by Wan is primarily a ‘search graph’ and does not involve specific road areas.” Examiner disagrees with this statement. Wan discloses “a target region of an electronic map corresponding to the actual geographic region (Page 8 3rd paragraph)”, which implies that the map includes specific road areas.
applicant states that “the implementation of Wan's technical solution relies on a plurality of road cameras arranged in an actual geographic area, rather than being applied to the movable platform”. Examiner disagrees with this statement. Although Wan’s cameras are arranged in an actual geographic area, Driessen discloses cameras on a movable platform for object tracking (This thesis is focused mainly on the tracking aspect of the see-and-avoid problem for unmanned aerial vehicles (UAVs). Abstract). It would have been prima facie obvious to one of ordinary skill in the art to combine the teachings of Wan and Driessen to track a target to be tracked via a photographing device carried on the movable platform in order to develop a sufficiently effective real-time navigation system (KSR Rationale G).
Response to Amendment
The Amendment of February 10, 2026 overcomes the following objections:
Objections to claims 5 and 13 because of informalities.
Claim Objections
Claim 3 is objected to because of the following informalities:
Claim 3 as recited is not consistent with “at least one of (A) or (B)”. For the record, the examiner recommends that claim 3 be rewritten as follows, and interpretation will be as such until clarification is made of record or applicant accepts this proposal and makes changes accordingly.
3. The method according to claim 2, wherein the obtaining of the target map area and the motion information of the target to be tracked at the moment of the loss of tracking of the target to be tracked comprises at least one of (A) or (B), wherein: (A) obtaining relative position information of the target to be tracked relative to [[a]]the movable platform at the moment of the loss of tracking and position information of the movable platform, and determining the position information of the target to be tracked at the moment of the loss of tracking based on the relative position information and the position information of the movable platform; [[or]] or (B) obtaining a plurality frames of [[the]] first information by the photographing device, and determining the velocity information of the target to be tracked at the moment of the loss of tracking based on the plurality frames of the first information.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wan (Chinese Patent Publication No.: CN106651916A), hereinafter Wan, in view of Li (Chinese Patent Publication No.: CN103903282A), hereinafter Li, further in view of Driessen (Object Tracking in a Computer Vision based Autonomous See-and-Avoid System for Unmanned Aerial Vehicles, Master’s Thesis in Computer Science at the School of Vehicle Engineering, Royal Institute of Technology, 2004), hereinafter Driessen.
Regarding claim 1, Wan teaches a target tracking method for a movable platform, comprising: a target to be tracked (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph) via a photographing device (The method comprises the steps of: acquiring snapshot information of a plurality of cameras arranged in an actual geographic region. Abstract) a target map area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract) and motion information of the target to be tracked (At this time, the total length of the two paths is judged and the path with the total length path is determined as the motion path. Page 12 5th paragraph), wherein the target to be tracked is in the target map area (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph); the target map area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. 
Abstract) comprises a section of a road in a map (Among them, the positioning and tracking method of the object provided by the embodiment of the present invention also needs to load the initialization information including map information, monitor camera information, target area information, and personnel information. Where the map information may be a map in a preset area acquired by the map plug-in from the network for drawing a layout area, a person track display, etc. Page 8 5th paragraph) and at least one road area on the section of the road (The main direction of the movement is to monitor the direction of the camera in the opposite direction, and follow the direction of the street. Page 8 5th paragraph); matching (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph), based on the motion information (the first attribute information at least comprises first position information, first time information and first direction information. Abstract), a target road area from the at least one road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract); and searching (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. 
Page 8 3rd paragraph), via the photographing device (The method comprises the steps of: acquiring snapshot information of a plurality of cameras arranged in an actual geographic region. Abstract), for the target to be tracked based on the motion information (At this time, the total length of the two paths is judged and the path with the total length path is determined as the motion path. Page 12 5th paragraph) and the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Wan does not teach the following limitations as further recited, but Li further teaches obtaining target area and motion information of the target to be tracked at a moment of [[the]] loss of tracking (when the target pixel loss number is greater than 90, the target is occluded, and step 3 feature extraction and feature matching are performed. Page 4 3rd paragraph. Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph), wherein the target to be tracked is in the target area at the moment of loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph); and searching, for the target to be tracked that is lost from tracking based on the motion information and the target area (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan to incorporate the teachings of Li to search for the target to be tracked that is lost from tracking based on the motion information and the target area in order to enable the occluded target to be accurately and stably tracked, and at the same time improve tracking speed and improve tracking.
The combination of Wan and Li does not teach the following limitations as further recited, but Driessen further teaches tracking [[the]]a target to be tracked via a photographing device (In our case, the sensor is a video camera and the output from the sensor is digital images. Page 13 3rd paragraph) carried on the movable platform (This thesis is focused mainly on the tracking aspect of the see-and-avoid problem for unmanned aerial vehicles (UAVs). Abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Driessen to track a target to be tracked via a photographing device carried on the movable platform in order to develop a sufficiently effective real-time navigation system.
Regarding claim 2, Li in the combination teaches the method according to claim 1, wherein the motion information comprises position information and velocity information of the target to be tracked at the moment of the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Regarding claim 3, Li in the combination teaches the method according to claim 2, wherein the obtaining of the target map area and the motion information of the target to be tracked at the moment of the loss of tracking of the target to be tracked comprises at least one of (A) or (B) (Note: the claim language is interpreted disjunctively.), wherein: (A) obtaining relative position information of the target to be tracked relative to [[a]]the movable platform at the moment of the loss of tracking and position information of the movable platform, and determining the position information of the target to be tracked at the moment of the loss of tracking based on the relative position information and the position information of the movable platform; [[or]] or (B) obtaining a plurality frames of [[the]] first information by the photographing device (firstly, the initial frame is made into a template. Page 2 6th paragraph. The next frame matches the search, and then the step 4 Kalman prediction is performed. Page 2 6th paragraph), and determining the velocity information of the target to be tracked at the moment of the loss of tracking based on the plurality frames of the first information (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Regarding claim 4, Wan in the combination teaches the method according to claim 1, wherein the matching, based on the motion information, of the target road area from the at least one road area comprises: obtaining (Among them, the positioning and tracking method of the object provided by the embodiment of the present invention also needs to load the initialization information including map information, monitor camera information, target area information, and personnel information. Page 8 5th paragraph), from the map (Where the map information may be a map in a preset area acquired by the map plug-in from the network for drawing a layout area, a person track display, etc. Page 8 5th paragraph), [[a]] the target map area corresponding to the position information in the motion information (And searches the target area of the electronic map corresponding to the actual geographical area in accordance with the preset search interval and the first attribute information of the searched target to be searched to obtain an operation tracking track corresponding to the target to be tracked. Page 2 1st paragraph); and matching (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph), based on the motion information (the first attribute information at least comprises first position information, first time information and first direction information. 
Abstract), the target road area from the at least one road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 6, Wan in the combination teaches the method according to claim 4, wherein the matching, based on the motion information, of the target road area from the at least one road area comprises: matching (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph), based on the motion information (the first attribute information at least comprises first position information, first time information and first direction information. Abstract), a target road area where the target to be tracked is located from the at least one road area in the target map area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Li in the combination further teaches matching, based on the motion information, a target area where the target to be tracked is located at the moment of the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Regarding claim 7, Wan in the combination teaches the method according to claim 6, wherein the matching, based on the motion information, of the target road area where the target to be tracked is located at the moment of the loss of tracking from the at least one road area in the target map area comprises at least one of (A) or (B): (A) determining a distance error between the target to be tracked and each road area in the target map area based on the position information of the target to be tracked (Searching the route of the search map based on the first position information of the target to be tracked and a preset search interval to obtain a first preset first position of the object to be tracked in the search map structure information. Page 4 2nd paragraph. Determining whether the first preset first position information is the target first position information of the target to be tracked. Page 4 3rd paragraph. It is common knowledge that determining whether two positions match each other involves determining a distance error between the two positions.), determining a matching priority of each road area based on the distance error, sequentially selecting [[the]]a road area as a candidate road area based on the matching priority (Searching the route of the search map based on the first position information of the target to be tracked and a preset search interval to obtain a first preset first position of the object to be tracked in the search map structure information. Page 4 2nd paragraph. Determining whether the first preset first position information is the target first position information of the target to be tracked. Page 4 3rd paragraph. 
It is common knowledge that multiple areas can be selected to be compared to a target location with assigned matching priorities based on the distance error from the target location.), and determining an angle error (It is common knowledge that determining whether two directions match each other involves determining an angle error between the two directions.) between a driving direction corresponding to the candidate road area (Searching the route (which indicates a driving direction) of the search map based on the first position information of the target to be tracked and a preset search interval to obtain a first preset first position of the object to be tracked in the search map structure information. Page 4 2nd paragraph) and a motion direction of the target to be tracked (And the first time information and the target first position information and the corresponding target first direction information and the target first time information are obtained based on the first position information and the corresponding first direction information, the first time information. Page 3 4th paragraph), and determining the candidate road area (Determining whether the first preset first position information is the target first position information of the target to be tracked. 
Page 4 3rd paragraph) upon determining that the angle error is less than or equal to a first threshold (It is common knowledge that whether two directions match each other can be determined by an angle error smaller than a threshold between the two directions.); or (B) determining an angle error between a motion direction of the target to be tracked at the moment of the loss of tracking and a corresponding driving direction of each road area in the target map area, determining a matching priority of each road area based on the angle error, sequentially selecting [[the]]a road area as a candidate road area based on the matching priority, and determining a distance error between the target to be tracked and the candidate road area.
Li in the combination further teaches the position information of the target to be tracked at the moment of the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Regarding claim 8, Driessen in the combination teaches the method according to claim 1, further comprising: obtaining position information (Accurate data from the navigation system of the UAV (which includes the position information of the unmanned aerial vehicle) will be available to the see-and-avoid-system. Abstract) of [[a]]the movable platform (This thesis is focused mainly on the tracking aspect of the see-and-avoid problem for unmanned aerial vehicles (UAVs). Abstract); and obtaining the map based on the position information of the movable platform (Accurate data from the navigation system of the UAV (i.e., the map) will be available to the see-and-avoid-system. Abstract).
Regarding claim 9, Wan in the combination teaches the method according to claim 1, further comprising obtaining first information comprising the target to be tracked (And searches the target area of the electronic map corresponding to the actual geographical area in accordance with the preset search interval and the first attribute information of the searched target (which includes the position information in the motion information) to be searched to obtain an operation tracking track corresponding to the target to be tracked. Page 2 1st paragraph) and the map comprising the target map area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract), wherein the map and the first information meet one of: the map comprises a vector map or the first information comprises a first image (acquiring snapshot information of a plurality of cameras arranged in an actual geographic region, wherein the snapshot information comprises snapshot target images and first attribute information of snapshot targets relative to the cameras. Abstract).
Regarding claim 10, Driessen in the combination teaches the method according to claim 1, wherein the searching, via the photographing device, for the target to be tracked that is lost from tracking based on the motion information and the target road area comprises: adjusting at least one of a collection parameter of the photographing device on [[a]]the movable platform or a position of the movable platform based on (What complicates matters is that an object has to be classified as early as possible so there is enough time to take suitable measures (i.e., to change direction or reduce the speed of an aircraft thus adjusting a position of the movable platform based on at least a driving direction corresponding to the target area and a motion speed of the target to be tracked). Page 34 1st paragraph. The pilot in command of an aircraft operated according to the visual flight rules has the responsibility to see and avoid any other air traffic in his vicinity (i.e., to change direction or reduce the speed of an aircraft). Page 6 1st paragraph); obtaining second information collected by the photographing device following the adjusting of the at least one of the collection parameter of the photographing device on the movable platform or the position of the movable platform (
greyscale figure media_image1.png
); identifying at least one target object based on the second information (After potentially dangerous objects have been located in the image using the methods described above, it is necessary to facilitate for the oncoming tracking of the objects. Page 44 3rd paragraph).
Li in the combination further teaches searching for the target to be tracked that is lost from tracking based on the target area, the motion information of the target to be tracked at the moment of the loss of tracking, and motion information of the at least one target object (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Wan in the combination further teaches the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 11, Li in the combination teaches the method according to claim 10, wherein the adjusting of the collection parameter of the photographing device on the movable platform based on the driving direction corresponding to the target road area and the motion speed of the target to be tracked at the moment of the loss of tracking comprises: predicting a target motion direction of the target to be tracked based on the driving direction corresponding to the target road area and the motion speed of the target to be tracked at the moment of the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph).
Driessen in the combination further teaches adjusting the collection parameter of the photographing device on the movable platform (What complicates matters is that an object has to be classified as early as possible so there is enough time to take suitable measures (which includes slowing down the movable platform and thus changing its distance to the target, i.e., an adjustment to the collection parameter as the distance change will impact image collection.). Page 34 1st paragraph) based on the predicted target motion direction (What is known is the bearing to the object (i.e. the angle between the flight path of the UAV and the obstacle) and the rate of growth for the object, which makes it possible to tell how much the distance has changed between two points in time. Page 38 last paragraph).
Regarding claim 12, Driessen in the combination teaches the method according to claim 10, wherein the second information comprises a second image (
greyscale figure media_image1.png
), and the collection parameter comprises a photographing direction and a focal length (A longer focal length gives you a greater magnification, but also a narrower field of view (i.e., photographing direction), while a shorter focal length gives you a wider field of view but everything appears smaller. Page 15 1st paragraph).
Regarding claim 13, Driessen in the combination teaches the method according to claim 10, wherein the adjusting of the position of the movable platform based on the driving direction corresponding to the target road area and the motion speed of the target to be tracked at the moment of the loss of tracking comprises: determining a moving distance of the movable platform (Knowledge about the type would give an approximate size of the object and make it possible to compute a roughly estimated distance to the target. Page 35 last paragraph) based on the motion speed of the target to be tracked (What is known is the bearing to the object (i.e. the angle between the flight path of the UAV and the obstacle) and the rate of growth for the object, which makes it possible to tell how much the distance has changed between two points in time. Page 38 last paragraph); and adjusting the position of the movable platform (What complicates matters is that an object has to be classified as early as possible so there is enough time to take suitable measures (i.e., adjusting the position of the movable platform). Page 34 1st paragraph) based on the moving distance and the driving direction corresponding to the target area (What is known is the bearing to the object (i.e. the angle between the flight path of the UAV and the obstacle) and the rate of growth for the object, which makes it possible to tell how much the distance has changed between two points in time. Page 38 last paragraph).
Li in the combination further teaches the motion speed of the target to be tracked at the moment of the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph) and a duration since the loss of tracking (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph. A person having ordinary skill in the art would recognize a duration since the loss of tracking is the time between the previous frame and the current frame.).
Wan in the combination further teaches the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 14, Driessen in the combination teaches the method according to claim 10, wherein the searching for the target to be tracked that is lost from tracking based on the target road area, the motion information of the target to be tracked at the moment of the loss of tracking, and the motion information of the at least one target object comprises: determining at least one candidate target object located within the target area from a plurality of the target objects based on motion information of the plurality of the target objects (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph.
PNG
media_image2.png
444
614
media_image2.png
Greyscale
); in response to the least one candidate target objectcomprising a plurality of candidate target objects (
PNG
media_image3.png
444
614
media_image3.png
Greyscale
), (When tracking in an environment that contains clutter and/or more than one object the measurements need to be associated with the correct tracks in some way. Page 30 1st paragraph); and determining the target to be tracked from the plurality of candidate target objects based on(Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph).
Li in the combination further teaches determining a speed deviation between the motion speed of the target to be tracked at the moment of the loss of tracking and a motion speed of each of the plurality of candidate target objects (Step 4, Kalman prediction: Call the Kalman filter in the LabVIEW control and simulation module, using the best matching position information of the previous frame: target coordinates, speed and direction, predict the position of the current frame target, the current frame matching is completed. Page 4 last paragraph. It is common knowledge that matching of speeds involves determining a speed deviation between the two speeds.).
Wan in the combination further teaches the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 15, Driessen in the combination teaches the method according to claim 14, wherein the determining of the at least one candidate target object located within the target road area from the plurality of the target objects based on the motion information of the plurality of the target objects comprises: determining a distance between each of the plurality of the target objects and the target area based on position information of the plurality of the target objects (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph); and determining at least one target object whose distance is less than or equal to a preset distance as the at least one candidate target object located within the target area (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph).
Wan in the combination further teaches the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 16, Wan in the combination teaches the method according to claim 14, wherein the determining of the at least one candidate target object located within the target road area from the plurality of the target objects based on the motion information of the plurality of the target objects comprises: matching (S103 performs a target search in the target area of the electronic map corresponding to the actual geographical area according to the preset attribute and the first attribute information of the searched target to be searched, and obtains an operation tracing track corresponding to the target to be tracked. Page 8 3rd paragraph) road areas where the target objects are located in the map (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract) based on the motion information of the target objects (the first attribute information at least comprises first position information, first time information and first direction information. Abstract); and determining at least one target object whose road area matches the road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Driessen in the combination teaches the plurality of the target objects (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph. [Driessen Figure 5.2, reproduced as media_image2.png]).
Regarding claim 19, Driessen in the combination teaches the method according to claim 1, further comprising: in a process of tracking the target to be tracked, correcting the motion information of the target to be tracked based on the target area ([Driessen figure, reproduced as media_image4.png]); and tracking and photographing the target to be tracked based on corrected motion information (The target state is then updated as if it is the correct one, for example by using the standard Kalman filter. Page 31 2nd paragraph).
Wan in the combination further teaches the target road area (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
Regarding claim 20, Wan in the combination teaches the method according to claim 1, further comprising: displaying the map, the map comprising a plurality of road areas (Among them, the online map shows the target area (monitoring camera through the latitude and longitude information displayed in the online map of the designated location). Page 9 2nd paragraph); and marking the target to be tracked in real time in one of the plurality of road areas of the map (In connection with the first aspect, an embodiment of the present invention provides a third possible embodiment of the first aspect, wherein the first attribute information of the target to be tracked according to a preset search interval and found in the actual A target search is performed in a target area of the electronic map corresponding to the geographical area to obtain an operation tracking track corresponding to the target to be tracked. Page 3 12th paragraph).
Li in the combination further teaches based on real-time motion information of the target to be tracked (a search algorithm is brought in the tracking process to carry out estimation hypothesis on the position state of the target at the next moment, accordingly, tracking can be achieved in real time quickly and accurately. Abstract).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wan (Chinese Patent Publication No.: CN106651916A), hereinafter Wan, in view of Li (Chinese Patent Publication No.: CN103903282A), hereinafter Li, further in view of Wang (Chinese Patent Publication No.: CN103149939A), hereinafter Wang.
Regarding claim 5, Wan teaches the method according to claim 4, wherein the obtaining, from the map, of the target map area corresponding to the position information in the motion information comprises: determining the target map area based on a preset area range (Where the map information may be a map in a preset area acquired by the map plug-in from the network for drawing a layout area, a person track display, etc. Page 8 5th paragraph) with a position point corresponding to the position information in the map (according to a preset search interval and the first attribute information of the searched target to be tracked, carrying out target searching in a target region of an electronic map corresponding to the actual geographic region so to obtain an operation tracking trajectory corresponding to the target to be tracked. Abstract).
The combination of Wan and Li does not teach the following limitations as further recited, but Wang further teaches with a position point corresponding to the position information in the map as a center point (The dynamic target tracking and positioning method of the unmanned plane based on the vision can automatically realize the movement target detecting, image tracking and optical axis automatic deflecting without the full participation of the people, so that the dynamic target is always displayed at the center of an image-forming plane. Abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan and Li to incorporate the teachings of Wang to display a position point corresponding to the position information in the map as a center point so that the moving target will not run out of the camera's field of view resulting in tracking failure.
Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wan (Chinese Patent Publication No.: CN106651916A), hereinafter Wan, in view of Li (Chinese Patent Publication No.: CN103903282A), hereinafter Li, further in view of Driessen (Object Tracking in a Computer Vision based Autonomous See-and-Avoid System for Unmanned Aerial Vehicles, Master’s Thesis in Computer Science at the School of Vehicle Engineering, Royal Institute of Technology, 2004), hereinafter Driessen, further in view of Ningniao (Chinese Patent Publication No.: CN110490902A), hereinafter Ningniao.
Regarding claim 17, Wan teaches the method according to claim 14, wherein the determining of the target to be tracked from the plurality of candidate target objects based on speed deviation of each of the plurality of candidate target objects comprises: extracting image features of the target to be tracked from [[the]] first image comprising the target to be tracked (Extracting an image feature of the captured target image as a first image feature using a pre-trained depth model and extracting an image feature of the standard target image as a second image feature. Page 3 6th paragraph).
The combination of Wan, Li and Driessen does not teach the following limitations as further recited, but Ningniao further teaches determining the target to be tracked from the target objects based on the image features of the target to be tracked (the first apparent characteristic information of the first standard bounding box image input to the apparent feature extraction model performing characteristic extraction to obtain the tracking target. Page 2 last paragraph) and the speed deviation (S256. The the first motion, the second motion characteristic and a similarity judging the preset condition (i.e., a deviation), judging whether the detected target is a candidate tracking object. Page 11 1st paragraph. Optionally, the motion characteristics may include the position, speed, and the like. Page 11 3rd paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan, Li and Driessen to incorporate the teachings of Ningniao to determine the target to be tracked from the plurality of candidate target objects based on the image features of the target to be tracked and the speed deviation in order to more accurately determine the tracking target thus improve the reliability of the object tracking.
Driessen in the combination further teaches determining the target to be tracked from the plurality of candidate target objects (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph. [Driessen Figure 5.2, reproduced as media_image2.png]).
Regarding claim 18, Wan in the combination teaches the method according to claim 17, wherein the determining of the target to be tracked from the plurality of candidate target objects based on the image features of the target to be tracked and the speed deviation comprises: determining at least one matching candidate target object that matches the target to be tracked from the target objects (Matching the captured target image according to a standard target image; And the captured object corresponding to the captured target image, which is successful, is determined as the target to be tracked. Page 3 3rd paragraph) based on the image features of the target to be tracked (Extracting an image feature of the captured target image as a first image feature using a pre-trained depth model and extracting an image feature of the standard target image as a second image feature. Page 3 6th paragraph. Comparing the first image feature and the second image feature. Page 3 7th paragraph).
Ningniao in the combination further teaches determining the target to be tracked from the at least one matching candidate target object based on the speed deviation of the at least one matching candidate target object (S256. The the first motion, the second motion characteristic and a similarity judging the preset condition (i.e., a deviation), judging whether the detected target is a candidate tracking object. Page 11 1st paragraph. Optionally, the motion characteristics may include the position, speed, and the like. Page 11 3rd paragraph).
Driessen in the combination further teaches determining the target to be tracked from the plurality of candidate target objects (Nearest Neighbour-matching is the simplest association algorithm. The observation that has the shortest distance to the predicted position is considered the correct measurement, as shown in Figure 5.2. Page 31 2nd paragraph. [Driessen Figure 5.2, reproduced as media_image2.png]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668