DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shin et al., US 12,125,277 (hereinafter Shin).
Regarding claim 1, Shin teaches:
1. A target detection method, comprising: acquiring historical sensor data corresponding to a plurality of historical frames, wherein the plurality of historical frames precede a current frame;
(28) Image source 104 may be or may include one or more sensors that are configured to generate data, such as visual data, audio data, etc., associated with an environment. The sensors can include an image sensor (e.g., a camera), a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, sound navigation and ranging (SONAR) sensor, an ultrasonic sensor, a microphone, and other sensor types. In some embodiments, the data collected and/or generated by the sensors may represent a perception of the environment by the sensors. It should be noted that although some embodiments of the present disclosure are directed to image data (e.g., an image) generated by one or more sensors of image source 104, embodiments of the present disclosure may be applied to any type of data generated by one or more sensors of image source 104 (e.g., LIDAR data, RADAR data, SONAR data, ultrasonic data, audio data, etc.).
(33) Server machine 140 may include an object detection engine 141 configured to detect one or more objects included in images depicting an environment, such as images generated by image source 104. In some embodiments, object detection engine 141 may provide an image depicting an environment as input to a trained object detection model. The object detection model may be trained using historical data (e.g., historical images, historical object data, etc.) from one or more datasets to detect an object (referred to here as a detected object) included in a given input image depicting an environment, and estimate a region of the given input image that includes the detected object (referred to herein as a region of interest). In some embodiments, one or more outputs of the object detection model can indicate object data associated with the detected object. The object data may indicate a region of interest of a given input image that includes the detected object. For example, the object data can include a bounding box or another bounding shape (e.g., a spheroid, an ellipsoid, a cylindrical shape, etc.) that corresponds to the region of interest of the given input image. In some embodiments, the object data can include other data associated with the detected object, such as an object class corresponding to the detected object, mask data associated with the detected object (e.g., a two-dimensional (2D) bit array that indicates pixels (or groups of pixels) that corresponds to the detected object), and so forth.
Shin, col. 5 lines 48-64, col. 7 lines 9-34 and Figs. 2, 5A-5D, emphasis added.
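For illustration only, the following minimal sketch shows the kind of per-object detection output (bounding shape, object class, confidence) described in the cited passages; the names Detection, ObjectDetector, and collect_historical_detections are hypothetical and are not taken from Shin or the claims.

```python
# Hypothetical sketch of the detection output described above; not Shin's code.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    bbox: Tuple[float, float, float, float]  # region of interest (x_min, y_min, x_max, y_max)
    obj_class: str                            # object class of the detected object
    score: float                              # detection confidence

class ObjectDetector:
    """Stand-in for a trained object detection model."""
    def detect(self, frame) -> List[Detection]:
        # A real model would return one Detection per object found in the frame.
        return []

def collect_historical_detections(historical_frames, detector: ObjectDetector):
    """Run the detector over the frames preceding the current frame."""
    return [detector.detect(frame) for frame in historical_frames]
```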
generating at least one reference marker corresponding to at least one target object in the current frame according to the historical sensor data;
(45) A correlation filter may be trained to produce or identify a peak correlation response at a region of an image that corresponds to a reference coordinate (e.g., a center) of an object depicted in the image. Object localization module 210 may obtain an image 202 (i.e., from image source 104 or via data store 250) and apply the correlation filter to image 202 to obtain one or more outputs. The one or more outputs of the correlation filter can indicate one or more peak locations of a correlation response for image 202 (referred to herein simply as a correlation response). The locations of one or more correlation responses may correspond to regions of image 202 that depict an object in the environment and, in some embodiments, the peak location of the correlation response may correspond to the reference coordinate (e.g., the center) of the depicted object. Object localization module 210 may identify the regions of image 202 that are associated with a respective correlation response as regions of image 202 that depict a respective object (referred to herein as a correlation response region). In some embodiments, similarity component 212 may extract features from a correlation response region and assign a similarity metric value to the respective object depicted in the correlation response region and existing targets tracked by object tracking engine 151, as described above and in further detail below.
Shin, col. 11 lines 23-47 and Figs. 2, 5A-5D, emphasis added.
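As an aside, a minimal numpy sketch of the correlation-response peak search described in the quoted passage follows; it assumes a pre-trained 2-D correlation filter and is not Shin's implementation.

```python
# Minimal sketch of locating a correlation-response peak; not Shin's implementation.
import numpy as np

def correlation_peak(image: np.ndarray, corr_filter: np.ndarray):
    """Return the (row, col) of the strongest correlation response and its value."""
    # Cross-correlate in the frequency domain, zero-padding the filter to the image size.
    response = np.fft.ifft2(
        np.fft.fft2(image) * np.conj(np.fft.fft2(corr_filter, s=image.shape))
    ).real
    peak = np.unravel_index(np.argmax(response), response.shape)
    return peak, response[peak]
```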
and generating a target detection result in the current frame according to the reference marker.
(50) In some embodiments, data association module 214 may provide an indication of each unmatched bounding box region and unmatched estimated target location to target manager module 216, in some embodiments. Target manager module 316 may be configured to instantiate and/or terminate each object tracker 218 of object tracking engine 151. As indicated above, an object tracker refers to a logical component that is configured to track a state of a target included in a set of images (e.g., a sequence of video frames) depicting an environment. In response to receiving the indication of the unmatched bounding box regions and/or unmatched estimated target locations, target manager module 216 may determine whether to instantiate one or more new object trackers 218 (i.e., to create a new target) or terminate an instantiated object tracker 218 for an existing target (e.g., in accordance with a target termination policy). In an illustrative example, an unmatched bounding box region may indicate to the target manager module 216 that a new object has been detected in the surveilled environment. Accordingly, target manager module 216 may instantiate a new object tracker 218 to track the state of the detected object in image 202 and subsequent images (e.g., video frames) generated by image source 104. In some embodiments, target manager module 216 may instantiate a new object tracker 218 by assigning the target a target identifier (ID) and storing the target ID at data store 250 as target ID 252. In another illustrative example, an unmatched estimated target location may indicate to target manager module 216 that a target is no longer present in the environment surveilled by image source 104. In response to determining that the target satisfies one or more conditions of a target termination policy, target manager module 216 may terminate an object tracker 218 that was instantiated to track the state of the target. In some embodiments, target manager module 216 may terminate the object tracker 218 by removing the target ID 252 for the terminated target from data store 250 and/or recycling the target ID 252 of the terminated target to be used for a new target.
Shin, col. 13 lines 4-41 and Figs. 2, 5A-5D, emphasis added.
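For illustration, a short sketch of the tracker lifecycle described in the quoted passage follows: a tracker is instantiated for an unmatched bounding box region and terminated under a termination policy. Class and parameter names are hypothetical, and the missed-frame policy is an assumption.

```python
# Hypothetical sketch of the instantiate/terminate behavior described above.
import itertools

class TargetManager:
    def __init__(self, max_missed_frames: int = 5):
        self._ids = itertools.count(1)   # source of new target IDs
        self.trackers = {}               # target_id -> consecutive unmatched frames
        self.max_missed_frames = max_missed_frames

    def instantiate(self) -> int:
        """Create a new object tracker for an unmatched bounding box region."""
        target_id = next(self._ids)
        self.trackers[target_id] = 0
        return target_id

    def report_unmatched(self, target_id: int) -> None:
        """Record that an existing target had no matching detection this frame."""
        self.trackers[target_id] += 1
        # Terminate the tracker once the (assumed) termination policy is met.
        if self.trackers[target_id] >= self.max_missed_frames:
            del self.trackers[target_id]
```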
Regarding claim 2, Shin teaches:
2. The target detection method according to claim 1, wherein the method further comprises: acquiring current sensor data corresponding to the current frame;
Shin, col. 13 lines 4-41 and Figs. 2, 5A-5D
and generating at least one initial marker corresponding to the at least one target object in the current frame according to the current sensor data;
Shin, col. 13 lines 4-41 and Figs. 2, 5A-5D
wherein the generating the target detection result in the current frame according to the reference marker comprises: generating the target detection result in the current frame according to each initial marker and each reference marker.
Shin, col. 13 lines 4-41 and Figs. 2, 5A-5D
Regarding claim 3, Shin teaches:
3. The target detection method according to claim 2, wherein the current sensor data comprises current point cloud data;
Shin, col. 3 lines 23-49 and Figs. 2, 5A-5D
wherein the generating at least one initial marker corresponding to the at least one target object in the current frame according to the current sensor data comprises: clustering the current point cloud data to detect at least one first target object;
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
and generating the initial marker corresponding to each first target object.
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
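A minimal sketch of the clustering step recited in claim 3 is shown below; DBSCAN is used only as one plausible clustering algorithm, since neither the claim nor the cited passages mandates a particular method.

```python
# Sketch of clustering point-cloud returns into candidate target objects.
# DBSCAN is an assumption; the claim does not name a clustering algorithm.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, eps: float = 0.5, min_samples: int = 10):
    """points: (N, 3) array of x/y/z returns; returns one point set per detected object."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    # Label -1 marks noise; every other label is one candidate first target object.
    return [points[labels == k] for k in set(labels) if k != -1]
```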
Regarding claim 4, Shin teaches:
4. The target detection method according to claim 3, wherein the generating the initial marker corresponding to each first target object comprises: inputting the current frame including each first target object to an object detection model;
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
and obtaining an initial detection result output by the object detection model, wherein the initial detection result comprises each initial marker in the current frame, and pose information of each initial marker.
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
Regarding claim 5, Shin teaches:
5. The target detection method according to claim 2, wherein the generating the target detection result in the current frame according to each initial marker and each reference marker comprises: determining a current marker from each initial marker and each reference marker, and determining at least one comparison marker corresponding to the current marker;
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
determining a target marker corresponding to the current marker according to the current marker and each comparison marker;
Shin, col. 3 lines 23-49, col. 13 line 53 to col. 14 line 15 and Figs. 2, 5A-5D
and removing the current marker and each comparison marker from each initial marker and each reference marker, and continuing the operation of determining the current marker from each initial marker and each reference marker, and determining at least one comparison marker corresponding to the current marker.
Shin, col. 13 lines 4-41 and Figs. 2, 5A-5D
Regarding claim 6, Shin teaches:
6. The target detection method according to claim 5, wherein the determining the current marker from each initial marker and each reference marker, and determining at least one comparison marker corresponding to the current marker comprises: generating a set of markers to be processed according to each initial marker and each reference marker;
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
determining the current marker from the set of markers to be processed; calculating an intersection over union between the current marker and other markers in the set of markers to be processed; and taking a marker to be processed having an intersection over union greater than or equal to a first preset threshold as the comparison marker corresponding to the current marker.
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
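For reference, a worked sketch of the intersection-over-union test recited in claim 6 appears below, using axis-aligned boxes; the threshold value is a placeholder, not a value taken from the claims or from Shin.

```python
# Illustrative IoU computation and first-threshold test; the threshold is a placeholder.
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def comparison_markers(current_marker, other_markers, first_threshold=0.7):
    """Markers whose IoU with the current marker meets the first preset threshold."""
    return [m for m in other_markers if iou(current_marker, m) >= first_threshold]
```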
Regarding claim 7, Shin teaches:
7. The target detection method according to claim 6, wherein the method further comprises: removing a marker to be processed which has an intersection over union less than the first preset threshold and is greater than or equal to a second preset threshold from the set of markers to be processed, wherein the second preset threshold is less than the first preset threshold.
Shin, col. 12 lines 19-47 and Figs. 2, 5A-5D
Regarding claim 8, Shin teaches:
8. The target detection method according to claim 6, wherein the marker to be processed has a confidence coefficient; wherein the determining the current marker from the set of markers to be processed comprises: determining the current marker according to the confidence coefficient of each marker to be processed.
Shin, col. 12 lines 19-47 and Figs. 2, 5A-5D
Regarding claim 9, Shin teaches:
9. The target detection method according to claim 6, wherein the removing the current marker and each comparison marker from each initial marker and each reference marker comprises: removing the current marker and each comparison marker from the set of markers to be processed.
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
Regarding claim 10, Shin teaches:
10. The target detection method according to claim 5, wherein the determining the target marker corresponding to the current marker according to the current marker and each comparison marker comprises: acquiring pose information of the current marker, and acquiring pose information and a weight of each comparison marker;
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
determining target marker pose information according to the pose information of the current marker, and the pose information and the weight of each comparison marker; and determining the target marker corresponding to the current marker according to the target marker pose information.
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
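A minimal sketch of the weighted combination recited in claim 10 follows; a simple weighted average over (x, y, yaw) pose vectors is assumed, since the claim leaves the form of the combination open.

```python
# Assumed weighted-average fusion of the current marker's pose with the
# comparison markers' poses; not a construction required by the claim.
import numpy as np

def fuse_pose(current_pose, comparison_poses, weights, current_weight=1.0):
    """Poses are (x, y, yaw) sequences; weights holds one scalar per comparison marker."""
    poses = np.vstack([current_pose] + list(comparison_poses)).astype(float)
    w = np.asarray([current_weight] + list(weights), dtype=float)
    # Note: averaging yaw linearly is a simplification and ignores angle wrap-around.
    return (w[:, None] * poses).sum(axis=0) / w.sum()
```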
Regarding claim 11, Shin teaches:
11. The target detection method according to claim 1, wherein the historical sensor data comprises historical point cloud data; wherein the generating at least one reference marker corresponding to at least one target object in the current frame according to the historical sensor data comprises: for each historical frame, clustering the historical point cloud data in the historical frame to obtain at least one second target object; predicting reference pose information of each second target object in the current frame; and determining the reference marker corresponding to each second target object in the current frame according to each reference pose information.
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
Regarding claim 12, Shin teaches:
12. The target detection method according to claim 11, wherein the predicting reference pose information of each second target object in the current frame comprises: for each second target object, acquiring initial pose information of the second target object in the historical frame; and inputting the initial pose information to a motion prediction model, to obtain the reference pose information of the second target object in the current frame.
Shin, col. 13 line 4 to col. 14 line 15 and Figs. 2, 5A-5D
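For illustration, a constant-velocity motion prediction model, one plausible reading of the motion prediction model of claim 12, is sketched below; the claims do not require this particular model, and the function name is hypothetical.

```python
# Assumed constant-velocity motion model projecting a target's historical pose
# into the current frame; one possible motion prediction model, not Shin's.
def predict_reference_pose(initial_pose, velocity, dt):
    """initial_pose and velocity are (x, y) tuples; dt is the inter-frame time in seconds."""
    x, y = initial_pose
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)
```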
Regarding claim 13, Shin teaches:
13. The target detection method according to claim 12, wherein the method further comprises: determining attribute information of the second target object; and determining a motion prediction model corresponding to the second target object according to the attribute information.
Shin, col. 14 lines 16-46 and Figs. 2, 5A-5D
Regarding claim 14, Shin teaches:
14. The target detection method according to claim 12, wherein the inputting the initial pose information to a motion prediction model comprises: acquiring motion parameter information of the second target object; and inputting the initial pose information and the motion parameter information into the motion prediction model.
Shin, col. 14 lines 16-46, col. 21 lines 33-57 and Figs. 2, 5A-5D
Claims 15-17 recite elements similar to those of claims 1-3, but in device form rather than method form. Therefore, the supporting rationale of the rejection of claims 1-3 applies equally to claims 15-17.
Claims 18-20 recite elements similar to those of claims 1-3, but in non-transitory computer-readable medium form rather than method form. Therefore, the supporting rationale of the rejection of claims 1-3 applies equally to claims 18-20.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Liu, US 11,187,793 B1, see at least col. 22 line 7 to col. 23 line 45.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481