DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/11/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 1, 3, 5, 7, 9, and 11 are objected to because of the following informalities:
In Claim 1, line 12, add the words “and wherein” before the words “the recognized objects,” so that the limitation reads: “and wherein the recognized objects…”
In Claim 3, line 25, the phrase “generate a target centroid coordinates” should be “generate target centroid coordinates.”
In Claim 5, lines 13-14, the phrase “of external environment” should be “of an external environment.”
In Claim 7, pg. 19, line 19, delete the word “steps.”
In Claim 7, pg. 20, line 4, add the words “and wherein” before the words “the recognized objects,” so that the limitation reads: “and wherein the recognized objects…”
In Claim 7, pg. 20, lines 4-5, the phrase “is identified as” should be “are identified as.”
In Claim 9, lines 16-17, the phrase “generate a target centroid coordinates” should be “generate target centroid coordinates.”
In Claim 11, lines 5-6, the phrase “of external environment” should be “of an external environment.”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-9, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (Wang et al., “High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar,” 2020) in view of Lin (Lin et al., “Deep Learning Derived Object Detection and Tracking Technology Based on Sensor Fusion of Millimeter-Wave Radar/Video and Its Application on Embedded Systems,” March 2, 2023).
Regarding Claim 1, Wang teaches: A sensor fusion and object tracking system, comprising
a first fusion module configured to perform a first fusion process on a 2D driving image and 3D point cloud information to obtain first fusion information containing a plurality of recognized objects ([p. 1621]: “our method instead maps RGB values into a sequence of point clouds to aggregate 7D frustum (XY ZRGBT) from camera and LiDAR”; [p. 1624]: “basing detection on a sequence of camera images and 3D point clouds”; “estimate 3D bounding boxes in 7D point clouds”); and
a second fusion module, being in signal communication with the first fusion module, the second fusion module being configured to perform a second fusion process on the first fusion information and 2D radar information to obtain second fusion information containing the recognized objects ([p. 1621]: “a radar feature map within the 2D ROI is extracted and concatenated to the colored point cloud features”; [p. 1624]: “a radar cluster is generated … which is able to deliver target points with 2D coordinates”; [p. 1625]: “8D radar PointNet”; “four box parameters”),
… generate a region of interest (ROI) ([p. 1621]: “2D ROI”); the recognized objects inside the region of interest are used as a plurality of target objects … ([p. 1624]: “3D bounding boxes”; [p. 1625]: “bounding box estimation”).
Wang does not explicitly teach: using second fusion information to generate a ROI, or subsequent detection and tracking.
Lin teaches: wherein the second fusion information is used to generate a region of interest (ROI) (Lin [p. 6]: “early fusion process”; “determine the ROIs”; [p. 12]: “Radar and Camera Data Fusion”; [p. 13]: “Dynamic ROI”); the recognized objects inside the region of interest are used as a plurality of target objects of subsequent detection and tracking ([p. 3]: “This paper focuses on the early fusion of the mmWave radar and camera sensors for object detection and tracking”; [p. 14]: “track the bounding boxes generated by the object detection model”).
It would have been obvious to one of ordinary skill in the art to modify Wang and use the second fusion information to generate a ROI, and to perform subsequent detection and tracking of recognized objects, as taught by Lin. Wang already teaches generating ROIs and teaches that the outputs may be used for tracking. Generating ROIs from radar fusion data and tracking objects in the ROI, as taught by Lin, is beneficial for improving detection and tracking of objects (Lin [p. 20]).
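For illustration of the mapped subject matter only, and not as a characterization of Wang's or Lin's actual implementations: the following is a minimal Python sketch of a two-stage camera/LiDAR/radar fusion with ROI generation. All function names, array layouts, and thresholds are hypothetical assumptions.

    # Illustrative sketch only; names and data layouts are hypothetical,
    # not taken from Wang, Lin, or the claims.
    import numpy as np

    def first_fusion(image_rgb, lidar_xyz, proj):
        """First fusion: paint each 3D LiDAR point with the RGB value at its
        projected pixel in the 2D driving image (cf. Wang's colored point cloud)."""
        hom = np.c_[lidar_xyz, np.ones(len(lidar_xyz))]      # N x 4 homogeneous
        uv = (proj @ hom.T).T                                # 3x4 camera projection
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
        keep = ((uv[:, 0] >= 0) & (uv[:, 0] < image_rgb.shape[1]) &
                (uv[:, 1] >= 0) & (uv[:, 1] < image_rgb.shape[0]))
        colors = image_rgb[uv[keep, 1], uv[keep, 0]]
        return np.c_[lidar_xyz[keep], colors]                # "first fusion information"

    def second_fusion(fused_pts, radar_xy, radius=2.0):
        """Second fusion: append, per point, the nearest 2D radar return within
        an assumed association radius (meters, bird's-eye view)."""
        feats = np.zeros((len(fused_pts), 2))
        for i, p in enumerate(fused_pts[:, :2]):
            d = np.linalg.norm(radar_xy - p, axis=1)
            j = d.argmin()
            if d[j] < radius:
                feats[i] = radar_xy[j]
        return np.c_[fused_pts, feats]                       # "second fusion information"

    def roi_from_fusion(fused, margin=1.0):
        """Derive a bird's-eye ROI box around radar-associated points; recognized
        objects inside it become the target objects for detection and tracking."""
        hits = fused[np.any(fused[:, -2:] != 0, axis=1)][:, :2]
        if len(hits) == 0:
            return None
        return hits.min(axis=0) - margin, hits.max(axis=0) + margin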
Regarding Claim 7, Wang teaches: A sensor fusion and object tracking method, comprising steps:
receiving a 2D driving image and 3D point cloud information and performing a first fusion process on the 2D driving image and the 3D point cloud information by a first fusion module to obtain first fusion information containing a plurality of recognized objects ([p. 1621]: “our method instead maps RGB values into a sequence of point clouds to aggregate 7D frustum (XY ZRGBT) from camera and LiDAR”; [p. 1624]: “basing detection on a sequence of camera images and 3D point clouds”; “estimate 3D bounding boxes in 7D point clouds”); and
receiving 2D radar information and performing a second fusion process on the first fusion information and the 2D radar information by a second fusion module to obtain second fusion information containing the recognized objects ([p. 1621]: “a radar feature map within the 2D ROI is extracted and concatenated to the colored point cloud features”; [p. 1624]: “a radar cluster is generated … which is able to deliver target points with 2D coordinates”; [p. 1625]: “8D radar PointNet”; “four box parameters”),
… generate a region of interest ([p. 1621]: “2D ROI”), the recognized objects inside the region of interest is identified as a plurality of target objects … ([p. 1624]: “3D bounding boxes”; [p. 1625]: “bounding box estimation”).
Wang does not explicitly teach: using second fusion information to generate a ROI, or subsequent detection and tracking.
Lin teaches: wherein the second fusion information is used to generate a region of interest (Lin [p. 6]: “early fusion process”; “determine the ROIs”; [p. 12]: “Radar and Camera Data Fusion”; [p. 13]: “Dynamic ROI”), the recognized objects inside the region of interest is identified as a plurality of target objects for subsequent tracking ([p. 3]: “This paper focuses on the early fusion of the mmWave radar and camera sensors for object detection and tracking”; [p. 14]: “track the bounding boxes generated by the object detection model”).
The rationale to modify Wang with the teachings of Lin set forth above for Claim 1 applies equally to Claim 7.
Regarding Claims 2 and 8, Wang teaches: the system further comprising
… generating the region of interest … ([p. 1621]: “2D ROI”), and identifying the recognized objects inside the region of interest as the target objects ([p. 1621]: “a radar feature map within the 2D ROI is extracted and concatenated to the colored point cloud features”).
Wang does not explicitly teach: an object tracking module being in signal communication with the second fusion module, receiving the second fusion information, and generating the region of interest within a field of view (FOV) of the 2D radar information according to the second fusion information.
Lin teaches: an object tracking module being in signal communication with the second fusion module ([p. 13]: “Dynamic ROI”; [p. 14]: “Tracking”), receiving the second fusion information ([p. 14]: “We select the bounding boxes as the input of the trackers”), generating the region of interest within a field of view (FOV) of the 2D radar information according to the second fusion information ([p. 13]: “Dynamic ROI”; [p. 16]: “the mmWave radar used in this work only detects around 50 m in a given field”; Figure 20), and identifying the recognized objects inside the region of interest as the target objects ([p. 14]: “We select the bounding boxes as the input of the trackers”).
It would have been obvious to one of ordinary skill in the art to modify Wang and include an object tracking module that receives the second fusion information, generates a ROI within an FOV of the radar information, and identifies objects in the ROI, as taught by Lin. Using fusion information to generate a ROI within an FOV of a sensor and identifying objects in the ROI is considered ordinary and well-known in the art. Modifying Wang to generate a ROI and identify objects in the ROI as taught by Lin is beneficial for improving detection and tracking of objects (Lin [p. 20]).
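For illustration only: a minimal sketch, under the same hypothetical conventions as the sketch above, of clipping target selection to the radar field of view before identifying objects in the ROI. The 50 m range is taken from Lin; the azimuth half-angle is an assumed value.

    # Illustrative sketch only; the azimuth limit is an assumption.
    import numpy as np

    RADAR_MAX_RANGE_M = 50.0                   # cf. Lin: detects "around 50 m"
    RADAR_HALF_FOV_RAD = np.deg2rad(60.0)      # assumed azimuth half-angle

    def in_radar_fov(xy):
        """True for bird's-eye points inside the radar range/azimuth envelope."""
        rng = np.linalg.norm(xy, axis=1)
        az = np.arctan2(xy[:, 1], xy[:, 0])
        return (rng <= RADAR_MAX_RANGE_M) & (np.abs(az) <= RADAR_HALF_FOV_RAD)

    def select_targets(detections_xy, roi):
        """Identify recognized objects inside both the ROI and the radar FOV."""
        lo, hi = roi
        inside_roi = np.all((detections_xy >= lo) & (detections_xy <= hi), axis=1)
        return detections_xy[inside_roi & in_radar_fov(detections_xy)]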
Regarding Claims 3 and 9, Wang does not explicitly teach – but Lin teaches: wherein the object tracking module is further configured to:
perform a centroid tracking algorithm on the target objects inside the region of interest to generate a target centroid coordinates of each of the target objects (Lin [p. 5]: “DBSCAN”; “center point”; [p. 14]: “bounding box”);
perform a Kalman filter on the target centroid coordinates of each of the target objects to obtain observed information of each of the target objects and then calculate predicted information based on the observed information (Lin [p. 14]: “we adopt the Kalman filter to implement the tracking”); and
track the target objects according to the observed information and the predicted information (Lin [p. 14]: “Tracking”; “Kalman filter”).
It would have been obvious to one of ordinary skill in the art to modify Wang and generate centroid coordinates of each target object, and perform Kalman filter tracking of the target objects, as taught by Lin. Calculating centroid coordinates and using a Kalman filter for tracking are considered ordinary and well-known in the art, and both are beneficial for improving target tracking (Lin [p. 20]).
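For illustration only: a minimal sketch of centroid extraction followed by a constant-velocity Kalman filter, as a generic instance of the steps mapped for Claims 3 and 9. The state model and noise covariances are assumptions; Lin's actual tracker details may differ.

    # Illustrative sketch only; dt and the noise covariances are assumed.
    import numpy as np

    def centroid(box):
        """Target centroid coordinates from a bounding box (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = box
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    class CentroidKalman:
        """Constant-velocity filter over state [x, y, vx, vy]."""
        def __init__(self, z0, dt=0.1):
            self.x = np.array([z0[0], z0[1], 0.0, 0.0])
            self.P = np.eye(4)
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
            self.H = np.eye(2, 4)              # observe position only
            self.Q = 0.01 * np.eye(4)          # process noise (assumed)
            self.R = 0.10 * np.eye(2)          # measurement noise (assumed)

        def predict(self):
            """Propagate the state; returns the predicted centroid."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]                  # "predicted information"

        def update(self, z):
            """Correct with an observed centroid z; returns the filtered centroid."""
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]                  # "observed information", filtered

Tracking as mapped above then amounts to calling predict() each frame and update() whenever an observed centroid is associated with the track.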
Regarding Claims 5 and 11, Wang teaches: wherein the first fusion information is space information of external environment ([p. 1621]: “environment perception”).
Regarding Claims 6 and 12, Wang teaches: wherein the first fusion module is further configured to perform feature extraction using a neural network on the 2D driving image and the 3D point cloud information to obtain a plurality of characteristic points ([p. 1621]: “a high dimensional convolution operator captures local features from a point cloud enhanced with color and temporal information”; [p. 1625]: “four sets of abstraction (SA) layers and four feature propagation layers”).
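For illustration only: a toy shared-MLP feature extractor over fused points, standing in for (and much simpler than) Wang's set-abstraction layers; the weights and layer sizes are placeholders.

    # Illustrative sketch only; random weights stand in for a trained network.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = 0.1 * rng.normal(size=(6, 64)), np.zeros(64)
    W2, b2 = 0.1 * rng.normal(size=(64, 128)), np.zeros(128)

    def extract_features(fused_pts):
        """fused_pts: N x 6 (XYZ + RGB); returns one 128-dim descriptor per
        point, i.e., a plurality of "characteristic points"."""
        h = np.maximum(fused_pts @ W1 + b1, 0.0)   # shared MLP layer + ReLU
        return np.maximum(h @ W2 + b2, 0.0)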
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (Wang et al., “High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar,” 2020) in view of Lin (Lin et al., “Deep Learning Derived Object Detection and Tracking Technology Based on Sensor Fusion of Millimeter-Wave Radar/Video and Its Application on Embedded Systems,” March 2, 2023), as applied to Claims 1 and 7 above, and further in view of Wu (US 2023/0324509).
Regarding Claims 4 and 10, Wang does not explicitly teach – but Wu teaches: wherein the second fusion module performs a high-pass filtering process and a low-pass filtering process to filter out noise of the second fusion information (Wu [0027]: “high-pass filter”; “low pass filter”).
It would have been obvious to one of ordinary skill in the art to modify Wang and perform high-pass and low-pass filtering, as taught by Wu. High-pass and low-pass filtering are considered ordinary and well-known in the art, and noise removal is a well-known benefit of both high-pass and low-pass filtering.
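For illustration only: a minimal sketch of cascaded low-pass and high-pass filtering to denoise a fused signal, as a generic instance of the filtering in the Claims 4/10 mapping. The sample rate and cutoff frequencies are assumed values, and SciPy's butter/filtfilt is one conventional way to realize such filters, not necessarily Wu's.

    # Illustrative sketch only; fs and the cutoffs are assumed values.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def denoise(signal, fs=20.0, low_cut=0.2, high_cut=5.0):
        """Low-pass removes high-frequency jitter; high-pass removes slow drift."""
        b_lo, a_lo = butter(2, high_cut / (fs / 2), btype="low")
        b_hi, a_hi = butter(2, low_cut / (fs / 2), btype="high")
        smoothed = filtfilt(b_lo, a_lo, signal)    # low-pass stage
        return filtfilt(b_hi, a_hi, smoothed)      # high-pass stage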
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOAH Y. ZHU whose telephone number is (571)270-0170. The examiner can normally be reached Monday-Friday, 8AM-4PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William J. Kelleher can be reached on (571) 272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NOAH YI MIN ZHU/Examiner, Art Unit 3648
/William Kelleher/Supervisory Patent Examiner, Art Unit 3648