DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-30 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 10-15, and 21-26 are rejected under 35 U.S.C. 103 as being unpatentable over Fagg et al. (US 2020/0250837) in view of Zhang et al. (US 2018/0188027) and further in view of Zeng (US 2019/0156507).
Regarding claims 1, 12 and 23, Fagg teaches an apparatus for shape estimation, comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to:
detect first features of an object in a first frame of an environment, the environment including the object;
(Fagg, “The method includes identifying a set of corresponding image features from the image data, the set of corresponding image features including a first feature in the first image”, [abstract]; “The image frame can be processed to extract, for example, information describing the location of objects within the scene of the surrounding environment. The image data can also include a capture time of the image frame... the sensor system can collect the image data as a plurality of consecutive image frames”, [0023]; feature detection and extraction in a first frame using various techniques)
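For illustration only, the following is a minimal sketch of the kind of per-frame feature detection and cross-frame feature matching described in the passages cited above. The detector choice (ORB), the synthetic input frames, and all identifiers are hypothetical and are not taken from Fagg or Zhang.

```python
import cv2
import numpy as np

# Two stand-in grayscale frames (random noise used only so the sketch runs).
frame_1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame_2 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame_1, None)   # "first features"
kp2, des2 = orb.detectAndCompute(frame_2, None)   # "second features"

# Correspondence between first-frame and second-frame features.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2) if des1 is not None and des2 is not None else []
print(f"{len(matches)} corresponding image features")
```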
determine a first set of three-dimensional (3D) points for the first frame based on the detected first features and first distance information obtained for the object;
(Fagg, “Point resolution unit 430 can be configured to determine point resolution data 504 based on state data 402 and matched-feature data 502. Point resolution unit 504 can include a range value associated with one or more pixels that correspond to an object”, [0064]; generating a set of 3D points from features, including image/range fusion; Zhang, more explicitly discloses: “The vehicle computing system determines 915, for each of the extracted features of the frame N, a 3D location of the feature. The 3D location of a feature indicates a position of the feature relative to the vehicle sensors on the vehicle. In some embodiments, the vehicle computing system determines the 3D location for each feature by triangulating the first and second feature points corresponding to the feature based upon their respective locations within the first and second images”, [0083])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zhang into the system or method of Fagg in order to integrate Zhang’s feature triangulation and mapping methods, thereby improving the accuracy and reliability of the 3D object point sets obtained from sensor data in Fagg and improving the overall shape estimation and feature-based object analysis capabilities in autonomous vehicle environments.
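Likewise for illustration only, the sketch below shows a standard linear (DLT) two-view triangulation of the kind Zhang describes at [0083], converting a feature matched across two images into a 3D location. The projection matrices and all names are assumptions of the sketch, not disclosures of the cited references.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature.
    P1, P2: 3x4 projection matrices for the two views; x1, x2: (u, v) pixels.
    Returns the feature's 3D location in the common (sensor/vehicle) frame."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical check: identity intrinsics, second view shifted 0.5 m along x,
# true point at (1, 0, 5); the triangulated result recovers that point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([1.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))   # -> approximately [1. 0. 5.]
```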
The combination of Fagg and Zhang further teaches:
detect second features of the object in a second frame of the environment;
(Fagg, “The method includes identifying a set of corresponding image features from the image data, the set of corresponding image features including a first feature in the first image having a correspondence with a second feature in the second image”, [abstract]; Zhang, “determines 920 feature correspondences between the extracted features of the frame N with extracted features of another image frame corresponding to a different point in time (e.g., a previous frame N-1 corresponding to a previous point in time N-1)... identifies a first set of features of the frame N, and identifies a corresponding second set of features of the frame N-1”, [0084]; detection of features in a second frame)
determine a second set of 3D points for the second frame based on the detected second features and second distance information obtained for the object;
(Zhang, “The vehicle computing system determines 915, for each of the extracted features of the frame N, a 3D location of the feature. The 3D location of a feature indicates a position of the feature relative to the vehicle sensors on the vehicle. In some embodiments, the vehicle computing system determines the 3D location for each feature by triangulating the first and second feature points corresponding to the feature based upon their respective locations within the first and second images”, [0083]; determine 3D points of frame N, N = 1, 2, ...)
combine the first set of 3D points and the second set of 3D points to generate a combined set of 3D points; and
(Fagg, “identifying a set of corresponding image features from the image data, the set of corresponding image features including a first feature in the first image having a correspondence with a second feature in the second image. The operations include determining a respective distance for each of the first feature in the first image and the second feature in the second image based at least in part on the range data”, [0007]; matching features between frames and combining the corresponding point sets as a unified data structure for tracking and further analysis)
The combination of Fagg and Zhang does not expressly disclose the following limitation, which Zeng teaches:
estimate a shape of the object based on the combined set of 3D points.
(Zeng, “It is assumed that N (greater than 2) frames of object point cloud data exist. As shown in FIG. 4D, merging object point cloud data of a first frame and a second frame is used as an example. The object point cloud data in the first and second frame is analyzed to detect a set of feature points (a feature point set) for each respective frame. The feature points may be described by using feature vectors. Feature points in a feature point set 1 and feature points in a feature point set 2 are matched to obtain a same feature point set, denoted as feature points 1. A shape s1 of the feature points 1 in the first frame of the point cloud data and a shape s2 of the feature points 1 in the second frame of the point cloud data have the following relationship: s1=s2*f, where f represents an adjustment of a size and rotation in an angle”, [0081]; feature point set 1 and feature point set 2 (both are 3D points from two different frames) are merged to obtain the merged “feature points 1” (3D points), which has a shape s1 in the first frame and a shape s2 in the second frame, related by s1 = s2*f, where s1, s2 and f are in general matrices; the combined shape of the merged “feature points 1” then follows directly: s(1,2) = s1 + s2 = (I + f)*s2, where I is the identity matrix)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zeng into the combined system or method of Fagg and Zhang in order to obtain a combined shape of a merged point cloud from two different frames for object tracking.
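As a purely illustrative sketch of the reasoning above, the code below estimates a size-and-rotation adjustment between two per-frame feature point sets (playing the role of Zeng's factor f), aligns and merges the two clouds, and reports the axis-aligned extent of the merged cloud as a simple shape estimate. The closed-form alignment (Umeyama/Kabsch) and the bounding-box shape representation are assumptions chosen for the sketch, not the method of any cited reference.

```python
import numpy as np

def estimate_adjustment(src, dst):
    """Estimate scale s, rotation R and translation t so that dst ≈ s*R@src + t,
    i.e. an 'adjustment of a size and rotation in an angle' between matched
    3D feature point sets. src, dst: (N, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)        # Kabsch/Umeyama step
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    s = float(np.sum(S * np.diag(D))) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def merge_and_estimate_shape(points_f1, points_f2, idx1, idx2):
    """Align frame-2 points onto frame 1 using matched feature indices, merge
    the clouds, and report the axis-aligned extent as a simple shape estimate."""
    s, R, t = estimate_adjustment(points_f2[idx2], points_f1[idx1])
    aligned_f2 = (s * (R @ points_f2.T)).T + t
    merged = np.vstack([points_f1, aligned_f2])
    extent = merged.max(axis=0) - merged.min(axis=0)  # e.g. length/width/height
    return merged, extent

# Hypothetical usage: frame-2 points are a scaled copy of frame-1 points.
pts1 = np.random.rand(40, 3) * 4.0
pts2 = pts1 * 1.1                       # simulated size adjustment f
merged, extent = merge_and_estimate_shape(pts1, pts2, np.arange(40), np.arange(40))
print(extent)
```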
Regarding claims 2, 13 and 24, the combination of Fagg, Zhang and Zeng teaches the limitations of the respective base claims.
The combination further teaches the apparatus of claim 1, wherein 3D points for the object are combined for a predetermined number of frames to perform shape estimation of the object.
(Zeng, see comments on claim 1)
Regarding claims 3, 14 and 25, the combination of Fagg, Zhang and Zeng teaches the limitations of the respective base claims.
The combination further teaches the apparatus of claim 1, wherein 3D points for the object are combined for a predetermined number of 3D points to perform shape estimation of the object.
(Zeng, see comments on claim 1; merge point clouds from N frames)
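For illustration of combining 3D points until a predetermined number of frames or a predetermined number of points is reached before shape estimation (the claim 2 and claim 3 type limitations addressed above), a minimal accumulator sketch follows; the threshold values and all names are hypothetical.

```python
import numpy as np

class ObjectPointAccumulator:
    """Collect per-frame 3D point sets for a tracked object and signal when a
    predetermined number of frames or a predetermined number of points has
    been reached (hypothetical thresholds, chosen only for illustration)."""

    def __init__(self, max_frames=5, max_points=2000):
        self.max_frames = max_frames
        self.max_points = max_points
        self.frames = []

    def add_frame(self, points_3d):
        """Add one frame's 3D points; return True once shape estimation
        should be triggered."""
        self.frames.append(np.asarray(points_3d, dtype=float))
        n_points = sum(len(f) for f in self.frames)
        return len(self.frames) >= self.max_frames or n_points >= self.max_points

    def combined_points(self):
        """Combined set of 3D points across the accumulated frames."""
        return np.vstack(self.frames)
```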
Regarding claims 4, 15 and 26, the combination of Fagg, Zhang and Zeng teaches the limitations of the respective base claims.
The combination further teaches the apparatus of claim 1, wherein the at least one processor is further configured to determine a velocity of the object based on the first set of 3D points and the second set of 3D points.
(Fagg, Figs. 1 and 5-6; “determining a velocity associated with a portion of a scene represented by the set of corresponding image features based at least in part on the respective distance for the first feature and the second feature”, [abstract]; calculating object velocity from a series of matched 3D point sets over time)
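A minimal sketch of deriving an object velocity from matched 3D point sets captured at two times, as in the velocity determination cited above; the feature-wise row correspondence and all names are assumptions of the sketch.

```python
import numpy as np

def object_velocity(points_t1, points_t2, t1, t2):
    """Estimate an object's velocity from two matched 3D point sets captured
    at times t1 and t2 (rows assumed to correspond feature-by-feature).
    Returns a 3D velocity vector."""
    p1 = np.asarray(points_t1, dtype=float)
    p2 = np.asarray(points_t2, dtype=float)
    return (p2 - p1).mean(axis=0) / (t2 - t1)

# Hypothetical usage: an object that moved 1.5 m along x in 0.1 s -> ~15 m/s.
pts_a = np.random.rand(30, 3)
pts_b = pts_a + np.array([1.5, 0.0, 0.0])
print(object_velocity(pts_a, pts_b, 0.0, 0.1))
```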
Regarding claims 10 and 21, the combination of Fagg, Zhang and Zeng teaches the limitations of the respective base claims.
The combination further teaches the apparatus of claim 1, wherein the at least one processor is further configured to:
determine a set of outlier 3D points of the first set of 3D points and the second set of 3D points based on a distance between an outlier 3D point and a neighboring point; and
remove the set of outlier 3D points.
(Zeng, “screen out point cloud data corresponding to three-dimensional points that do not conform to the fitted three-dimensional curve, to further reduce noise in the extracted point cloud data of the road facility”, [0167]; “curve fitting is performed on the extracted guardrail point cloud data by using a three-dimensional curve fitting method, to obtain final three-dimensional curve data of the road guardrail, such as shown in FIG. 5E”, [0141]; identifying/removing outlier 3D points based on spatial relationships)
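A minimal sketch of removing outlier 3D points based on the distance to a neighboring point, of the kind relied on above; the distance threshold and the brute-force nearest-neighbour search are assumptions made for the sketch.

```python
import numpy as np

def remove_outlier_points(points, max_neighbor_dist=0.5):
    """Split a 3D point set into inliers and outliers based on the distance
    to each point's nearest neighbour (the 0.5 m threshold is hypothetical)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances; mask out self-distances on the diagonal.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nearest = dists.min(axis=1)
    keep = nearest <= max_neighbor_dist
    return pts[keep], pts[~keep]   # (retained points, removed outlier points)
```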
Regarding claims 11 and 22, the combination of Fagg, Zhang and Zeng teaches the limitations of the respective base claims.
The combination further teaches the apparatus of claim 1, wherein the first distance comprises a first distance from the apparatus to the object and the second distance comprises a second distance from the apparatus to the object.
(Fagg, Fig. 6; “The operations include determining a respective distance for each of the first feature in the first image and the second feature in the second image based at least in part on the range data”, [0007]; “the range data can include a range value (e.g., distance) associated with one or more of the plurality of points”, [0024]; these distances are from the sensor/vehicle to the object in each frame)
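For illustration, a short sketch of deriving a per-frame distance from the apparatus (sensor origin) to the object from the object's 3D points follows; the choice of the minimum range is an assumption of the sketch.

```python
import numpy as np

def distance_to_object(object_points_3d):
    """Per-frame distance from the apparatus (sensor origin at [0, 0, 0]) to
    the object, taken here as the minimum range over the object's 3D points."""
    ranges = np.linalg.norm(np.asarray(object_points_3d, dtype=float), axis=1)
    return float(ranges.min())
```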
Allowable Subject Matter
Claims 5-9, 16-20, and 27-30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 5, 16, and 27 recite limitations related to estimating an object shape based on a velocity of the object. No explicit teaching of these limitations was found in the prior art cited in this Office action or in the prior art search.
Claims 6-9, 17-20, and 28-30 depend on claims 5, 16, and 27, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571)272-9874. The examiner can normally be reached on MON-FRI: 8AM-5PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662
1/14/2026