DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 3, 6-7, 10, 13, 16, and 19-20 are objected to because of the following informalities:
Regarding claims 3, 10, and 16, the phrase “one environmental mapping” in line 6 is a grammatical error.
Regarding claims 6, 13, and 19, the phrase “providing a the lidar feature” in line 5 is a grammatical error.
Regarding claims 7 and 20, the phrase “the outputted an enhanced” in lines 2 and 3 is a grammatical error.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 7-8, 14, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Poepperl et al. (DE 102019101127), hereinafter Poepperl.
In re. claim 1, Poepperl teaches a method for ultrasonic sensor reading enhancement using a lidar point cloud, the method comprising: receiving an ultrasonic sensor temporal feature using an ultrasonic sensor (ultrasound signals to sensors (21-26)) (fig. 3) (pg. 5, 2nd para.); inputting the ultrasonic sensor temporal feature into an autoencoder system comprising instructions stored in a memory (medium with instructions stored thereon) (pg. 4, 3rd to last para.) and executed by a processor (5); wherein the autoencoder system is trained (second neural network (7)) (fig. 4) using a prior inputted ultrasonic sensor temporal feature (21-26) and a corresponding prior inputted lidar feature label received from a lidar system (4) (pg. 6, 1st para.); and using the trained autoencoder system, outputting an enhanced ultrasonic sensor environmental mapping (3D point cloud that no longer requires lidar sensor (4)) (pg. 6, 3rd para.).
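For illustration only, the arrangement Poepperl is cited for above, an autoencoder fitted on pairs of prior ultrasonic temporal features and corresponding lidar-derived labels so that it can later emit an enhanced environmental mapping from ultrasonic input alone, can be sketched as follows. All identifiers, array shapes, and hyperparameters below are assumptions of this sketch, not disclosures of Poepperl or limitations of the claims.

    import torch
    import torch.nn as nn

    class UltrasonicAutoencoder(nn.Module):
        # Encoder-decoder: ultrasonic temporal feature in, environmental mapping out.
        def __init__(self, feature_len=64, map_len=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(feature_len, 32), nn.ReLU(), nn.Linear(32, 16))
            self.decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, map_len))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Prior inputted ultrasonic temporal features paired with the corresponding
    # lidar feature labels (synthetic placeholders here).
    us_features = torch.randn(500, 64)
    lidar_labels = torch.randn(500, 256)

    model = UltrasonicAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(us_features), lidar_labels)
        loss.backward()
        opt.step()

    # Once trained, an enhanced environmental mapping is produced from
    # ultrasonic input alone, with no lidar sensor in the loop.
    enhanced_map = model(torch.randn(1, 64))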
In re. claims 7 and 20, Poepperl teaches, at a vehicle control system, receiving the outputted enhanced ultrasonic sensor environmental mapping and directing operation of a vehicle (1) based on the outputted enhanced ultrasonic sensor environmental mapping (figs. 3-4).
In re. claim 8, Poepperl teaches a non-transitory computer-readable medium comprising instructions stored in a memory and executed by a processor to carry out steps for ultrasonic sensor reading enhancement using a lidar point cloud (pg. 4, 3rd to last para.), the steps comprising: receiving an ultrasonic sensor temporal feature using an ultrasonic sensor (ultrasound signals to sensors (21-26)) (fig. 3) (pg. 5, 2nd para.); inputting the ultrasonic sensor temporal feature into an autoencoder system comprising instructions stored in a memory (medium with instructions stored thereon) (pg. 4, 3rd to last para.) and executed by a processor (5); wherein the autoencoder system is trained (second neural network (7)) (fig. 4) using a prior inputted ultrasonic sensor temporal feature (21-26) and a corresponding prior inputted lidar feature label received from a lidar system (4) (pg. 6, 1st para.); and using the trained autoencoder system, outputting an enhanced ultrasonic sensor environmental mapping (3D point cloud that no longer requires lidar sensor (4)) (pg. 6, 3rd para.).
In re. claim 14, Poepperl teaches a system for ultrasonic sensor reading enhancement using a lidar point cloud, the system comprising: an ultrasonic sensor operable for generating an ultrasonic sensor temporal feature (21-26) (fig. 3) (pg. 5, 2nd para.); and an autoencoder system comprising instructions stored in a memory (medium with instructions stored thereon) (pg. 4, 3rd to last para.) and executed by a processor (5), the autoencoder system operable for receiving the ultrasonic sensor temporal feature from the ultrasonic sensor and outputting an enhanced ultrasonic sensor environmental mapping (3D point cloud that no longer requires lidar sensor (4)) (pg. 6, 3rd para.); wherein the autoencoder system is trained using a prior inputted ultrasonic sensor temporal feature (fig. 3) and a corresponding prior inputted lidar feature label (fig. 4) generated by a lidar system (4) (pg. 6, 1st para.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Poepperl as applied to respective claims above, and further in view of Kakinami et al. (US 2009/0208109), hereinafter Kakinami, and Kim (US 2020/0242820), hereinafter Kim.
In re. claims 2, 9, and 15, Poepperl teaches the ultrasonic sensor temporal feature comprises an environmental map with relatively more noise and the enhanced ultrasonic sensor environmental mapping comprises an environmental map with relatively less noise (depth map of first neural network after being trained by the second neural network) (pg. 6, 3rd para.).
Poepperl fails to disclose the first map is a 1D environmental map, and the second map is a 2D environmental map.
Kakinami teaches the first ultrasonic map is a 1D environmental map (fig. 12) (para [0081]-[0082]).
Therefore, it would have been prima facie obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Poepperl to incorporate the teachings of Kakinami to have the first map as a 1D environmental map, for the purpose of utilizing the teachings of the invention with one-dimensional ultrasonic sensors.
Kim teaches the second lidar map is a 2D environmental map (fig. 2) (para [0020]).
Therefore, it would have been prima facie obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Poepperl, as modified by Kakinami, to incorporate the teachings of Kim to have the second map as a 2D environmental map, for the purpose of comparing the data from the one-dimensional ultrasonic sensors.
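For illustration only, the dimensionalities at issue in this combination can be pictured as simple array shapes; the values below are placeholders, not data from either reference.

    import numpy as np

    # A 1D environmental map: one range value per ultrasonic reading.
    map_1d = np.array([1.8, 1.7, 1.9, 2.4])  # metres

    # A 2D environmental map: an occupancy grid over x and y.
    map_2d = np.zeros((200, 200), dtype=np.uint8)
    map_2d[120, 95] = 1  # one detected reflection point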
Claims 3-4, 10-11, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Poepperl as applied to respective claims above, and further in view of Zamiska et al. (US 11,520,332), hereinafter Zamiska.
In re. claims 3, 10, and 16, Poepperl teaches the prior inputted ultrasonic sensor temporal feature is formed by performing ultrasonic sensor data feature extraction (data points from sensors (21-26)), and, for each position, calculating a reflection point in an environment (of objects 2, 2’, 2”), thereby providing an environmental mapping (depth map for neural network (6)) (pg. 5, 5th para.).
Poepperl fails to disclose using inertial measurement unit data across N frames and a kinematic bicycle model to generate an ego vehicle trajectory, and for each position in the ego vehicle trajectory, the reflection point in the environment is based on a yaw angle and each ultrasonic sensor reading.
Zamiska teaches using inertial measurement unit data across N frames (inertial data (702) taken during movement) (col. 33, ln. 13-17) and a kinematic bicycle model (fig. 2) to generate an ego vehicle trajectory (trajectory (124)), and for each position in the ego vehicle trajectory, the reflection point in the environment is based on a yaw angle and each ultrasonic sensor reading (distance and angle) (col. 18, ln. 60-63) (fig. 2).
Therefore, it would have been prima facie obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Poepperl to incorporate the teachings of Zamiska to have the recited method of environmental mapping, for the purpose of utilizing known environmental mapping techniques, reducing research costs.
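For illustration only, the trajectory-and-reflection-point construction Zamiska is cited for can be sketched as below. The bicycle-model constants, the input format, and the omission of any sensor mounting offset are all assumptions of this sketch.

    import numpy as np

    def bicycle_step(x, y, yaw, v, steer, wheelbase=2.7, dt=0.05):
        # One kinematic-bicycle update of the ego pose.
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += v / wheelbase * np.tan(steer) * dt
        return x, y, yaw

    # Hypothetical IMU-derived (speed, steering angle) samples, each paired
    # with an ultrasonic range reading, across N frames.
    frames = [(5.0, 0.01, 1.8)] * 100

    x, y, yaw = 0.0, 0.0, 0.0
    reflection_points = []
    for v, steer, us_range in frames:
        x, y, yaw = bicycle_step(x, y, yaw, v, steer)
        # Reflection point placed from the yaw angle and the ultrasonic
        # reading at this position in the ego trajectory.
        reflection_points.append((x + us_range * np.cos(yaw),
                                  y + us_range * np.sin(yaw)))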
In re. claims 4, 11, and 17, Poepperl as modified by Zamiska (see Poepperl) teaches the data feature extraction further comprises, for a trajectory cut based on an ultrasonic sensor's field of view, using the environmental mapping from one ultrasonic sensor, as well as a same mapping from the lidar system (reference and actual points correlate in the first and second neural networks) (pg. 6, 3rd para.).
Claims 5-6, 12-13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Poepperl as applied to respective claims above, and further in view of Kim.
In re. claims 5, 12, and 18, Poepperl teaches the prior inputted lidar feature label is formed by performing lidar point cloud feature generation by filtering lidar points by a field of view of an ultrasonic sensor (reference and actual points correlate in first and second neural networks) (pg. 6, 3rd para.) (by field of view) (fig. 3).
Poepperl fails to disclose the points are filtered by height.
Kim teaches the lidar points are filtered by height (fig. 2).
Therefore, it would have been prima facie obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Poepperl to incorporate the teachings of Kim to have the lidar points filtered by height, for the predictable result of comparing the two readings using a two-dimensional comparison, reducing the processing power required as compared to three-dimensional analysis.
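For illustration only, filtering a lidar point cloud by an ultrasonic sensor's field of view (the Poepperl mapping above) and by height (the Kim modification) can be sketched as below; the field-of-view half-angle and the height band are assumptions of this sketch.

    import numpy as np

    points = np.random.uniform(-10.0, 10.0, size=(5000, 3))  # synthetic x, y, z cloud

    # Keep points inside the sensor's horizontal field of view
    # (here +/- 35 degrees about the sensor axis).
    angles = np.arctan2(points[:, 1], points[:, 0])
    in_fov = np.abs(angles) <= np.radians(35.0)

    # Keep points inside a height band comparable to what the ultrasonic
    # sensor can observe (band limits are placeholders).
    in_band = (points[:, 2] > -0.2) & (points[:, 2] < 1.0)

    filtered = points[in_fov & in_band]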
In re. claims 6, 13, and 19, Poepperl as modified by Kim (see Kim) teaches the method of claim 5, wherein the lidar point cloud feature generation further comprises finding closest lidar points to an ego vehicle by splitting the field of view of the ultrasonic sensor into angles (theta) centered at the ultrasonic sensor (fig. 3) and, within each angle, selecting a constant number of lidar points that are closest to the ego vehicle (point cloud) (fig. 2), wherein a third dimension of the selected points is discarded, thereby providing the lidar feature with a total number of the selected points that matches the inputted ultrasonic sensor temporal feature (project) (fig. 2).
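For illustration only, the angular binning and closest-point selection recited in these claims can be sketched as below. The bin count, the points-per-bin constant k, and the max-range sentinel are assumptions of this sketch; the input is taken after the third dimension has been discarded, and the fixed output length n_bins * k is what lets the lidar feature match the ultrasonic feature size.

    import numpy as np

    def lidar_feature(points_xy, fov_deg=70.0, n_bins=10, k=3):
        # Split the sensor field of view into angular bins and keep the k
        # points nearest the ego vehicle within each bin.
        angles = np.degrees(np.arctan2(points_xy[:, 1], points_xy[:, 0]))
        ranges = np.hypot(points_xy[:, 0], points_xy[:, 1])
        edges = np.linspace(-fov_deg / 2.0, fov_deg / 2.0, n_bins + 1)
        feature = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            nearest = np.sort(ranges[(angles >= lo) & (angles < hi)])[:k]
            # Pad sparse bins with a max-range sentinel so every bin
            # contributes exactly k values.
            feature.extend(np.pad(nearest, (0, k - len(nearest)), constant_values=10.0))
        return np.asarray(feature)  # fixed length: n_bins * k

    pts_xy = np.random.uniform(-8.0, 8.0, size=(2000, 2))  # z already discarded
    feat = lidar_feature(pts_xy)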
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher D. Hutchens whose telephone number is (571)270-5535. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kimberly Berona, can be reached at 571-272-6909. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Christopher D Hutchens/Primary Examiner, Art Unit 3647