Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al., US20210333406A1 ("Lee"), in view of Peake et al., US20200074266A1 ("Peake").
Regarding claim 1, LEE discloses an apparatus (LEE, Abstract) comprising: a light detection and ranging (LiDAR) data collector (LEE, claim 1; ¶ 46).
It is noted that LEE is silent about a LiDAR data collector installed on a vehicle configured to be driven in a real environment to collect LiDAR data; a database configured to store the collected LiDAR data; a virtual environment generation engine configured to generate a virtual environment; and a LiDAR data generator configured to generate virtual LiDAR data corresponding to a movement of a virtual vehicle in the generated virtual environment, as claimed.
However, PEAKE discloses a LiDAR data collector (PEAKE, ¶ 39) installed on a vehicle configured to be driven in a real environment to collect LiDAR data (PEAKE, ¶ [0040], i.e., "In still further implementations, the virtual environment may be at least partially generated based on geo-spatial data. Such geo-spatial data may be sourced from predefined or existing images or other geo-spatial data (e.g., height maps or geo-spatial semantic data such as road versus terrain versus building data) as retrieved from remote sources (e.g., Mapbox images, Google Maps images, etc.). For example, the geo-spatial data may be used as a starting point to construct detailed representations of roads, lanes for the roads, and/or other objects or surfaces within the virtual environment. If previously collected image or depth data is available for a particular region of the virtual environment, then the system also can use real-world lidar data, and/or use techniques such as SLAM or photogrammetry to construct the virtual environment to provide additional real-world detail not specified by the m …"); a database configured to store the collected LiDAR data (PEAKE, ¶ 49, i.e., "… In some embodiments, the memory(s) may store information or other data as described herein in a database (e.g., a relational database, such as Oracle, DB2, MySQL, or a NoSQL based database, such as MongoDB). The data stored in memory 152 may include all or part of any of the data or information described herein, including, for example, the photo-realistic scenes, the depth-map-realistic scenes, the environment-object data, feature training dataset(s), or other information or scenes as described herein."); a virtual environment generation engine configured to generate a virtual environment (PEAKE, ¶ 7); and a LiDAR data generator configured to generate virtual LiDAR data corresponding to a movement of a virtual vehicle in the generated virtual environment (PEAKE, ¶ 11).
Both LEE and PEAKE teach systems with LiDAR, and those systems are comparable to that of the instant application. Because the two cited references are analogous art to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the LEE disclosure a database associated with a virtual reality map, as taught by PEAKE. Such inclusion would have increased the usefulness of the system by generating feature training datasets for use in real-world autonomous driving applications, and would have been consistent with the rationale of combining prior art elements according to known methods to yield predictable results to establish a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding claim 2, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 1, wherein the LiDAR data collector comprises a laser light emitter and a laser light receiver (PEAKE, ¶ 39), and the collected LiDAR data comprises intensity information of laser light that is emitted from the laser light emitter and reflected off from an object (LEE, ¶ 51).
Regarding claim 3, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 2, wherein the intensity information is stored in association with corresponding coordinate information of a point where the laser light is reflected (PEAKE, ¶ 72).
Regarding claim 4, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 1, further comprising a point cloud map generator configured to generate a point cloud map based on the collected LiDAR data (PEAKE, ¶ 50).
Regarding claim 5, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 4, wherein the point cloud map comprises the collected LiDAR data that is stored in association with corresponding coordinates on the virtual environment (PEAKE, ¶ 104).
Regarding claim 6, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 5, further comprising a mesh converter configured to convert the point cloud map into a three-dimensional (3D) mesh map (see point cloud citation above).
Regarding claim 7, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 6, wherein the LiDAR data generator is configured to: generate virtual laser light emitted to a predetermined mesh of the converted 3D mesh map; calculate virtual intensity of laser light associated with reflection of the emitted virtual laser light; and generate the virtual LiDAR data (PEAKE, ¶ 83).
Regarding claim 8, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 7, wherein the virtual intensity is calculated based on intensity information associated with a plurality of coordinates forming the predetermined mesh (PEAKE, see the mesh citation above).
Regarding claim 9, LEE/PEAKE, for the same motivation of combination, further discloses the apparatus according to claim 8, wherein the virtual intensity is calculated as an average of the intensity information associated with the plurality of coordinates (PEAKE, see intensity citation above).
Regarding claim 10, LEE/PEAKE, for the same motivation of combination, discloses a method performed by an apparatus, the method comprising: receiving, from a light detection and ranging (LiDAR) data collector installed on a vehicle configured to be driven in a real environment, collected LiDAR data; storing the collected LiDAR data in a database; generating, by using a virtual environment generation engine, a virtual environment; and generating virtual LiDAR data corresponding to a movement of a virtual vehicle on the generated virtual environment (see rejection of claim 1).
Regarding claim 11, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 10, wherein the collected LiDAR data comprises intensity information of laser light that is emitted from a laser light emitter and reflected off from an object (see LiDAR citation above).
Regarding claim 12, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 11, wherein the intensity information is stored in association with corresponding coordinate information of a point where the laser light is reflected (see the point citation above).
Regarding claim 13, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 10, further comprising: generating a point cloud map based on the collected LiDAR data (see point cloud citation above).
Regarding claim 14, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 13, wherein the point cloud map comprises the collected LiDAR data that is stored in association with corresponding coordinates on the virtual environment (see virtual citation above).
Regarding claim 15, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 14, further comprising: converting the point cloud map into a three-dimensional (3D) mesh map (see mesh citation above).
Regarding claim 16, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 15, wherein the generating the virtual LiDAR data comprises: generating virtual laser light emitted to a predetermined mesh of the converted 3D mesh map; calculating virtual intensity of laser light associated with reflection of the emitted virtual laser light; and generating the virtual LiDAR data (see virtual LiDAR citation above).
Regarding claim 17, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 16, wherein the virtual intensity is calculated based on intensity information associated with a plurality of coordinates forming the predetermined mesh (see mesh citation above).
Regarding claim 18, LEE/PEAKE, for the same motivation of combination, further discloses the method according to claim 17, wherein the virtual intensity is calculated as an average of the intensity information associated with the plurality of coordinates (see virtual citation above).
Regarding claim 19, LEE/PEAKE, for the same motivation of combination, discloses an apparatus comprising: a receiver configured to receive light detection and ranging (LiDAR) data from a vehicle configured to be driven in a real environment to collect the LiDAR data; a database configured to store the LiDAR data; a virtual environment generation engine configured to generate a virtual environment; and a LiDAR data generator configured to generate virtual LiDAR data corresponding to a movement of a virtual vehicle in the generated virtual environment (see rejection of claim 1).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 11579252 B2 Sensor-cooling apparatus
US 11331805 B2 Motion restriction system and method
US 11327503 B2 Surveillance prevention by mobile robot
US 11327483 B2 Image capture devices for autonomous mobile robots and related systems and methods
US 11298835 B2 Robot
US 11274929 B1 Method for constructing a map while performing work
US 20210333406 A1 LIDAR APPARATUS FOR VEHICLE
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK F HUANG whose telephone number is (571)272-0701. The examiner can normally be reached Monday-Friday, 8:30 am - 6:00 pm (Eastern Time), Federal Alternative First Friday Off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at (571)272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK F HUANG/Primary Examiner, Art Unit 2485