Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Applicant's Arguments
The previous rejection is withdrawn, and Applicant's amendments are entered. Applicant's remarks are also entered into the record. A new search, necessitated by Applicant's amendments, was performed.
A new reference was found, and a new ground of rejection is made herein.
Applicant's arguments are now moot in view of the new rejection of the claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of United States Patent Application Pub. No. US20240045426A1 to DITTY, filed in 2017 and assigned to NVIDIA™, further in view of United States Patent Application Pub. No. US20160179093A1 to PROKHOROV, filed in 2014, and further in view of United States Patent Application Pub. No. US20150331111A1 to NEWMAN, filed in 2012.
[Image: media_image3.png, greyscale, 820 x 598]
Zhu discloses "...1. A light detection and ranging (LIDAR) sensor system for a vehicle,
comprising:
one or more scanning optics configured to output a first transmit signal and a second transmit signal; (see FIG. 4a, where the lidar has a mirror and a spinning configuration and provides a first lidar beam 421a, a second beam 422a, a third beam 423a, and a fourth beam 430, and see paragraphs 38-41)
and one or more processors configured to: (see claims 1-12)
receive a first return signal from reflection of the first transmit signal by a first object; (see paragraphs 32 and 38-41, where the return signal of the first through fourth beams can provide a range to the vehicle and the object surfaces)
receive a second return signal from reflection of the second transmit signal by a second object; (see paragraphs 32 and 38-41, where the return signal of the first through fourth beams can provide a range to the vehicle and the object surfaces)
retrieve a first predetermined position of the first object and a second predetermined position of the second object; (see paragraphs 32-42, where the range can be 10-200 meters to (1) detect and (2) track movements of pedestrians, bikes, objects, and other vehicles in the road)
[Image: media_image4.png, greyscale, 850 x 1204]
determine a position of the vehicle based on the first return signal, the second return signal, (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle) the first predetermined position, and the second predetermined position; and provide (see paragraphs 32-42, where the range can be 10-200 meters to (1) detect and (2) track movements of pedestrians, bikes, objects, and other vehicles in the road)
a control signal to a control system of the vehicle for the control system to control operation of the vehicle based on the position. (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle)".
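By way of illustration only, the following minimal Python sketch (examiner-provided; not code from Zhu; all names and values are hypothetical) shows how a vehicle position consistent with two range returns and two surveyed object positions can be computed as a two-circle intersection.

```python
# Illustrative sketch only (not from Zhu): two-circle intersection.
import math

def localize_from_two_ranges(p1, r1, p2, r2):
    """Return the candidate vehicle positions at range r1 from surveyed
    object p1 and range r2 from surveyed object p2 (0, 1, or 2 points)."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)              # distance between objects
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                  # ranges are inconsistent
    a = (r1**2 - r2**2 + d**2) / (2 * d)           # offset along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))          # offset off the baseline
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    if h == 0.0:
        return [(xm, ym)]
    # Two mirror-image intersections of the two range circles:
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

# Objects surveyed at (0, 0) and (10, 0); measured ranges 6 m and 8 m:
print(localize_from_two_ranges((0.0, 0.0), 6.0, (10.0, 0.0), 8.0))
# -> [(3.6, -4.8), (3.6, 4.8)]; a second cue (e.g., heading) picks one.
```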
[Image: media_image5.png, greyscale, 914 x 1356]
Zhu is silent, but Ditty teaches "...receive from a database a first predetermined position of the first object and a second predetermined position of the second object". (see FIG. 42, where the cloud includes an HD map, a cloud lane graph, and drive-matching and tiling functions for calibrating the telemetry of the vehicle's camera, lidar, radar, and IMU with the cloud data in block 3022; see paragraphs 322-340, where the vehicle can interface with a supervisor computing system that can provide supervisory control and automatic braking of the vehicle; and see FIG. 12, where the supervisor can detect a sign, parse the sign as flashing lights indicating an ice condition, and then provide those instructions to the autonomous vehicle)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of DITTY with the disclosure of ZHU, with a reasonable expectation of success, since DITTY teaches that an autonomous vehicle can include a map perception unit 3016 that can interface with a cloud for calibration. Cloud mapping 3022 may receive inputs from a number of different contemporaneously and/or previously operating autonomous vehicles and other information sources. The cloud mapping 3022 provides mapping outputs that are localized by localization 3026 based, e.g., upon the particular location of the ego-vehicle, with the localized output used to help generate and/or update the world model 3002. The world model 3002 so developed and maintained in real time is used for autonomous vehicle planning 3004, control 3006, and actuation 3008. Thus, a route from an original location to a destination can be derived around a hazard, and the server can assist the autonomous vehicle by sharing its database and can provide automatic steering or braking to improve safety. See paragraphs 489-500 of Ditty.
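By way of illustration only, the following minimal Python sketch (examiner-provided; not Ditty's implementation; MapClient, WorldModel, and all values are hypothetical) shows the relied-upon data flow: a cloud map service returns surveyed object positions near the ego-vehicle, which update a local world model used for planning and control.

```python
# Illustrative sketch only (not Ditty's implementation); MapClient and
# WorldModel are hypothetical names.
from dataclasses import dataclass, field

@dataclass
class MapClient:
    surveyed_objects: dict          # object id -> (x, y) map-frame position

    def query_near(self, position, radius):
        """Return the surveyed objects within `radius` meters of `position`."""
        px, py = position
        return {oid: (x, y) for oid, (x, y) in self.surveyed_objects.items()
                if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2}

@dataclass
class WorldModel:
    landmarks: dict = field(default_factory=dict)

    def update(self, new_landmarks):
        """Merge freshly retrieved landmarks into the local world model."""
        self.landmarks.update(new_landmarks)

cloud = MapClient({"sign_17": (105.0, 42.0), "pole_3": (220.0, -8.0)})
world = WorldModel()
world.update(cloud.query_near(position=(100.0, 40.0), radius=50.0))
print(world.landmarks)              # {'sign_17': (105.0, 42.0)}
```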
[Image: media_image6.png, greyscale, 720 x 500]
[Image: media_image7.png, greyscale, 820 x 720]
Zhu is silent, but Prokhorov teaches "...determine a localized position of the vehicle based on the first return signal, the second return signal and a first and the second predetermined position and provide a control signal to a control system for the vehicle for the control system to control the operation of the vehicle based on the position". (see paragraphs 15-19, where a localized position of the vehicle relative to the object can be made, and a range from each position and orientation of the vehicle to the object and geographic features can be made using the return signal of the lidar and radar devices; see paragraph 5 and block 100, where a non-movable object can be detected, a second vehicle can then be detected and tracked as being within an obstructed region, and the vehicle can be controlled based on (1) the immobile object, (2) the obstructed region, and (3) the tracked portion in a localized frame)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of PROKHOROV of TOYOTA MOTOR™ with the disclosure of ZHU since PROKHOROV teaches that a lidar can provide a localized coordinate system. In FIGS. 3-4, a vehicle can create an obstructed area in which there is no line of sight. The vehicle can track, using a camera, the vehicles behind the obstructed area. This provides safe tracking of objects using a lidar and a camera for safer operation.
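By way of illustration only, the following minimal Python sketch (examiner-provided; not code from Prokhorov; all names are hypothetical) shows how a return expressed as a range and bearing in the sensor frame maps into a localized coordinate frame given the vehicle pose.

```python
# Illustrative sketch only (not from Prokhorov); names are hypothetical.
import math

def return_to_local_frame(rng, bearing, vehicle_pose):
    """Map a (range, bearing) LIDAR return, with bearing measured from the
    vehicle axis, into the localized frame; vehicle_pose = (x, y, heading)."""
    vx, vy, heading = vehicle_pose
    ox = vx + rng * math.cos(heading + bearing)   # object x in local frame
    oy = vy + rng * math.sin(heading + bearing)   # object y in local frame
    return (ox, oy)

# A return at 20 m, 30 degrees left of the axis; vehicle at the origin
# heading along +x:
print(return_to_local_frame(20.0, math.radians(30.0), (0.0, 0.0, 0.0)))
```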
[Image: media_image1.png, greyscale, 780 x 1022]
Claim 1 is amended to recite, and Zhu is silent regarding, the following limitation, which NEWMAN teaches: "...wherein the localized position is determined based on a first difference between vectors representing the first transmit signal and the second transmit signal and a second difference between vectors representing the first predetermined position and the second predetermined position;...". (see paragraphs 90-98 and 100-104, where the horizontal lidar scans of different objects can show vectors, and the difference of those vectors shows the change in the position of the objects; this can provide a speed and velocity of the object using two or more lidar scans)
[Image: media_image2.png, greyscale, 568 x 476]
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of NEWMAN with the disclosure of ZHU since NEWMAN teaches that a first LIDAR device and a second LIDAR device can be used to provide a map of the environment 202, 204, and 301. The vectors of the objects can also be shown; see paragraph 104. Linear and rotational velocities of the object can then be determined in block 406, and the localization of the vehicle can be determined in block 412. The vehicle can then be localized in the point cloud using the scans. See claims 1-8.
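By way of illustration only, the following minimal Python sketch (examiner-provided; the examiner's reading of the quoted limitation, not code from Newman; all names and values are hypothetical) shows one way a localized position could be computed from the difference between the two measured object vectors and the difference between the two surveyed positions.

```python
# Illustrative sketch only (the examiner's reading; not code from Newman).
import math

def localize_from_vector_differences(m1, m2, q1, q2):
    """m1, m2: measured vectors to the two objects (vehicle frame).
    q1, q2: surveyed positions of the same objects (map frame).
    Returns (x, y, heading) of the vehicle in the map frame."""
    dmx, dmy = m2[0] - m1[0], m2[1] - m1[1]   # first difference (measured)
    dqx, dqy = q2[0] - q1[0], q2[1] - q1[1]   # second difference (surveyed)
    # The heading is the rotation taking the measured difference onto the
    # surveyed difference:
    heading = math.atan2(dqy, dqx) - math.atan2(dmy, dmx)
    heading = math.atan2(math.sin(heading), math.cos(heading))  # wrap
    c, s = math.cos(heading), math.sin(heading)
    # Vehicle position: surveyed position of object 1 minus the rotated
    # measured vector to object 1.
    x = q1[0] - (c * m1[0] - s * m1[1])
    y = q1[1] - (s * m1[0] + c * m1[1])
    return (x, y, heading)

# Vehicle at (5, 5) heading 90 degrees sees map objects (5, 15) and (0, 5)
# at vehicle-frame vectors (10, 0) and (0, 5):
print(localize_from_vector_differences((10, 0), (0, 5), (5, 15), (0, 5)))
# -> approximately (5.0, 5.0, pi/2)
```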
Zhu discloses "...2. The LIDAR sensor system of claim 1, wherein the one or more processors are configured to determine a velocity of the vehicle based on the first return signal and the second return signal." (see paragraph 45, where the speed of an object, such as a bike moving in a range of 0.1 mph to under 40 mph, can be determined from the LIDAR device)
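By way of illustration only, the following minimal Python sketch (examiner-provided; not code from Zhu; the values are hypothetical) shows a speed estimate from two range returns taken a short time apart.

```python
# Illustrative sketch only (not from Zhu); values are hypothetical.
def radial_speed(range_t0, range_t1, dt):
    """Speed along the beam from two ranges dt seconds apart;
    positive means the range is closing."""
    return (range_t0 - range_t1) / dt

# Ranges of 50.0 m and then 49.2 m, 0.1 s apart -> 8 m/s closing speed:
print(radial_speed(50.0, 49.2, 0.1))
```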
Zhu discloses "...3. The LIDAR sensor system of claim 1, wherein the database comprises a geographic information system (GIS) database." (see paragraphs 52-53)
Zhu discloses "...4. The LIDAR sensor system of claim 1, wherein the one or more processors are configured to retrieve the first predetermined position by cross-referencing a vicinity of the vehicle with locations of surveyed objects in the vicinity." (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle)
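By way of illustration only, the following minimal Python sketch (examiner-provided; not code from Zhu; all names and values are hypothetical) shows cross-referencing detected objects against surveyed objects in the vicinity to retrieve a predetermined position.

```python
# Illustrative sketch only (not from Zhu); names and values hypothetical.
import math

def cross_reference(detections, surveyed, max_dist=2.0):
    """detections: rough (x, y) object estimates in the map frame.
    surveyed: object id -> surveyed (x, y) position.
    Returns detection index -> (id, surveyed position) for close matches."""
    matches = {}
    for i, (dx, dy) in enumerate(detections):
        oid, (sx, sy) = min(
            surveyed.items(),
            key=lambda kv: math.hypot(kv[1][0] - dx, kv[1][1] - dy))
        if math.hypot(sx - dx, sy - dy) <= max_dist:
            matches[i] = (oid, (sx, sy))       # predetermined position found
    return matches

surveyed = {"tree_9": (12.0, 3.0), "signal_2": (40.5, -1.0)}
print(cross_reference([(11.2, 3.4), (80.0, 0.0)], surveyed))
# -> {0: ('tree_9', (12.0, 3.0))}; the far detection has no match
```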
Claim 5 is rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of United States Patent No. US9257048B1 to OFFER, filed in 2010, and further in view of Ditty, Prokhorov, and Newman.
[Image: media_image8.png, greyscale, 548 x 864]
Zhu is silent, but Offer teaches "...5. The LIDAR sensor system of claim 1, wherein the one or more processors are configured to retrieve the first predetermined position as a three-dimensional (3D) vector." (see col. 18, line 45 to col. 19, line 10, where routing uses sensors to provide the positioning of the aircraft, and FIGS. 6-8 and 10, where the trajectory to a new landing site can be provided to the aircraft) (see FIG. 21, where the tower server provides the 4D flight path from the current position to a new landing zone in blocks 2112-2110) (see FIG. 10, where the ground station operator provides the route to the alternative landing site in blocks 1000-1008 and messages m1-m7)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of OFFER of BOEING™ with the disclosure of ZHU since Offer teaches that an emergency can be detected in block 900, m1. A suitable airport can then be provided that is not the ultimate destination but instead is an emergency airport. A ground station operator 1004 can provide an input to select an alternative landing site, and an automatic landing can be provided via messages M4-M7. The process then displays the landing site identified from the group of landing sites on a display system (operation 806), with the process terminating thereafter. At this point, the operator of the aircraft may confirm the selection of the landing site. Further, the operator may direct the aircraft to the landing site or may employ a control system, such as an autopilot, to fly the aircraft to the landing site. This can provide a safe emergency autopilot landing from an input from the ground station via a 4D trajectory.
Claims 6-7 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of Ditty, Prokhorov, and Newman.
Zhu discloses “...6. The LIDAR sensor system of claim 1, wherein:
the one or more scanning optics are configured to output the first transmit signal at a first angle and to output the second transmit signal at a second angle different from the first angle; and (see paragraphs 38-42, where each of the LIDAR beams can be at 60 degrees, 30 degrees, or 50-10 degrees at a range of 200 meters, 100 meters, or less)
the one or more processors are configured to determine the position of the vehicle further based on the first angle and the second angle". (see paragraphs 38-42, where each of the LIDAR beams can be at 60 degrees, 30 degrees, or 50-10 degrees at a range of 200 meters, 100 meters, or less)
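By way of illustration only, the following minimal Python sketch (examiner-provided; not code from Zhu; all names and values are hypothetical) shows how a position could be recovered from the two transmit angles and the two known object positions by intersecting the two bearing lines.

```python
# Illustrative sketch only (not from Zhu): two-bearing triangulation.
import math

def triangulate(p1, theta1, p2, theta2):
    """p1, p2: known object positions; theta1, theta2: absolute bearings
    (radians) from the vehicle to each object. Returns the vehicle position
    as the intersection of the two bearing lines."""
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    det = u1[0] * u2[1] - u1[1] * u2[0]          # cross(u1, u2)
    if abs(det) < 1e-12:
        raise ValueError("parallel bearings; position is ambiguous")
    # Solve t1*u1 - t2*u2 = p1 - p2 for t1 by Cramer's rule:
    bx, by = p1[0] - p2[0], p1[1] - p2[1]
    t1 = (bx * u2[1] - by * u2[0]) / det
    return (p1[0] - t1 * u1[0], p1[1] - t1 * u1[1])

# Object at (10, 10) seen at 45 degrees and object at (-5, 5) seen at
# 135 degrees place the vehicle at the origin:
print(triangulate((10.0, 10.0), math.radians(45.0),
                  (-5.0, 5.0), math.radians(135.0)))
```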
[Image: media_image1.png, greyscale, 780 x 1022]
Claim 6 is amended to recite, and Zhu is silent regarding, the following limitation, which NEWMAN teaches: "...determine a localized position of the vehicle based on the first angle and the second angle...". (see paragraphs 90-98 and 100-104, where the horizontal lidar scans of different objects can show vectors, and the difference of those vectors shows the change in the position of the objects; this can provide a speed and velocity of the object using two or more lidar scans)
[Image: media_image2.png, greyscale, 568 x 476]
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of NEWMAN with the disclosure of ZHU since NEWMAN teaches that a first LIDAR device and a second LIDAR device can be used to provide a map of the environment 202, 204, and 301. The vectors of the objects can also be shown; see paragraph 104. Linear and rotational velocities of the object can then be determined in block 406, and the localization of the vehicle can be determined in block 412. The vehicle can then be localized in the point cloud using the scans. See claims 1-8.
Zhu discloses "...7. The LIDAR sensor system of claim 1, wherein the first object is a stationary object." (see paragraph 32, where the vehicle can detect a stationary tree or a road traffic signal)
Claims 8-10 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of United States Patent Application Pub. No. US20160274589A1 to DROZ, filed in 2016 and assigned to GOOGLE™, and further in view of Ditty, Prokhorov, and Newman.
[Image: media_image9.png, greyscale, 788 x 690]
Droz teaches “...8. The LIDAR sensor system of claim 1, further comprising:
a laser source (see FIG. 1, block 128) configured to generate a beam; and a modulator configured to apply at least one of phase modulation or frequency modulation to the beam to generate the first transmit signal". (see FIG. 5b, where the lidar can determine that there are features that cannot be scanned appropriately and then temporarily increase the pulse rate of the LIDAR device beyond the predetermined pulse rate in blocks 502-524; when the heat is too great, the pulse rate can be reduced once the angular resolution of the zone of interest has been scanned effectively in block 510)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of DROZ of GOOGLE™ with the disclosure of ZHU of GOOGLE™ since DROZ teaches that a LIDAR device, due to dust, rain, snow, or another reason, may not perceive an object correctly. A processor can then provide an increased pulse rate or an increased slew rate for the lidar emissions over that "poor region" to obtain enhanced resolution. After the increased-resolution scan, the LIDAR device can become too hot, and the pulse rate can then be reduced to prevent damaging the LIDAR device. See paragraphs 1-5 and 30-50 and the abstract of Droz.
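By way of illustration only, the following minimal Python sketch (examiner-provided; a paraphrase of the cited teaching, not Droz's code; the rates and the temperature limit are hypothetical) captures the adaptive pulse-rate logic described above.

```python
# Illustrative sketch only (a paraphrase of the cited teaching, not Droz's
# code); rates and the temperature limit are hypothetical.
def next_pulse_rate(base_hz, boost_hz, zone_needs_detail, temperature_c,
                    temp_limit_c=85.0):
    """Pulse rate for the next scan: boost over a zone of interest that
    scanned poorly, but fall back to the base rate when too hot."""
    if temperature_c >= temp_limit_c:
        return base_hz            # cooling down: reduce the pulse rate
    if zone_needs_detail:
        return boost_hz           # temporarily exceed the predetermined rate
    return base_hz

print(next_pulse_rate(50_000, 200_000, zone_needs_detail=True,
                      temperature_c=60.0))   # -> 200000 (boosted)
print(next_pulse_rate(50_000, 200_000, zone_needs_detail=True,
                      temperature_c=90.0))   # -> 50000 (too hot)
```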
Droz teaches "...9. The LIDAR sensor system of claim 1, wherein the one or more processors are configured to determine a three dimensional (3D) point cloud based on the first return signal and the second return signal." (see FIG. 5b, where the lidar can determine that there are features that cannot be scanned appropriately and then temporarily increase the pulse rate of the LIDAR device beyond the predetermined pulse rate in blocks 502-524; when the heat is too great, the pulse rate can be reduced once the angular resolution of the zone of interest has been scanned effectively in block 510)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of DROZ of GOOGLE™ with the disclosure of ZHU of GOOGLE™ since DROZ teaches that a LIDAR device, due to dust, rain, snow, or another reason, may not perceive an object correctly. A processor can then provide an increased pulse rate or an increased slew rate for the lidar emissions over that "poor region" to obtain enhanced resolution. After the increased-resolution scan, the LIDAR device can become too hot, and the pulse rate can then be reduced to prevent damaging the LIDAR device. See paragraphs 1-5 and 30-50 and the abstract of Droz.
Droz teaches “...10. The LIDAR sensor system of claim 1, wherein the one or more processors are configured to:
receive a plurality of return signals comprising the first return signal, the second return signal, and at least one third return signal from reflection of at least one third transmit signal by at least one third object; and
randomly select the first return signal and the second return signal from the plurality of return signals for determination of the position of the vehicle". (see paragraph 42 and FIG. 5b, where the lidar can determine that there are features that cannot be scanned appropriately and then temporarily increase the pulse rate of the LIDAR device beyond the predetermined pulse rate in blocks 502-524; when the heat is too great, the pulse rate can be reduced once the angular resolution of the zone of interest has been scanned effectively in block 510, so the device can select (1) the initial pulse-rate return signal, (2) the second, higher pulse-rate signal that is enhanced, or (3) the cooling-down-phase third pulse-rate signal, where an enhanced signal is desired but the LIDAR is too hot and the rate needs to be reduced; see also paragraphs 160-169)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of DROZ of GOOGLE™ with the disclosure of ZHU of GOOGLE™ since DROZ teaches that a LIDAR device, due to dust, rain, snow, or another reason, may not perceive an object correctly. A processor can then provide an increased pulse rate or an increased slew rate for the lidar emissions over that "poor region" to obtain enhanced resolution. After the increased-resolution scan, the LIDAR device can become too hot, and the pulse rate can then be reduced to prevent damaging the LIDAR device. See paragraphs 1-5 and 30-50 and the abstract of Droz.
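By way of illustration only, the following minimal Python sketch (examiner-provided; not from Droz or Zhu; the returns are hypothetical (range, bearing) tuples) shows the random selection of two return signals from a plurality of returns.

```python
# Illustrative sketch only (not from Droz or Zhu); the returns are
# hypothetical (range, bearing) tuples.
import random

returns = [(50.0, 0.10), (48.7, 0.85), (51.2, -0.40)]
first, second = random.sample(returns, k=2)   # two distinct random returns
print(first, second)                          # feed these to the solver
```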
[Image: media_image1.png, greyscale, 780 x 1022]
Claim 10 is amended to recite, and Zhu is silent regarding, the following limitation, which NEWMAN teaches: "...for determination of the localized position of the vehicle...". (see paragraphs 90-98 and 100-104, where the horizontal lidar scans of different objects can show vectors, and the difference of those vectors shows the change in the position of the objects; this can provide a speed and velocity of the object using two or more lidar scans)
[Image: media_image2.png, greyscale, 568 x 476]
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of NEWMAN with the disclosure of ZHU since NEWMAN teaches that a first LIDAR device and a second LIDAR device can be used to provide a map of the environment 202, 204, and 301. The vectors of the objects can also be shown; see paragraph 104. Linear and rotational velocities of the object can then be determined in block 406, and the localization of the vehicle can be determined in block 412. The vehicle can then be localized in the point cloud using the scans. See claims 1-8.
Claims 11-13 and 18-19 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of Ditty, Prokhorov, and Newman.
[Image: media_image3.png, greyscale, 820 x 598]
In regard to claims 11 and 18, Zhu discloses "...11. An autonomous vehicle control system, comprising:
a LIDAR sensor system coupled with a vehicle, the LIDAR sensor system (see the abstract, claims 1-10, and FIG. 4a, where the lidar has a mirror and a spinning configuration and provides a first lidar beam 421a, a second beam 422a, a third beam 423a, and a fourth beam 430, and see paragraphs 38-41)
comprising one or more processors configured to: (see claims 1-12)
receive a first return signal from reflection of a first transmit signal from the LIDAR sensor system by a first object; (see paragraphs 32 and 38-41, where the return signal of the first through fourth beams can provide a range to the vehicle and the object surfaces)
receive a second return signal from reflection of a second transmit signal from the LIDAR sensor system by a second object; (see paragraphs 32 and 38-41, where the return signal of the first through fourth beams can provide a range to the vehicle and the object surfaces)
retrieve a first predetermined position of the first object and a second predetermined position of the second object; (see paragraphs 32-42, where the range can be 10-200 meters to (1) detect and (2) track movements of pedestrians, bikes, objects, and other vehicles in the road)
[Image: media_image4.png, greyscale, 850 x 1204]
determine a position of the vehicle based on the first return signal, the second return signal, the first predetermined position, and the second predetermined position; and (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle) (see paragraphs 32-42, where the range can be 10-200 meters to (1) detect and (2) track movements of pedestrians, bikes, objects, and other vehicles in the road)
control operation of at least one of a steering system or a braking system of the vehicle based on the position". (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle)
[Image: media_image5.png, greyscale, 914 x 1356]
Zhu is silent, but Ditty teaches "...receive from a database a first predetermined position of the first object and a second predetermined position of the second object". (see FIG. 42, where the cloud includes an HD map, a cloud lane graph, and drive-matching and tiling functions for calibrating the telemetry of the vehicle's camera, lidar, radar, and IMU with the cloud data in block 3022; see paragraphs 322-340, where the vehicle can interface with a supervisor computing system that can provide supervisory control and automatic braking of the vehicle; and see FIG. 12, where the supervisor can detect a sign, parse the sign as flashing lights indicating an ice condition, and then provide those instructions to the autonomous vehicle)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of DITTY with the disclosure of ZHU, with a reasonable expectation of success, since DITTY teaches that an autonomous vehicle can include a map perception unit 3016 that can interface with a cloud for calibration. Cloud mapping 3022 may receive inputs from a number of different contemporaneously and/or previously operating autonomous vehicles and other information sources. The cloud mapping 3022 provides mapping outputs that are localized by localization 3026 based, e.g., upon the particular location of the ego-vehicle, with the localized output used to help generate and/or update the world model 3002. The world model 3002 so developed and maintained in real time is used for autonomous vehicle planning 3004, control 3006, and actuation 3008. Thus, a route from an original location to a destination can be derived around a hazard, and the server can assist the autonomous vehicle by sharing its database and can provide automatic steering or braking to improve safety. See paragraphs 489-500 of Ditty.
Zhu is silent, but Prokhorov teaches "...determine a localized position of the vehicle based on the first return signal, the second return signal and a first and the second predetermined position and provide a control signal to a control system for the vehicle for the control system to control the operation of the vehicle based on the position". (see paragraphs 15-19, where a localized position of the vehicle relative to the object can be made, and a range from each position and orientation of the vehicle to the object and geographic features can be made using the return signal of the lidar and radar devices; see paragraph 5 and block 100, where a non-movable object can be detected, a second vehicle can then be detected and tracked as being within an obstructed region, and the vehicle can be controlled based on (1) the immobile object, (2) the obstructed region, and (3) the tracked portion in a localized frame)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of PROKHOROV of TOYOTA MOTOR™ with the disclosure of ZHU since PROKHOROV teaches that a lidar can provide a localized coordinate system. In FIGS. 3-4, a vehicle can create an obstructed area in which there is no line of sight. The vehicle can track, using a camera, the vehicles behind the obstructed area. This provides safe tracking of objects using a lidar and a camera for safer operation.
[Image: media_image1.png, greyscale, 780 x 1022]
Claims 11 and 18 are amended to recite, and Zhu is silent regarding, the following limitation, which NEWMAN teaches: "...wherein the localized position is determined based on a first difference between vectors representing the first transmit signal and the second transmit signal and a second difference between vectors representing the first predetermined position and the second predetermined position;...". (see paragraphs 90-98 and 100-104, where the horizontal lidar scans of different objects can show vectors, and the difference of those vectors shows the change in the position of the objects; this can provide a speed and velocity of the object using two or more lidar scans)
[Image: media_image2.png, greyscale, 568 x 476]
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of NEWMAN with the disclosure of ZHU since NEWMAN teaches that a first LIDAR device and a second LIDAR device can be used to provide a map of the environment 202, 204, and 301. The vectors of the objects can also be shown; see paragraph 104. Linear and rotational velocities of the object can then be determined in block 406, and the localization of the vehicle can be determined in block 412. The vehicle can then be localized in the point cloud using the scans. See claims 1-8.
In regard to claims 12 and 19, Zhu discloses "...12. The autonomous vehicle control system of claim 11, further comprising a sensor comprising at least one of an inertial navigation system (INS), a global positioning system (GPS) receiver, or a gyroscope, wherein the one or more processors of the LIDAR system are configured to determine the position based on sensor data received from the sensor." (see paragraphs 25-27, where the device can include a laser-based localization procedure, GPS, a camera, an INS sensor, and a gyroscope)
Zhu discloses "...13. The autonomous vehicle control system of claim 11, further comprising a polygon scanner configured to direct the first transmit signal and the second transmit signal into an environment in which the first object and the second object are located." (see paragraph 35, where the laser can include a 150 meter to 200 meter distance, a vertical field of view, and a horizontal field of view of 30 degrees)
The phrase "polygon scanner" is not in the specification as originally filed but is interpreted as "a polygon laser scanner, i.e., a device that uses a polygon mirror and a motor to scan a laser beam horizontally."
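By way of illustration only, the following minimal Python sketch (examiner-provided; all parameters are hypothetical) illustrates the interpretation above: a motor-driven polygon mirror sweeps the beam horizontally, repeating one sweep per facet.

```python
# Illustrative sketch only (hypothetical parameters): a motor-driven
# polygon mirror repeats one horizontal sweep per facet.
def beam_azimuth(motor_angle_deg, facets=6, fov_deg=60.0):
    """Map the motor angle to a beam azimuth that sweeps fov_deg once
    per facet and then snaps back as the next facet rotates in."""
    facet_period = 360.0 / facets                            # degrees per facet
    phase = (motor_angle_deg % facet_period) / facet_period  # 0..1 in sweep
    return -fov_deg / 2.0 + phase * fov_deg

for angle in (0, 15, 30, 45):    # 6 facets -> the sweep repeats every 60 deg
    print(angle, beam_azimuth(angle))
```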
Claims 14 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of U.S. Patent No. 11,402,834 B2 to TEMPLETON et al., filed in 2012, and further in view of Ditty, Prokhorov, and Newman.
In regard to claims 14 and 20, Templeton teaches "...14. The autonomous vehicle control system of claim 11, wherein the LIDAR sensor system is configured to output the first transmit signal at a first azimuth angle and to output the second transmit signal at a second azimuth angle." (see col. 12, lines 1-51 and 52-59; col. 14, lines 60-67; and col. 18, lines 35-46)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of TEMPLETON of GOOGLE™ with the disclosure of ZHU of GOOGLE™ since TEMPLETON teaches that a LIDAR device can be mounted on top of the vehicle and move the beams in the azimuth direction, via an azimuth angle, as well as horizontally and vertically, for a wide scanning zone and increased safety.
Claims 15-16 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of Ditty, Prokhorov, and Newman.
Zhu discloses "...15. The autonomous vehicle control system of claim 11, wherein the one or more processors are configured to control operation of the at least one of the steering system or the braking system to avoid collision with a third object". (see paragraphs 42-58 and FIG. 5b, where the LIDAR beams can determine the vehicle position and the position of the truck and adjust the position of the vehicle in response to the position of the truck, to maintain the vehicle in the lane, and to correct a position of the vehicle) (see paragraphs 32-42, where the range can be 10-200 meters to (1) detect and (2) track movements of pedestrians, bikes, objects, and other vehicles in the road)
Zhu discloses "...16. The autonomous vehicle control system of claim 11, wherein the database comprises a GIS database." (see paragraph 25)
Claim 17 is rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Pub. No. US20120083960A1 to ZHU et al., filed in 2011 and assigned to GOOGLE™, in view of Manninen, A. J., O'Connor, E. J., Vakkari, V., and Petäjä, T.: A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland, Atmos. Meas. Tech., 9, 817-827, https://doi.org/10.5194/amt-9-817-2016, 2016, and further in view of Ditty, Prokhorov, and Newman.
[Image: media_image10.png, greyscale, 496 x 786]
The Manninen publication teaches "...17. The autonomous vehicle control system of claim 11, wherein the one or more processors are configured to determine a Doppler value of the first object based on the first return signal". (see the abstract and section 2, where the Doppler velocity is measured from the lidar device detector, and section 3.2, where the background SNR correction algorithm is applied to the lidar to remove the backscattering value and (1) screen and (2) correct the step changes and shapes, remove the outlier profiles, and provide a correction signal)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of the Manninen publication with the disclosure of ZHU since the publication teaches that a Doppler LIDAR device can include a background-correction algorithm for detecting and compensating the signal of the Doppler LIDAR for increased accuracy. See pages 2-5.
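By way of illustration only, the following minimal Python sketch (examiner-provided; not the Manninen et al. algorithm itself; the 1.5 micrometer wavelength reflects the Halo instrument class, and the background profile is hypothetical) shows the two ideas relied upon: converting a Doppler frequency shift to a radial velocity and subtracting a characterized background from a signal profile.

```python
# Illustrative sketch only (not the Manninen et al. algorithm itself); the
# 1.5e-6 m wavelength reflects the Halo instrument class, and the
# background profile is hypothetical.
def doppler_velocity(freq_shift_hz, wavelength_m=1.5e-6):
    """v = f_D * wavelength / 2; the factor 2 accounts for the round trip."""
    return freq_shift_hz * wavelength_m / 2.0

def subtract_background(profile, background):
    """Remove a previously characterized background from a signal profile."""
    return [s - b for s, b in zip(profile, background)]

print(doppler_velocity(1.0e7))                        # 10 MHz -> 7.5 m/s
print(subtract_background([2.0, 3.1, 2.4], [2.0, 2.0, 2.0]))
```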
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEAN PAUL CASS whose telephone number is (571)270-1934. The examiner can normally be reached Monday to Friday 7 am to 7 pm; Saturday 10 am to 12 noon.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne can be reached on 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEAN PAUL CASS/Primary Examiner, Art Unit 3668