DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-19 are pending in this application.
Claims 1, 3, 8, and 12-19 are presented as currently amended claims.
No claims are newly presented.
No claims are cancelled.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4, 7-11, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 20210132612 A1) in view of Golov (US 20210325898 A1), further in view of Kim et al. (US 20200160735 A1), and further in view of Lee (US 20210134156 A1) (the combination of these references referred to as combination Wang hereinafter). As regards the individual claims:
With respect to claim 1, Wang teaches an autonomous driving system, comprising:
an unmanned aerial vehicle (UAV) in air, wherein the UAV includes: at least one UAV camera configured to collect visual images of ground objects, (Wang: ¶ 024; "embodiments may use images captured by a drone, or other unmanned aerial vehicle (UAV), which can capture overhead images directly, without any transformation or the associated inaccuracies.") . . . a UAV processor configured to convert first raw sensor information to first ground traffic information, wherein the first raw sensor information includes the visual images of the ground objects (Wang: ¶ 038; “device 110 [can] receive information from one or more of the movable object 104, . . . such as image data captured by a payload camera”) and the distance information between the ground objects and the UAV, (Wang: ¶ 066; "aerial vehicle and the ground vehicle may each be configured to map the roadway environment using Simultaneous Localization and Mapping (SLAM) techniques to generate a local map. The SLAM techniques may use data captured by the scanning sensors and/or various other sensors, such as an IMU, a gyroscope, or other suitable sensors. The SLAM generated by the aerial vehicle can be transmitted to the ground vehicle and combined with the SLAM generated by the ground vehicle to extend the local map generated by the ground vehicle. In some embodiments, traffic conditions (e.g., weather conditions, traffic density, etc.)") a UAV communication module configured to transmit the first ground traffic information, wherein the first ground traffic information includes one or more accessible areas from a perspective of the UAV; and (Wang: ¶ 068; "scanning data can be obtained by a plurality of aerial vehicles in communication with the ground vehicle, and the scanning data can be transmitted by the plurality of aerial vehicles to the ground vehicle.") . . . 
a land vehicle communication module configured to receive the first ground traffic information from the UAV, (Wang: ¶ 076; "drone can combine its map with the high precision map received from the car to obtain a high precision map it can use to navigate through its environment (for example to avoid object, generate routes, etc. Embodiments take advantage of the higher quality sensors and computers available in a car or other ground vehicle to improve the maps available to a drone or other UAV.") wherein the land vehicle processor is further configured to release the UAV to a position of following the land vehicle or leading the land vehicle, (Wang: ¶ 055; “control data can be received from client device 110 instructing the aerial vehicle to move to a particular position.”) wherein the land vehicle processor is further configured to generate one or more land vehicle planning results from the first ground traffic information and the second ground traffic information, (Wang: ¶ 068; “scanning data can be obtained by a plurality of aerial vehicles in communication with the ground vehicle, and the scanning data can be transmitted by the plurality of aerial vehicles to the ground vehicle . . . second scanning data includes second mapping data generated based on point cloud data”)
Wang does not explicitly teach: at least one UAV LiDAR sensor configured to determine distance information between the ground objects and the UAV; however, Golov does teach:
at least one UAV LiDAR sensor configured to determine distance information between the ground objects and the UAV; (Golov: ¶ 076; “sensor (e.g., at least one of sensors 126, 132, 230, 238) is a light detection and ranging (LiDAR) sensor, a radar sensor, or a camera”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Golov with the teachings of Wang because the combination amounts to the simple substitution of one known element for another to obtain predictable results (KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 416; MPEP § 2143(I)(B)). In the instant case, Wang contains a system which differs from the claimed limitation by the substitution of a spatial measurement sensor (Wang: ¶ 033; “sensing system 118 can include one or more sensors that may sense the spatial disposition . . . of the movable object”), but Golov shows that radar and LiDAR were known in the art as spatial measurement sensors, one of ordinary skill in the art could have substituted one known element for another, and the results of the substitution would have been predictable. Consequently, the combination would have been obvious to a person of ordinary skill in the art.
Wang does not explicitly teach: . . . a land vehicle processor configured to convert the second raw sensor information to second ground traffic information, wherein the second ground traffic information includes one or more accessible areas from a perspective of the land vehicle, wherein each of the one or more accessible areas, from the perspective of the UAV and the land vehicle, are determined based on traffic rules and motion information of the ground objects, wherein the motion information includes acceleration information or velocity information of the ground objects and one or more moving objects surrounding the land vehicle, wherein the determined one or more accessible areas are dynamically adjusted based on the traffic rules and the motion information of the ground objects, and wherein the ground objects are located on a route between the land vehicle and a destination, . . . however, Kim does teach:
. . . a land vehicle processor configured to convert the second raw sensor information to second ground traffic information, (Kim: ¶ 039; obtaining sensor data from the UAV in operation [which is] indicative of conditions ahead of the emergency vehicle along the calculated route) wherein the second ground traffic information includes one or more accessible areas from a perspective of the land vehicle, wherein each of the one or more accessible areas, from the perspective of the UAV and the land vehicle, are determined based on traffic rules (Kim: ¶ 015; legal speed limits therefor may be used to calculate a travel path) and motion information of the ground objects, wherein the motion information includes acceleration information or velocity information of the ground objects (Kim: ¶ 042; sensor data may indicate the presence of a partial or full traffic obstruction along the calculated route. The traffic obstruction may be, for example, a car accident partially or fully blocking the road, a fallen tree) and one or more moving objects surrounding the land vehicle, (Kim: ¶ 045; traffic conditions such as the number of cars on the roads, the speed in which the cars are traveling on the roads) wherein the determined one or more accessible areas are dynamically adjusted based on the traffic rules and the motion information of the ground objects, (Kim: ¶ 049; recalculating of the route includes determining how the calculated route can be changed to shorten a response time of the emergency vehicle to the location of the emergency by altering the route and changing traffic conditions along the altered route) and wherein the ground objects are located on a route between the land vehicle and a destination, (Kim: ¶ 042; accident partially or fully blocking the road)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kim with the teachings of Wang because doing so would predictably result in “the travel time to the location of the emergency [being] reduced” when encountering traffic (Kim: ¶ 018).
Wang does not explicitly teach: directing the land vehicle to the destination and wherein the land vehicle planning results comprise one or more instructions; however, Lee does teach:
directing the land vehicle to the destination and wherein the land vehicle planning results comprise one or more instructions. (Lee: ¶ 171; autonomous device [vehicle] can generate a route for self-driving on the basis of acquired data. The autonomous device [vehicle] can generate a driving plan for traveling along the generated route.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Wang with the improvement of Lee with a reasonable expectation of success because the use of a known technique to improve similar methods in the same way is obvious (KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 417, 82 USPQ2d at 1396). In the instant case, both Wang's and Lee's base methods are similar traffic flow improvement systems; however, the combined device would result in more complete navigation instructions when compared with Wang's teaching of situational driving instructions (Wang: ¶ 058; "navigation commands can be generated based on the object identified in the scanning data."), which would result in better informing the user.
Regarding claim 4, as detailed above, combination Wang teaches the invention as detailed with respect to claim 3. Wang further teaches:
wherein the UAV processor is further configured to determine UAV planning results (Wang: ¶ 076; "drone can combine its map with the high precision map received from the car to obtain a high precision map it can use to navigate through its environment”)
And Kim further teaches: including a path of the land vehicle to the destination based on the first ground traffic information. (Kim: ¶ 044; "Operation S27 may include recalculating the route between the received present location of the emergency vehicle and the received location of the emergency site using the area map data and the obtained sensor data from the UAV. The recalculated route may be for example, the travel path which would take the least amount of time (e.g., fastest time path) to get the emergency vehicle from its present location to the location of the emergency site. The recalculated route may consider the area map data and the obtained sensor data from the UAV.") (Kim: ¶ 043; "operation S25 the sensor data obtained from the UAV may be indicative of conditions ahead of the emergency vehicle along other routes (e.g., not the route which the emergency vehicle is currently taking).")
Regarding claim 7, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang further teaches:
wherein the second ground traffic information is formatted in a coordinate system with a position of the land vehicle as a coordinate origin. (Wang: ¶ 079; "Using its scanning sensors 132, the ground vehicle can obtain a position of the aerial vehicle so long as it is operating within range of the ground vehicle's sensors. The position may be relative to the ground vehicle or may be an absolute position in the world coordinate system or other coordinate system")
Regarding claim 8, as detailed above, combination Wang teaches the invention as detailed with respect to claim 7. Wang further teaches:
wherein the second ground traffic information further includes: second position information that indicates locations of one or more still objects surrounding the land vehicle (Wang: ¶ 035; "a LiDAR sensor can be configured to collect point cloud data representing a 360-degree view of the ambient environment of the vehicle. Similarly, a high definition imaging sensor can collect image data (e.g., still images and video) of the environment around the ground vehicle. Scanning sensor 132 may be coupled to the ground vehicle 111 at a forward position to capture scanning data of the environment directly in front of the ground vehicle. For example, scanning sensor 132 may collect scanning data related to the roadway environment in which the ground vehicle is operating (e.g., identify roadway objects (such as lane markings, other vehicles, trees and other objects present in the roadway environment), driving conditions (such as weather conditions), traffic information (including information related to nearby vehicles)") in the coordinate system with the position of the land vehicle as the coordinate origin; (Wang: ¶ 079; "Using its scanning sensors 132, the ground vehicle can obtain a position of the aerial vehicle so long as it is operating within range of the ground vehicle's sensors. The position may be relative to the ground vehicle or may be an absolute position in the world coordinate system or other coordinate system") second motion information that indicates velocities of the one or more moving objects surrounding the land vehicle; (Lee: ¶ 148; "object detection device 210 can generate information about objects outside the vehicle 10. 
Information about an object can include at least one of information on presence or absence of the object, positional information of the object, information on a distance between the vehicle 10 and the object, and information on a relative speed of the vehicle 10 with respect to the object.") second predicted trajectories of the one or more moving objects surrounding the land vehicle; (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.") and second status information that indicates statuses of one or more traffic signals; (Lee: ¶ 330; "monitoring information includes object information, traffic facility information, meta information, and the like. The object information may include information of vehicles and pedestrians adjacent to the vehicle 10, and may include information of surrounding buildings and facilities. The traffic facility information may include lane information, traffic light information, traffic facility information,")
Regarding claim 9, as detailed above, combination Wang teaches the invention as detailed with respect to claim 8. Wang further teaches:
wherein the land vehicle processor is further configured to determine the land vehicle planning results including a path of the land vehicle to the destination based on the second ground traffic information. (Wang: ¶ 058; "When roadway objects are identified in the scanning data received from the aerial vehicle, the positions of the objects can be mapped to their corresponding positions in the scanning data captured by the ground vehicle. In some embodiments, navigation commands can be generated based on the object identified in the scanning data. For example, when a lane is detected in the scanning data, a polynomial curve can be calculated that fits the detected lane in the scanning data. The polynomial curve can be used to determine a trajectory for the ground vehicle to follow to stay within the lane. This can be provided to navigation controller 328 which can generate movement commands for the ground vehicle.")
Regarding claim 10, as detailed above, combination Wang teaches the invention as detailed with respect to claim 8. Wang further teaches:
wherein the land vehicle processor is further configured to determine the land vehicle planning results including one or more instructions (Wang: ¶ 044; "the roadway environment 200 can include the ground vehicle driving on a roadway and an aerial vehicle flying above the roadway. As discussed, each vehicle can include one or more scanning sensors to collect scanning data from the roadway environment. One use of the scanning data is to perform various autonomous driving and/or assisted driving functions, such as lane detection.") in response to at least a portion of the second motion information that indicates the velocities or accelerations of the one or more moving objects surrounding the land vehicle. (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.")
Regarding claim 11, as detailed above, combination Wang teaches the invention as detailed with respect to claim 8. Wang further teaches:
wherein the land vehicle processor is further configured to determine the land vehicle planning results including one or more parameters of the land vehicle based on the second ground traffic information, (Wang: ¶ 044; "the roadway environment 200 can include the ground vehicle driving on a roadway and an aerial vehicle flying above the roadway. As discussed, each vehicle can include one or more scanning sensors to collect scanning data from the roadway environment. One use of the scanning data is to perform various autonomous driving and/or assisted driving functions, such as lane detection.") and wherein the one or more parameters include at least one of velocity, acceleration, and direction. (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.")
Regarding claim 16, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang further teaches:
wherein the land vehicle processor is further configured to combine the first ground traffic information and the second ground traffic information to generate a world model and generate coordinated planning results based on the world model. (Wang: ¶ 065; "the scanning data may include point cloud data. The point cloud data may be a three-dimensional representation of the target environment (e.g., the roadway environment). This 3D representation can be divided into voxels (e.g., 3D pixels). Each point in the point cloud of the mapping data is associated with a position in the scanner reference frame that is determined relative to the scanning sensor. The positioning data of the movable object, produced by the positioning sensor, may then be used to convert this position in the scanner reference frame to the output reference frame in a world coordinate system.") (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Regarding claim 17, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang further teaches:
wherein the land vehicle processor is further configured to: convert the first ground traffic information from a first coordinate system with a position of the UAV as a coordinate origin to a second coordinate system with a position of the land vehicle as the coordinate origin, (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.") (Wang: ¶ 079; "Using its scanning sensors 132, the ground vehicle can obtain a position of the aerial vehicle so long as it is operating within range of the ground vehicle's sensors. 
The position may be relative to the ground vehicle or may be an absolute position in the world coordinate system or other coordinate system") convert first coordinates of one or more still objects, one or more moving objects, (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.") one or more traffic signals, and the one or more accessible areas, and predicted trajectories of the one or more moving objects identified in the first coordinate system to second coordinates in the second coordinate system; and (Lee: ¶ 330; "monitoring information includes object information, traffic facility information, meta information, and the like. The object information may include information of vehicles and pedestrians adjacent to the vehicle 10, and may include information of surrounding buildings and facilities. 
The traffic facility information may include lane information, traffic light information, traffic facility information,") (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.") determine third coordinates of the one or more still objects, the one or more moving objects, the one or more traffic signals, and the one or more accessible areas in a world model based on the second coordinates and the second ground traffic information. (Wang: ¶ 065; "the scanning data may include point cloud data. The point cloud data may be a three-dimensional representation of the target environment (e.g., the roadway environment). This 3D representation can be divided into voxels (e.g., 3D pixels). Each point in the point cloud of the mapping data is associated with a position in the scanner reference frame that is determined relative to the scanning sensor. The positioning data of the movable object, produced by the positioning sensor, may then be used to convert this position in the scanner reference frame to the output reference frame in a world coordinate system.") (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Regarding claim 18, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang further teaches:
wherein the land vehicle processor is further configured to: convert first semantic segments of one or more still objects, one or more moving objects, one or more traffic signals, and the one or more accessible areas identified in a first coordinate system to second semantic segment in a second coordinate system; and (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.") determine third semantic segments of the one or more still objects, the one or more moving objects, (Wang: ¶ 021; "a self-driving vehicle can analyze its surrounding environment based on data gathered by one or more sensors mounted on the vehicle, including, e.g., visual sensors, LiDAR sensors, millimeter wave radar sensors, ultrasound sensors, etc. The sensor data can be analyzed using image processing tools, machine learning techniques, etc. to determine depth information and semantic information, to assist the vehicle in identifying surrounding people and objects") the one or more traffic signals, and the one or more accessible areas and predicted trajectories of the one or more moving objects (Lee: ¶ 330; "monitoring information includes object information, traffic facility information, meta information, and the like. The object information may include information of vehicles and pedestrians adjacent to the vehicle 10, and may include information of surrounding buildings and facilities. 
The traffic facility information may include lane information, traffic light information, traffic facility information,") (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.") identified in a world model based on the second semantic segment and the second ground traffic information. (Wang: ¶ 065; "the scanning data may include point cloud data. The point cloud data may be a three-dimensional representation of the target environment (e.g., the roadway environment). This 3D representation can be divided into voxels (e.g., 3D pixels). Each point in the point cloud of the mapping data is associated with a position in the scanner reference frame that is determined relative to the scanning sensor. The positioning data of the movable object, produced by the positioning sensor, may then be used to convert this position in the scanner reference frame to the output reference frame in a world coordinate system.")
Regarding claim 19, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang further teaches:
wherein the land vehicle processor is further configured to: convert first point clouds of one or more still objects, one or more moving objects, one or more traffic signals, and the one or more accessible areas identified in a first coordinate system to second point clouds in a second coordinate system; and determine third point clouds of the one or more still objects, the one or more moving objects, the one or more traffic signals, and the one or more accessible areas (Wang: ¶ 035; "a LiDAR sensor can be configured to collect point cloud data representing a 360-degree view of the ambient environment of the vehicle . . . sensor 132 may collect scanning data related to the roadway environment in which the ground vehicle is operating (e.g., identify roadway objects (such as lane markings, other vehicles, trees and other objects present in the roadway environment), driving conditions (such as weather conditions), traffic information (including information related to nearby vehicles),") (Wang: ¶ 065; "the point cloud data obtained by the aerial vehicle can be used to augment the point cloud data obtained by the ground vehicle") (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.") (Wang: ¶ 083; "real-time maps may be merged by the ground vehicle, by identifying features in the image-based real-time map to features in the point cloud-based real-time map. . . . 
The ground vehicle can convert the coordinate system of the image-based real-time map to match the coordinate system of the point cloud data collected by the ground vehicle.") and predicted trajectories of the one or more moving objects identified (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.") in a world model based on the second point clouds and the second ground traffic information. (Wang: ¶ 065; "the scanning data may include point cloud data. The point cloud data may be a three-dimensional representation of the target environment (e.g., the roadway environment). This 3D representation can be divided into voxels (e.g., 3D pixels). Each point in the point cloud of the mapping data is associated with a position in the scanner reference frame that is determined relative to the scanning sensor. The positioning data of the movable object, produced by the positioning sensor, may then be used to convert this position in the scanner reference frame to the output reference frame in a world coordinate system.") (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Claims 2-3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Golov and Lee as applied to claims 1 and 3, respectively, above, and further in view of Radetzki (US 20210016735 A1) (the combination of these references referred to as combination Wang hereinafter).
Regarding claim 2, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang does not explicitly teach:
wherein the first ground traffic information is formatted in a coordinate system with a position of the UAV as a coordinate origin; however, Radetzki does teach:
wherein the first ground traffic information is formatted in a coordinate system with a position of the UAV as a coordinate origin. (Radetzki: ¶ 079; "a position of an object (for example of the object) is an absolute position or a relative position (for example relative to the unmanned vehicle). Such a position can be represented, for example, in the form of a position specification, which refers, for example, to an arbitrarily specified absolute or relative coordinate system. A position of an object relative to the unmanned vehicle can be determined, for example, based on a sequence of multiple images captured by an optical sensor (for example a camera) of the unmanned vehicle.").
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Wang with the teachings of Radetzki with a reasonable expectation of success because the use of a known technique to improve similar methods in the same way is obvious (KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 417, 82 USPQ2d at 1396). In the instant case, both Wang's and Radetzki's base methods are similar vehicle control systems based on object recognition; however, the combined device would be improved by considering a coordinate system originating at the UAV's position because doing so would improve processing speed.
Regarding claim 3, as detailed above, combination Wang teaches the invention as detailed with respect to claim 2. Wang further teaches:
wherein the first ground traffic information includes at least one of: a position of the land vehicle in the coordinate system; (Wang: ¶ 083; "ground vehicle can convert the coordinate system of the image-based real-time map to match the coordinate system of the point cloud data collected by the ground vehicle. The ground vehicle can then return a combined map of the matched overlapping portion to the aerial vehicle to be used for navigation.") (Wang: ¶ 092; "The first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle. In some embodiments, receiving, by an aerial vehicle, a first real-time map from a ground vehicle, wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.") first position information that indicates locations of one or more still objects on the ground in the coordinate system with the position of the UAV as the coordinate origin; (Wang: ¶ 049; "reference object 202 can be identified in the vehicle scanning data. The reference object can be associated with a coordinate in the scanning data (e.g., an image coordinate system based on pixels). For example, the example front-view shown in FIG. 2B is a projection of the roadway environment 200 onto a two-dimensional image plane. Using the intrinsic parameters (e.g., focal length, lens parameters, etc.) and extrinsic parameters (e.g., position, orientation, etc.) 
of the scanning sensor of the ground vehicle, a projection matrix can be used to convert from the image coordinate system of the two-dimensional image plane to the world coordinate system of the three-dimensional roadway environment") (Wang: ¶ 079; "Using its scanning sensors 132, the ground vehicle can obtain a position of the aerial vehicle so long as it is operating within range of the ground vehicle's sensors. The position may be relative to the ground vehicle or may be an absolute position in the world coordinate system or other coordinate system") (Wang: Fig. 2B; [showing tree as a still object]) first motion information that indicates velocities of one or more moving objects on the ground; (Lee: ¶ 148; "object detection device 210 can generate information about objects outside the vehicle 10. Information about an object can include at least one of information on presence or absence of the object, positional information of the object, information on a distance between the vehicle 10 and the object, and information on a relative speed of the vehicle 10 with respect to the object.") first predicted trajectories of the one or more moving objects on the ground; (Lee: ¶ 010; "The predicting the dangerous situation may include predicting a moving line of an adjacent vehicle and a pedestrian corresponding to the object, and predicting a collision risk between the pedestrian and the adjacent vehicle based on the predicted moving line of the pedestrian and the predicted moving line the adjacent vehicle.") first status information that indicates statuses of one or more traffic signals (Lee: ¶ 330; "monitoring information includes object information, traffic facility information, meta information, and the like. The object information may include information of vehicles and pedestrians adjacent to the vehicle 10, and may include information of surrounding buildings and facilities.
The traffic facility information may include lane information, traffic light information, traffic facility information,").
Regarding claim 5, as detailed above, combination Wang teaches the invention as detailed with respect to claim 3. Wang further teaches:
wherein the UAV processor is further configured to determine UAV planning results including one or more instructions in response to at least a portion of the first motion information that indicates the velocities or accelerations of the one or more moving objects. (Wang: ¶ 076; "drone can also generate a map based on its onboard sensors as it flies nearby. However, this is a lower precision map, due to the types of sensors carried by the drone, and the computing resources available to the drone. However, the drone can combine its map with the high precision map received from the car to obtain a high precision map it can use to navigate through its environment (for example to avoid objects, generate routes, etc.). Embodiments take advantage of the higher quality sensors and computers available in a car or other ground vehicle to improve the maps available to a drone or other UAV.")
Regarding claim 6, as detailed above, combination Wang teaches the invention as detailed with respect to claim 3. Wang further teaches:
wherein the UAV processor is further configured to determine UAV planning results including one or more parameters of the land vehicle based on the first ground traffic information, and wherein the one or more parameters include at least one of velocity, acceleration, and direction. (Wang: ¶ 076; "drone can also generate a map based on its onboard sensors as it flies nearby. However, this is a lower precision map, due to the types of sensors carried by the drone, and the computing resources available to the drone. However, the drone can combine its map with the high precision map received from the car to obtain a high precision map it can use to navigate through its environment (for example to avoid objects, generate routes, etc.). Embodiments take advantage of the higher quality sensors and computers available in a car or other ground vehicle to improve the maps available to a drone or other UAV.")
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over combination Wang as applied to claim 1 above, and further in view of Guney et al. (US 20230230471 A1).
Regarding claim 12, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang does not explicitly teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output; however, Guney does teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output (Guney: ¶ 025; "learning-based module 126 uses this data, and applies learning-based algorithms to predict the changes in the traffic in a manner that detects whether there is a presence of traffic congestion at some portion of the roadway. The learning-based module 126 can be implemented in accordance with one of several known learning-based techniques, such as machine-learning (ML), artificial intelligence (AI), neural network, deep learning, and the like.") (Guney: ¶ 004; "cooperative traffic congestion detection methods and systems are implemented that enhance the accuracy of detecting traffic congestion for enhanced routing and maneuvering vehicles along a travel route. In an embodiment, a vehicle is configured to receive data from an ad-hoc network of a plurality of vehicles that are communicatively connected (and proximately located). A subset of the plurality of vehicles can be sensor-rich vehicles that are equipped with ranging sensors (e.g., cameras, LIDAR, radar, ultrasonic sensors), which enables real-time detection of the multiple traffic parameters, such as the presence of other vehicles, vehicle speed, and vehicle movement, traffic, and the like, within the vicinity along the route.") (Guney: ¶ 003; "systems can be crucial in solving such problems, and can allow for drivers and/or vehicles to make the right adjustments to make congestion easy to manage and reduce injuries.")
And Wang further teaches: based on UAV planning results and land vehicle planning results. (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Wang with the improvement of Guney with a reasonable expectation of success because the use of a known technique to improve similar methods in the same way is obvious (KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 417, 82 USPQ2d at 1396). In the instant case, both Wang's and Guney's base methods are similar traffic flow improvement systems; however, the combined device would result in more sophisticated traffic routing due to the use of artificial intelligence.
Regarding claim 13, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang does not explicitly teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output based on the first ground traffic information, the second ground traffic information; however, Guney does teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output based on the first ground traffic information, the second ground traffic information, (Guney: ¶ 025; "learning-based module 126 uses this data, and applies learning-based algorithms to predict the changes in the traffic in a manner that detects whether there is a presence of traffic congestion at some portion of the roadway. The learning-based module 126 can be implemented in accordance with one of several known learning-based techniques, such as machine-learning (ML), artificial intelligence (AI), neural network, deep learning, and the like.") (Guney: ¶ 004; "cooperative traffic congestion detection methods and systems are implemented that enhance the accuracy of detecting traffic congestion for enhanced routing and maneuvering vehicles along a travel route. In an embodiment, a vehicle is configured to receive data from an ad-hoc network of a plurality of vehicles that are communicatively connected (and proximately located). A subset of the plurality of vehicles can be sensor-rich vehicles that are equipped with ranging sensors (e.g., cameras, LIDAR, radar, ultrasonic sensors), which enables real-time detection of the multiple traffic parameters, such as the presence of other vehicles, vehicle speed, and vehicle movement, traffic, and the like, within the vicinity along the route.") (Guney: ¶ 003; "systems can be crucial in solving such problems, and can allow for drivers and/or vehicles to make the right adjustments to make congestion easy to manage and reduce injuries.")
And Wang further teaches: UAV planning results, and land vehicle planning results. (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Wang with the improvement of Guney with a reasonable expectation of success because the use of a known technique to improve similar methods in the same way is obvious (KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 417, 82 USPQ2d at 1396). In the instant case, both Wang's and Guney's base methods are similar traffic flow improvement systems; however, the combined device would result in more sophisticated traffic routing due to the use of artificial intelligence.
Regarding claim 14, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang does not explicitly teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output; however, Guney does teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output (Guney: ¶ 025; "learning-based module 126 uses this data, and applies learning-based algorithms to predict the changes in the traffic in a manner that detects whether there is a presence of traffic congestion at some portion of the roadway. The learning-based module 126 can be implemented in accordance with one of several known learning-based techniques, such as machine-learning (ML), artificial intelligence (AI), neural network, deep learning, and the like.") (Guney: ¶ 004; "cooperative traffic congestion detection methods and systems are implemented that enhance the accuracy of detecting traffic congestion for enhanced routing and maneuvering vehicles along a travel route. In an embodiment, a vehicle is configured to receive data from an ad-hoc network of a plurality of vehicles that are communicatively connected (and proximately located). A subset of the plurality of vehicles can be sensor-rich vehicles that are equipped with ranging sensors (e.g., cameras, LIDAR, radar, ultrasonic sensors), which enables real-time detection of the multiple traffic parameters, such as the presence of other vehicles, vehicle speed, and vehicle movement, traffic, and the like, within the vicinity along the route.") (Guney: ¶ 003; "systems can be crucial in solving such problems, and can allow for drivers and/or vehicles to make the right adjustments to make congestion easy to manage and reduce injuries.")
And Wang further teaches: based on the first ground traffic information and the second ground traffic information. (Wang: ¶ 092; "wherein the first real-time map is based on first scanning data collected using a first scanning sensor coupled to the ground vehicle, can include transmitting the second real-time map to the ground vehicle, the ground vehicle configured to convert coordinates in the second real-time map to a coordinate system to match the first real-time map, determine an overlapping portion of the first real-time map and the second real-time map in the coordinate system, and transmit with the overlapping portion to the aerial vehicle.")
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Wang with the improvement of Guney with a reasonable expectation of success because the use of a known technique to improve similar methods in the same way is obvious (KSR Int'l Co. v. Teleflex Inc., 550 U.S. at 417, 82 USPQ2d at 1396). In the instant case, both Wang's and Guney's base methods are similar traffic flow improvement systems; however, the combined device would result in more sophisticated traffic routing due to the use of artificial intelligence.
Regarding claim 15, as detailed above, combination Wang teaches the invention as detailed with respect to claim 1. Wang does not explicitly teach:
wherein the land vehicle processor is further configured to execute a neural network configured to generate coordinated planning results as output based on the first