Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim(s) 1-7, 9-12, 14-17, and 19-23 are pending for examination.
This Action is made FINAL.
Response to Arguments
With regard to claim(s) 1-3, 12, 15, and 21-23, previously rejected under 35 U.S.C. 102, and claim(s) 5-6, 14, and 17, previously rejected under 35 U.S.C. 103, Applicant's arguments have been fully considered but are deemed moot in view of the new grounds of rejection necessitated by Applicant's amendment.
Applicant presents no new arguments with regard to the dependent claims. Applicant argues only that the pending dependent claims are allowable by virtue of their dependency on independent claims that Applicant believes are allowable. However, as previously stated, the Examiner has not found Applicant's arguments directed toward the independent claims persuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 12, 15, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. (US 9612123 B1, hereinafter known as Levinson) in view of Lee (US 20180217233 A1) and Lee et al. (US 20180268220 A1, hereinafter known as Lee2).
Levinson was cited in a previous Office action.
Regarding claim 1, Levinson teaches A computer-implemented method for augmenting capabilities of an Automated Driving System (ADS) of a vehicle, the method comprising:
processing, by means of using a local perception module of the ADS, sensor data obtained from one or more sensors of the vehicle in order to generate a local world-view of the ADS comprising a first representation of real-time scene understanding, wherein the sensor data comprises information about a surrounding environment of the vehicle;
{Column 45 “Diagram 3900 depicts an autonomous vehicle 3930, which includes autonomous vehicle controller 3947, a local map generator 3940, and a reference data repository 3905. Diagram 3900 also depicts an autonomous vehicle service platform 3901 including a mapping engine 3954 and a teleoperator computing device 3904. Reference data repository 3905 includes a map store 3905a configured to store three-dimensional map data 3943 and an route data store 3905b, which may be a data repository for storing route data (e.g., with or without an indication that a portion of a route data, or road network data, is associated with changed road network data or updated road network data).
Local map generator 3940 may be configured to receive multiple amounts and types of sensor data, such as sensor data from sensor types 3902a, 3902b, and 3902c. According to various examples, local map generator 3940 may be configured to generate map data (e.g., three-dimensional map data) locally in real-time (or nearly in real-time) based on sensor data from sensor types 3902a, 3902b, and 3902c (e.g., from groups of Lidar sensors, groups of cameras, groups of radars, etc.). Local map generator 3940 may implement logic configured to perform simultaneous localization and mapping (“SLAM”) or any suitable mapping technique. In at least some examples, local map generator 3940 may implement “online” map generation techniques in which one or more portions of raw sensor data from sensor types 3902a to 3902c may be received in real-time (or nearly real-time) to generate map data (or identify changes thereto) with which to navigate autonomous vehicle 3930. Local map generator 3940 may also implement a distance transform, such as signed distance function (“SDF”), to determine surfaces external to an autonomous vehicle. In one example, a truncated sign distance function (“TSDF”), or equivalent, may be implemented to identify one or more points on a surface relative to a reference point (e.g., one or more distances to points on a surface of an external object), whereby the TSDF function may be used to fuse sensor data and surface data to form three-dimensional local map data 3941.”
}
transmitting sensor data comprising information about the surrounding environment of the vehicle to a remote system;
{Column 37 “FIG. 36 is a diagram depicting a mapping engine configured to generate mapping data adaptively for autonomous vehicles responsive to changes in physical environments, according to some examples. Diagram 3600 depicts a mapping engine 3654 disposed in an autonomous vehicle service platform 3601 communicatively coupled via a communication layer (not shown) to one or more autonomous vehicles 3630. Mapping engine 3654 is configured to generate map data, and to modify map data adaptively responsive to changes in physical environments in which autonomous vehicle 3630 transits. In the example shown, mapping engine 3654 may generate mapping data based on sensor data received from autonomous vehicle 3630, which is depicted as having any number of sensors or sensor devices 3604a, 3604b, and 3604c, of a sensor type 3602a, a sensor type 3602b, and a sensor type 3602c, respectively. Autonomous vehicle 3630 may include any number of other sensors or sensor devices 3604n having any other sensor types 3602n. Sensors 3604a, 3604b, 3604c, and 3604n respectively generate sensor data 3607a, 3607b, 3607c, and 3607n, one or more of which may be received into mapping engine 3654 for generating map data 3659 (e.g., 2D, 3D, and/or 4D map data). Map data 3659 may be transmitted to autonomous vehicle 3630 for storage in map repository 3605a and for use to facilitate localization as well as other functionalities. In particular, autonomous vehicle 3630 may include a localizer (not shown) that uses map data in map repository 3605a to determine a location and/or local pose of the autonomous vehicle at any time, including during transit.”
}
receiving off-board processed data from the remote system, the off-board processed data being indicative of a supplementary world-view of the ADS, wherein the supplementary world-view comprises a second representation of real-time scene understanding including at least one characteristic of the surrounding environment determined by the remote system,
{Column 37 “FIG. 36 is a diagram depicting a mapping engine configured to generate mapping data adaptively for autonomous vehicles responsive to changes in physical environments, according to some examples. Diagram 3600 depicts a mapping engine 3654 disposed in an autonomous vehicle service platform 3601 communicatively coupled via a communication layer (not shown) to one or more autonomous vehicles 3630. Mapping engine 3654 is configured to generate map data, and to modify map data adaptively responsive to changes in physical environments in which autonomous vehicle 3630 transits. In the example shown, mapping engine 3654 may generate mapping data based on sensor data received from autonomous vehicle 3630, which is depicted as having any number of sensors or sensor devices 3604a, 3604b, and 3604c, of a sensor type 3602a, a sensor type 3602b, and a sensor type 3602c, respectively. Autonomous vehicle 3630 may include any number of other sensors or sensor devices 3604n having any other sensor types 3602n. Sensors 3604a, 3604b, 3604c, and 3604n respectively generate sensor data 3607a, 3607b, 3607c, and 3607n, one or more of which may be received into mapping engine 3654 for generating map data 3659 (e.g., 2D, 3D, and/or 4D map data). Map data 3659 may be transmitted to autonomous vehicle 3630 for storage in map repository 3605a and for use to facilitate localization as well as other functionalities. In particular, autonomous vehicle 3630 may include a localizer (not shown) that uses map data in map repository 3605a to determine a location and/or local pose of the autonomous vehicle at any time, including during transit.”
Column 38 “In some cases, map data 3659 generated by mapping engine 3654 may be used in combination with locally-generated map data (not shown), as generated by a local map generator (not shown) in autonomous vehicle 3630. For example, an autonomous vehicle controller (not shown) may detect that one or more portions of map data in map repository 3605a varies from one or more portions of locally-generated map data. Logic in the autonomous vehicle controller can analyze the differences in map data (e.g., variation data) to identify a change in the physical environment (e.g., the addition, removal, or change in a static object). In a number of examples, the term “variation data” may refer to the differences between remotely-generated and locally-generated map data. Based on the changed portion of an environment, the autonomous vehicle controller may implement varying proportional amounts of map data in map repository 3605a and locally-generated map data to optimize localization. For example, an autonomous vehicle controller may generate hybrid map data composed of both remotely-generated map data and locally-generated map data to optimize the determination of the location or local pose of autonomous vehicle 3630. Further, an autonomous vehicle controller, upon detecting variation data, may cause transmission (at various bandwidths or data rates) of varying amounts of sensor-based data or other data to autonomous vehicle service platform 3601. For example, autonomous vehicle service platform 3601 may receive different types of data at different data rates based on, for instance, the criticality of receiving guidance from a teleoperator. 
As another example, subsets of sensor data 3607a, 3607b, 3607c, and 3607n may be transmitted (e.g., at appropriate data rates) to, for example, modify map data to form various degrees of updated map data in real-time (or near real-time), and to further perform one or more of the following: (1) evaluate and characterize differences in map data, (2) propagate updated portions of map data to other autonomous vehicles in the fleet, (3) generate a notification responsive to detecting map data differences to a tele-operator computing device, (4) generate a depiction of the environment (and the changed portion thereof), as sensed by various sensor devices 3604a, 3604b, 3604c, and 3604n, to display at any sufficient resolution in a user interface of a teleoperator computing device. Note that the above-described examples are not limiting, and any other map-related functionality for managing a fleet of autonomous vehicles may be implemented using mapping engine 3654 in view of detected changes in physical environments relative to map data.”
}
forming an augmented world-view of the ADS based on the generated local world-view and the supplementary world-view such that the augmented world-view incorporates the at least one characteristic determined by the remote system to provide an enhanced representation of real-time scene understanding;
{Column 38 “In some cases, map data 3659 generated by mapping engine 3654 may be used in combination with locally-generated map data (not shown), as generated by a local map generator (not shown) in autonomous vehicle 3630. For example, an autonomous vehicle controller (not shown) may detect that one or more portions of map data in map repository 3605a varies from one or more portions of locally-generated map data. Logic in the autonomous vehicle controller can analyze the differences in map data (e.g., variation data) to identify a change in the physical environment (e.g., the addition, removal, or change in a static object). In a number of examples, the term “variation data” may refer to the differences between remotely-generated and locally-generated map data. Based on the changed portion of an environment, the autonomous vehicle controller may implement varying proportional amounts of map data in map repository 3605a and locally-generated map data to optimize localization. For example, an autonomous vehicle controller may generate hybrid map data composed of both remotely-generated map data and locally-generated map data to optimize the determination of the location or local pose of autonomous vehicle 3630. Further, an autonomous vehicle controller, upon detecting variation data, may cause transmission (at various bandwidths or data rates) of varying amounts of sensor-based data or other data to autonomous vehicle service platform 3601. For example, autonomous vehicle service platform 3601 may receive different types of data at different data rates based on, for instance, the criticality of receiving guidance from a teleoperator. 
As another example, subsets of sensor data 3607a, 3607b, 3607c, and 3607n may be transmitted (e.g., at appropriate data rates) to, for example, modify map data to form various degrees of updated map data in real-time (or near real-time), and to further perform one or more of the following: (1) evaluate and characterize differences in map data, (2) propagate updated portions of map data to other autonomous vehicles in the fleet, (3) generate a notification responsive to detecting map data differences to a tele-operator computing device, (4) generate a depiction of the environment (and the changed portion thereof), as sensed by various sensor devices 3604a, 3604b, 3604c, and 3604n, to display at any sufficient resolution in a user interface of a teleoperator computing device. Note that the above-described examples are not limiting, and any other map-related functionality for managing a fleet of autonomous vehicles may be implemented using mapping engine 3654 in view of detected changes in physical environments relative to map data.”
}
generating, at an output, a signal indicative of the augmented world-view of the ADS that enables the ADS to make real-time control decisions for the vehicle.
{Column 21 “In another state of operation (e.g., a normative state), static map data 1301, current and predicted object state data 1303, local pose data 1305, and plan data 1307 (e.g., global plan data) are received into trajectory calculator 1325, which is configured to calculate (e.g., iteratively) trajectories to determine an optimal one or more paths. Next, at least one path is selected and is transmitted as selected path data 1311. According to some embodiments, trajectory calculator 1325 is configured to implement re-planning of trajectories as an example. Nominal driving trajectory generator 1327 is configured to generate trajectories in a refined approach, such as by generating trajectories based on receding horizon control techniques. Nominal driving trajectory generator 1327 subsequently may transmit nominal driving trajectory path data 1372 to, for example, a trajectory tracker or a vehicle controller to implement physical changes in steering, acceleration, and other components.”
Column 7 “In view of the foregoing, the structures and/or functionalities of autonomous vehicle 130 and/or autonomous vehicle controller 147, as well as their components, can perform real-time (or near real-time) trajectory calculations through autonomous-related operations, such as localization and perception, to enable autonomous vehicles 109 to self-drive.”
}
Levinson does not teach the at least one characteristic comprising at least one of: a classification type for an object not determinable by the local perception module, or a confidence level for an environmental characteristic higher than achievable by the local perception module;
However, Lee teaches a second representation of real-time scene understanding including at least one characteristic of the surrounding environment determined by the remote system, the at least one characteristic comprising a classification type for an object, where classification is improved, and an improved confidence level for an environmental characteristic.
{Para [0024] “Accordingly, in one embodiment, the system 170 applies machine learning/deep learning algorithm(s) to the partial data to produce an observation model that embodies the relationships between the partial data and the objects of the observational data. In this way, the observation model is used to improve identification/recognition and tracking of objects when obscured data is available. As an additional note, while the system 170 is illustrated as being fully embodied/implemented within the vehicle 100, in one embodiment, one or more functional aspects of the system 170 are implemented within one or more servers that are remote from the vehicle 100. For example, the discussed functionality of the system 170, in one embodiment, is implemented as a cloud-based service such as a Software as a Service (SaaS). Moreover, the system 170 may be distributed among a plurality of remote servers that perform processing to achieve the noted functions. The noted functions and methods will become more apparent with a further discussion of the figures.”
Para [0045] “At 430, the estimating module 230 determines whether the reconstructed observed object satisfies a threshold. In one embodiment, the estimating module 230 assesses the reconstructed observed object to determine how well the reconstructed object conforms with particular criteria. For example, the criteria can indicate object classes (e.g., vehicle, person, etc.) and attributes of those different classes. Thus, the estimating module 230 can undertake an analysis at 430 to determine how closely the reconstructed object conforms to the known classes, or, in one embodiment, a particular object within a class. The estimating module 230, in one embodiment, produces a score that is, for example, a probability that the reconstructed object conforms to the known class and/or particular object within the class. Accordingly, at 430, the estimating module 230 determines whether the provided probabilities/score satisfy a threshold (e.g., within a specified confidence interval such as 85% or greater). When the estimating module 230 determines that the score satisfies the threshold, the reconstructed object is considered to be, for example, highly complete and thus is a close approximation of a whole body of the observed object. Consequently, the estimating module 230 can then proceed to block 450 where a determination is provided as output (e.g., the reconstructed model is provided as a three-dimensional point cloud).”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Levinson to incorporate the teachings of Lee, using a remote server to perform object classification and improve confidence, because doing so reduces the processing requirements of the vehicle perception system, potentially reducing the cost and complexity of the vehicle.
Levinson in view of Lee does not teach a classification type for an object not determinable by the local perception module, or a confidence level for an environmental characteristic higher than achievable by the local perception module;
However, Lee2 teaches a classification type for an object not determinable by the local perception module, or a confidence level for an environmental characteristic higher than achievable by the local perception module;
{Para [0115] “In some implementations, the local processing and data module 1224 and/or the remote processing module 1228 are programmed to perform embodiments of RoomNet to determine room layout. For example, the local processing and data module 1224 and/or the remote processing module 1228 can be programmed to perform embodiments of the process 1100 described with reference to FIG. 11. The local processing and data module 1224 and/or the remote processing module 1228 can be programmed to perform the room layout estimation method 1100 disclosed herein. The image capture device can capture video for a particular application (e.g., augmented reality (AR) or mixed reality (MR), human-computer interaction (HCl), autonomous vehicles, drones, or robotics in general). The video (or one or more frames from the video) can be analyzed using an embodiment of the computational RoomNet architecture by one or both of the processing modules 1224, 1228. In some cases, off-loading at least some of the RoomNet analysis to a remote processing module (e.g., in the “cloud”) may improve efficiency or speed of the computations. The parameters of the RoomNet neural network (e.g., weights, bias terms, subsampling factors for pooling layers, number and size of kernels in different layers, number of feature maps, room layout types, keypoint heat maps, etc.) can be stored in data modules 1224 and/or 1232.”
It is well known in the art that increasing processing speed increases the amount of data that can be processed, and it is also well known in the art that more data input for processing (especially with machine learning models) generally results in increased accuracy.
Lee already teaches that the remote server is performing classification.
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Levinson in view of Lee to incorporate the teachings of Lee2, off-loading classification processing to a remote server, because the increased processing speed and greater amount of data available for processing generally result in increased classification accuracy and confidence.
Regarding Claim 2, Levinson in view of Lee and Lee2 teaches The method according to claim 1. Levinson further teaches wherein the transmitted sensor data is a subset of the sensor data obtained from the one or more sensors of the vehicle such that the supplementary world-view is based on a subset of the sensor data used for the local processing.
{Column 37 “FIG. 36 is a diagram depicting a mapping engine configured to generate mapping data adaptively for autonomous vehicles responsive to changes in physical environments, according to some examples. Diagram 3600 depicts a mapping engine 3654 disposed in an autonomous vehicle service platform 3601 communicatively coupled via a communication layer (not shown) to one or more autonomous vehicles 3630. Mapping engine 3654 is configured to generate map data, and to modify map data adaptively responsive to changes in physical environments in which autonomous vehicle 3630 transits. In the example shown, mapping engine 3654 may generate mapping data based on sensor data received from autonomous vehicle 3630, which is depicted as having any number of sensors or sensor devices 3604a, 3604b, and 3604c, of a sensor type 3602a, a sensor type 3602b, and a sensor type 3602c, respectively. Autonomous vehicle 3630 may include any number of other sensors or sensor devices 3604n having any other sensor types 3602n. Sensors 3604a, 3604b, 3604c, and 3604n respectively generate sensor data 3607a, 3607b, 3607c, and 3607n, one or more of which may be received into mapping engine 3654 for generating map data 3659 (e.g., 2D, 3D, and/or 4D map data). Map data 3659 may be transmitted to autonomous vehicle 3630 for storage in map repository 3605a and for use to facilitate localization as well as other functionalities. In particular, autonomous vehicle 3630 may include a localizer (not shown) that uses map data in map repository 3605a to determine a location and/or local pose of the autonomous vehicle at any time, including during transit.”
}
Regarding Claim 3, Levinson in view of Lee and Lee2 teaches The method according to claim 2. Levinson further teaches wherein the sensor data used for the local processing comprises a first data stream from the one or more sensors of the vehicle, the first data stream having a first sample rate; and wherein the transmitted sensor data comprises a second data stream from the one or more sensors, the second data stream having a second sample rate lower than the first sample rate.
{Column 16 “FIG. 7 is a diagram depicting an example of an autonomous vehicle service platform implementing redundant communication channels to maintain reliable communications with a fleet of autonomous vehicles, according to some embodiments. Diagram 700 depicts an autonomous vehicle service platform 701 including a reference data generator 705, a vehicle data controller 702, an autonomous vehicle fleet manager 703, a teleoperator manager 707, a simulator 740, and a policy manager 742. Reference data generator 705 is configured to generate and modify map data and route data (e.g., RNDF data). Further, reference data generator 705 may be configured to access 2D maps in 2D map data repository 720, access 3D maps in 3D map data repository 722, and access route data in route data repository 724. Other map representation data and repositories may be implemented in some examples, such as 4D map data including Epoch Determination. Vehicle data controller 702 may be configured to perform a variety of operations. For example, vehicle data controller 702 may be configured to change a rate that data is exchanged between a fleet of autonomous vehicles and platform 701 based on quality levels of communication over channels 770. During bandwidth-constrained periods, for example, data communications may be prioritized such that teleoperation requests from autonomous vehicle 730 are prioritized highly to ensure delivery. Further, variable levels of data abstraction may be transmitted per vehicle over channels 770, depending on bandwidth available for a particular channel. For example, in the presence of a robust network connection, full Lidar data (e.g., substantially all Lidar data, but also may be less) may be transmitted, whereas in the presence of a degraded or low-speed connection, simpler or more abstract depictions of the data may be transmitted (e.g., bounding boxes with associated metadata, etc.).”
Column 45 “Local map generator 3940 may implement logic configured to perform simultaneous localization and mapping (“SLAM”) or any suitable mapping technique. In at least some examples, local map generator 3940 may implement “online” map generation techniques in which one or more portions of raw sensor data from sensor types 3902a to 3902c may be received in real-time (or nearly real-time) to generate map data (or identify changes thereto) with which to navigate autonomous vehicle 3930. Local map generator 3940 may also implement a distance transform, such as signed distance function (“SDF”), to determine surfaces external to an autonomous vehicle. In one example, a truncated sign distance function (“TSDF”), or equivalent, may be implemented to identify one or more points on a surface relative to a reference point (e.g., one or more distances to points on a surface of an external object), whereby the TSDF function may be used to fuse sensor data and surface data to form three-dimensional local map data 3941.”
}
Regarding Claim 12, Levinson in view of Lee and Lee2 teaches The method according to claim 1. Levinson further teaches wherein forming the augmented world-view comprises: processing, using the local perception module of the ADS, the received off-board processed data in order to augment the local world-view of the ADS; and combining the supplementary world-view with the generated local world-view of the ADS.
{Column 38 “In some cases, map data 3659 generated by mapping engine 3654 may be used in combination with locally-generated map data (not shown), as generated by a local map generator (not shown) in autonomous vehicle 3630. For example, an autonomous vehicle controller (not shown) may detect that one or more portions of map data in map repository 3605a varies from one or more portions of locally-generated map data. Logic in the autonomous vehicle controller can analyze the differences in map data (e.g., variation data) to identify a change in the physical environment (e.g., the addition, removal, or change in a static object). In a number of examples, the term “variation data” may refer to the differences between remotely-generated and locally-generated map data. Based on the changed portion of an environment, the autonomous vehicle controller may implement varying proportional amounts of map data in map repository 3605a and locally-generated map data to optimize localization. For example, an autonomous vehicle controller may generate hybrid map data composed of both remotely-generated map data and locally-generated map data to optimize the determination of the location or local pose of autonomous vehicle 3630. Further, an autonomous vehicle controller, upon detecting variation data, may cause transmission (at various bandwidths or data rates) of varying amounts of sensor-based data or other data to autonomous vehicle service platform 3601. For example, autonomous vehicle service platform 3601 may receive different types of data at different data rates based on, for instance, the criticality of receiving guidance from a teleoperator. 
As another example, subsets of sensor data 3607a, 3607b, 3607c, and 3607n may be transmitted (e.g., at appropriate data rates) to, for example, modify map data to form various degrees of updated map data in real-time (or near real-time), and to further perform one or more of the following: (1) evaluate and characterize differences in map data, (2) propagate updated portions of map data to other autonomous vehicles in the fleet, (3) generate a notification responsive to detecting map data differences to a tele-operator computing device, (4) generate a depiction of the environment (and the changed portion thereof), as sensed by various sensor devices 3604a, 3604b, 3604c, and 3604n, to display at any sufficient resolution in a user interface of a teleoperator computing device. Note that the above-described examples are not limiting, and any other map-related functionality for managing a fleet of autonomous vehicles may be implemented using mapping engine 3654 in view of detected changes in physical environments relative to map data.”
}
Regarding Claim 15, Levinson in view of Lee and Lee2 teaches The method according to claim 1. Levinson further teaches wherein processing using the local perception module of the ADS, the sensor data from the one or more sensors of the vehicle comprises employing a detection algorithm such that the generated local world-view of the ADS comprises a first set of detected perceivable aspects, wherein the supplementary world-view comprises a second set of detected perceivable aspects different from the first set of detected perceivable aspects, and wherein the augmented world-view of the ADS comprises a combination of the first set of detected perceivable aspects and the second set of detected perceivable aspects.
{Column 38 “In some cases, map data 3659 generated by mapping engine 3654 may be used in combination with locally-generated map data (not shown), as generated by a local map generator (not shown) in autonomous vehicle 3630. For example, an autonomous vehicle controller (not shown) may detect that one or more portions of map data in map repository 3605a varies from one or more portions of locally-generated map data. Logic in the autonomous vehicle controller can analyze the differences in map data (e.g., variation data) to identify a change in the physical environment (e.g., the addition, removal, or change in a static object). In a number of examples, the term “variation data” may refer to the differences between remotely-generated and locally-generated map data. Based on the changed portion of an environment, the autonomous vehicle controller may implement varying proportional amounts of map data in map repository 3605a and locally-generated map data to optimize localization. For example, an autonomous vehicle controller may generate hybrid map data composed of both remotely-generated map data and locally-generated map data to optimize the determination of the location or local pose of autonomous vehicle 3630.”
}
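For illustration of the mechanism described in the quoted passage (detecting "variation data" between remotely-generated and locally-generated map data and composing hybrid map data), a minimal sketch follows. All names, the tile representation, and the blending rule are hypothetical assumptions by the editor, not drawn from Levinson:

```python
# Hypothetical sketch of Levinson's hybrid map data concept (col. 38).
# Tile names, the variation test, and the blending rule are illustrative
# assumptions only, not the reference's actual implementation.

def variation_data(remote_tiles, local_tiles):
    """Identify map portions where remotely-generated and locally-generated
    map data differ, indicating a possible change in the physical
    environment (addition, removal, or change of a static object)."""
    return {tile for tile in remote_tiles
            if local_tiles.get(tile) != remote_tiles[tile]}

def hybrid_map(remote_tiles, local_tiles):
    """Compose hybrid map data: prefer locally-generated data for changed
    portions of the environment, remotely-generated data elsewhere."""
    changed = variation_data(remote_tiles, local_tiles)
    return {tile: (local_tiles[tile]
                   if tile in changed and tile in local_tiles
                   else remote_tiles[tile])
            for tile in remote_tiles}
```

Under this sketch, a tile repainted since the remote map was built would appear in `variation_data` and the locally-generated version would be used for localization.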
Regarding claim 21, it recites A non-transitory computer-readable storage medium having limitations similar to those of claim 1 and therefore is rejected on the same basis.
Additionally Levinson teaches A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a vehicle control system, the one or more programs comprising instructions
{ Column 34-35 “According to some examples, computing platform 3300 performs specific operations by processor 3304 executing one or more sequences of one or more instructions stored in system memory 3306, and computing platform 3300 can be implemented in a client-server arrangement, peer-to-peer arrangement, or as any mobile computing device, including smart phones and the like. Such instructions or data may be read into system memory 3306 from another computer readable medium, such as storage device 3308. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation. Instructions may be embedded in software or firmware. The term “computer readable medium” refers to any tangible medium that participates in providing instructions to processor 3304 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks and the like. Volatile media includes dynamic memory, such as system memory 3306.”
}
Regarding claim 22, it recites An in-vehicle system having limitations similar to those of claim 1 and therefore is rejected on the same basis.
Additionally Levinson teaches An in-vehicle system for augmenting capabilities of an Automated Driving System (ADS) of a vehicle, the in-vehicle system comprising a control circuitry configured to:
{abstract “Various embodiments relate generally to autonomous vehicles and associated mechanical, electrical and electronic hardware, computer software and systems, and wired and wireless network communications to provide map data for autonomous vehicles. In particular, a method may include accessing subsets of multiple types of sensor data, aligning subsets of sensor data relative to a global coordinate system based on the multiple types of sensor data to form aligned sensor data, and generating datasets of three-dimensional map data. The method further includes detecting a change in data relative to at least two datasets of the three-dimensional map data and applying the change in data to form updated three-dimensional map data. The change in data may be representative of a state change of an environment at which the sensor data is sensed. The state change of the environment may be related to the presence or absences of an object located therein.”
}
Regarding claim 23, it recites A ground vehicle having limitations similar to those of claim 1 and therefore is rejected on the same basis.
Additionally Levinson teaches A ground vehicle comprising:
at least one sensor configured to monitor a surrounding environment of the vehicle; a communication circuitry for transmitting/receiving wireless signals to/from a remote system via a communication network;
{abstract “Various embodiments relate generally to autonomous vehicles and associated mechanical, electrical and electronic hardware, computer software and systems, and wired and wireless network communications to provide map data for autonomous vehicles. In particular, a method may include accessing subsets of multiple types of sensor data, aligning subsets of sensor data relative to a global coordinate system based on the multiple types of sensor data to form aligned sensor data, and generating datasets of three-dimensional map data. The method further includes detecting a change in data relative to at least two datasets of the three-dimensional map data and applying the change in data to form updated three-dimensional map data. The change in data may be representative of a state change of an environment at which the sensor data is sensed. The state change of the environment may be related to the presence or absences of an object located therein.”
}
an in-vehicle system for augmenting capabilities of an Automated Driving System (ADS) of a vehicle, the in-vehicle system comprising a control circuitry configured to:
{Column 34-35 “According to some examples, computing platform 3300 performs specific operations by processor 3304 executing one or more sequences of one or more instructions stored in system memory 3306, and computing platform 3300 can be implemented in a client-server arrangement, peer-to-peer arrangement, or as any mobile computing device, including smart phones and the like. Such instructions or data may be read into system memory 3306 from another computer readable medium, such as storage device 3308. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation. Instructions may be embedded in software or firmware. The term “computer readable medium” refers to any tangible medium that participates in providing instructions to processor 3304 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks and the like. Volatile media includes dynamic memory, such as system memory 3306.”
}
Claim(s) 5-6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. (US 9612123 B1, hereinafter known as Levinson) in view of Lee (US 20180217233 A1), Lee et al. (US 20180268220 A1, hereinafter known as Lee2), and Duan et al. (US 20210331703 A1, hereinafter known as Duan).
Duan was cited in a previous Office action.
Regarding Claim 5, Levinson in view of Lee and Lee2 teaches The method of claim 1.
Levinson in view of Lee and Lee2 does not teach, further comprising: comparing the local world-view of the ADS with the supplementary world-view of the ADS so to determine a confidence level of the local world-view based on the comparison; and generating, at an output, a confidence signal indicative of the determined confidence level of the local world-view of the ADS.
However, Duan teaches further comprising: comparing the local world-view of the ADS with the supplementary world-view of the ADS so to determine a confidence level of the local world-view based on the comparison; and generating, at an output, a confidence signal indicative of the determined confidence level of the local world-view of the ADS.
{Para [0027] “In at least one example, the consistency checking component 122 can receive the estimated map data 112 and the stored map data 120 and the consistency checking component 122 can compare the estimated map data 112 and the stored map data 120 to determine whether the stored map data 120 is consistent with the estimated map data 112 and is therefore reliable. In at least one example, the consistency checking component 122 can utilize one or more consistency “checks” or evaluations to evaluate portions of the environment 104 (e.g., which can be associated with individual, corresponding pixels of the estimated map data 112 and the stored map data 120).”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Levinson in view of Lee and Lee2 to incorporate the teachings of Duan to check consistency between the two world-views, because doing so improves safety (para [0010] “As such, it is imperative that stored maps are reliable so that autonomous vehicles can make decisions while traversing an environment to ensure safety for passengers and surrounding persons and objects. Techniques described herein avail methods, apparatuses, and systems to enable determining whether a stored map of an environment is reliable, by comparing the stored map to an estimated map generated based at least in part on sensor data received in near real-time from sensor component(s) onboard an autonomous vehicle.”).
Regarding Claim 6, Levinson in view of Lee, Lee2, and Duan teaches The method of claim 5.
Duan teaches comparing the determined confidence level of the local world-view with a confidence level threshold; and
if the determined confidence level is below the confidence level threshold:
generating a signal indicative of an action to be executed by a control module of the ADS, the action being one of a hand-over request, a dynamic driving task fall-back, and an increase of safety margins of at least one ADS feature.
{Para [0029] “In at least one example, the consistency output 124 can be provided to one or more down-stream components of the vehicle 102 for making decisions on how to traverse the environment 104. That is, the consistency output 124 can be monitored and the vehicle 102 (e.g., computing device(s) associated therewith) can use the consistency output 124 to determine how to control the vehicle 102. In at least one example, if the consistency output 124 indicates that the stored map data 120 is not consistent with the estimated map data 112, the computing device(s) associated with the vehicle 102 can cause the vehicle 102 to decelerate and/or stop. In some examples, the vehicle 102 can decelerate and travel at a velocity below a threshold until the inconsistency is resolved (e.g., confidence score(s) meet or exceed respective threshold(s)). Furthermore, in at least one example, if the consistency output 124 indicates that the stored map data 120 is not consistent with the estimated map data 112, the computing device(s) associated with the vehicle 102 can determine to use the estimated map instead of the stored map, at least until the inconsistency is resolved. In an additional or alternative example, the vehicle 102 can alter a planned trajectory to include regions of high consistency (e.g., confidence score(s) that meet or exceed respective threshold(s)) and avoid regions of low consistency (e.g., confidence score(s) below respective threshold(s)). That is, the confidence score(s) can be input into a planner component, described below, for use in determining and/or modifying a trajectory along which the vehicle 102 can travel.”
Para [0030] “FIG. 2 illustrates an example of the estimated map data 112 and the stored map data 120, as described herein. Both the estimated map data 112 and the stored map data 120 are top-down representations of the environment 104 surrounding a vehicle 102. As illustrated, both the estimated map data 112 and the stored map data 120 can include labels (e.g., masks) that indicate portions of the environment 104 that are associated with an “off-road” indicator 200, indicating that the corresponding portion of the environment 104 is associated with a surface that is not drivable, and an “on-road” indicator 202, indicating that the corresponding portion of the environment 104 is associated with a drivable surface. Furthermore, lane lines, marking driving lanes are also depicted. For example, lane line 206 is shown in both the estimated map data 112 and the stored map data 120. As illustrated, when compared, the estimated map data 112 and the stored map data 120 are inconsistent. That is, the lane lines in the estimated map data 112 are displaced from the dashed lines that represent the lane lines in the stored map data 120. This could result, for example, from the lane lines having been repainted at some time after the stored map was updated. In some examples, the displacement—or difference—can meet or exceed a threshold and in such an example, the consistency output 124 can indicate an inconsistency. As such, one or more down-stream components can make decisions on how to traverse the environment 104 based at least in part on the determined inconsistency.”
}
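For illustration of the mechanism mapped to claims 5-6 (comparing the local world-view with the supplementary world-view to determine a confidence level and, below a threshold, signaling an action), a minimal sketch follows. The function names, the set-overlap agreement metric, and the default threshold are hypothetical assumptions by the editor, not drawn from Duan:

```python
# Hypothetical sketch of the claimed confidence-level comparison (claims 5-6).
# All names, the agreement metric, and the threshold value are illustrative
# assumptions, not drawn from Levinson, Lee, Lee2, or Duan.

def compare_world_views(local_view, supplementary_view):
    """Return a confidence level in [0, 1]: the fraction of detected
    perceivable aspects on which the local and supplementary
    world-views agree (Jaccard overlap of the two sets)."""
    if not local_view and not supplementary_view:
        return 1.0
    agreement = len(set(local_view) & set(supplementary_view))
    total = len(set(local_view) | set(supplementary_view))
    return agreement / total

def confidence_signal(local_view, supplementary_view, threshold=0.8):
    """Generate a signal indicative of the determined confidence level
    and, if it falls below the threshold, an action for the ADS
    control module (claim 6)."""
    confidence = compare_world_views(local_view, supplementary_view)
    if confidence < threshold:
        # One of: hand-over request, dynamic-driving-task fallback,
        # or increased safety margins of at least one ADS feature.
        return {"confidence": confidence, "action": "hand_over_request"}
    return {"confidence": confidence, "action": None}
```

For example, a local view detecting only a car while the supplementary view also detects a pedestrian yields a confidence of 0.5, triggering the below-threshold action.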
Regarding Claim 14, Levinson in view of Lee and Lee2 teaches The method of claim 1.
Levinson in view of Lee and Lee2 does not teach, generating, at an output, a worldview-feedback signal for transmission to the remote system, wherein the world-view feedback signal is indicative of a level of incorporation of the off-board processed data in the augmented world-view.
However, Duan teaches generating, at an output, a worldview-feedback signal for transmission to the remote system, wherein the world-view feedback signal is indicative of a level of incorporation of the off-board processed data in the augmented world-view.
{Para [0027] “In some examples, a consistency output indicating an inconsistency, can trigger remapping, additional data collection, or the like by other vehicles in a fleet of vehicles. Further, in some examples, a consistency output indicating an inconsistency can cause the vehicle 302 to move closer to inconsistent portions of the stored map to obtain a new reading (e.g., in an effort to determine if the output is the same or different).”
The request for remapping triggered by a detected inconsistency can be considered a world-view feedback signal.
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Levinson in view of Lee and Lee2 to incorporate the teachings of Duan to check consistency between the two world-views, because doing so improves safety (para [0010] “As such, it is imperative that stored maps are reliable so that autonomous vehicles can make decisions while traversing an environment to ensure safety for passengers and surrounding persons and objects. Techniques described herein avail methods, apparatuses, and systems to enable determining whether a stored map of an environment is reliable, by comparing the stored map to an estimated map generated based at least in part on sensor data received in near real-time from sensor component(s) onboard an autonomous vehicle.”).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. (US 9612123 B1, hereinafter known as Levinson) in view of Lee (US 20180217233 A1), Lee et al. (US 20180268220 A1, hereinafter known as Lee2), and Staehlin (US 20200010083 A1).
Staehlin was cited in a previous Office action.
Regarding Claim 17, Levinson in view of Lee and Lee2 teaches The method of claim 1. Levinson further teaches further comprising:
locally generating a candidate path based on the generated augmented world-view of the ADS;
selecting one candidate path for execution by the ADS based on the locally generated candidate path,
generating, at an output, a path signal indicative of the selected candidate path.
{Column 37 “Planner 364 is configured to receive perception data from perception engine 366, and may also include localizer data from localizer 368. According to some examples, the perception data may include an obstacle map specifying static and dynamic objects located in the vicinity of an autonomous vehicle, whereas the localizer data may include a local pose or position. In operation, planner 364 generates numerous trajectories, and evaluates the trajectories, based on at least the location of the autonomous vehicle against relative locations of external dynamic and static objects. Planner 364 selects an optimal trajectory based on a variety of criteria over which to direct the autonomous vehicle in way that provides for collision-free travel. In some examples, planner 364 may be configured to calculate the trajectories as probabilistically-determined trajectories. Further, planner 364 may transmit steering and propulsion commands (as well as decelerating or braking commands) to motion controller 362. Motion controller 362 subsequently may convert any of the commands, such as a steering command, a throttle or propulsion command, and a braking command, into control signals (e.g., for application to actuators or other mechanical interfaces) to implement changes in steering or wheel angles 351 and/or velocity 353.”
}
Levinson in view of Lee and Lee2 does not teach, receiving, from the remote system, a remotely generated candidate path, wherein the remotely generated path is generated by the remote system based on the supplementary world-view of the ADS;
selecting one candidate path for execution by the ADS based on the locally generated candidate path, the remotely generated candidate path, and at least one predefined criterion; and
However, Staehlin teaches receiving, from the remote system, a remotely generated candidate path, wherein the remotely generated path is generated by the remote system based on the supplementary world-view of the ADS;
selecting one candidate path for execution by the ADS based on the locally generated candidate path, the remotely generated candidate path, and at least one predefined criterion; and
{Para [0009] “In other words, it is a good idea to ask the server in good time prior to those situations in which the environment sensors can no longer supply all of the necessary sensor data, whether an offboard trajectory is provided or available for these areas. Said offboard trajectory can subsequently be sent or respectively transmitted by the server to the communication device of the driver assistance system of the vehicle. The control unit can use this offboard trajectory and control the vehicle according to said offboard trajectory, even without sensor data of the environment sensors. The environment model or respectively the environment sensors then supply information regarding “crash objects”, for example a vehicle driving ahead, but no longer the sensor data or respectively the information for the trajectory or respectively the path planning of the vehicle.”
}
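For illustration of the claim 17 mapping (selecting one candidate path from a locally generated candidate and a remotely generated candidate using at least one predefined criterion), a minimal sketch follows. The lowest-cost criterion and all names are hypothetical assumptions by the editor, not drawn from Levinson or Staehlin:

```python
# Hypothetical sketch of the claim 17 selection step. The cost-based
# criterion and all names are illustrative assumptions only, not the
# references' actual implementation.

def select_candidate_path(local_path, remote_path, cost):
    """Select one candidate path for execution by the ADS based on the
    locally generated candidate, the remotely generated candidate, and a
    predefined criterion (here, lowest cost). Returns a path signal
    indicative of the selected candidate and its origin."""
    candidates = [p for p in (local_path, remote_path) if p is not None]
    selected = min(candidates, key=cost)
    return {"selected_path": selected,
            "source": "local" if selected is local_path else "remote"}
```

Under this sketch, if no offboard trajectory is available (`remote_path` is `None`), the locally generated candidate is selected by default; otherwise the predefined criterion decides between the two.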
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Levinson in view of Lee and Lee2 to incorporate the teachings of Staehlin to receive a trajectory from a server, because doing so improves safety (para [0010] “The control unit of the driver assistance system can additionally be designed to always request offboard trajectories from the server so that the latter can, if required, replace the trajectory calculated by the control unit. Consequently, the probability of failure for the driver assistance system can be minimized, since a suitable offboard trajectory is always available and present. Consequently, unforeseeable problems or difficulties during the calculation of the trajectory can be handled, and the short-term failure of environment sensors can be compensated for.”).
Allowable Subject Matter
Claims 4, 7, 9-11, 16, and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER MATTA whose telephone number is (571)272-4296. The examiner can normally be reached Mon - Fri 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.G.M./Examiner, Art Unit 3668
/JAMES J LEE/Supervisory Patent Examiner, Art Unit 3668