DETAILED ACTION
Claims 1-20 are pending and are examined in this Office action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2019/0072965) in view of Ahuja et al. (US 2020/0326667 A1), hereinafter referred to as Zhang and Ahuja, respectively.
Regarding Claim 1, Zhang teaches a method (Para [0017]: “As described in various example embodiments, a prediction-based system and method for trajectory planning of autonomous vehicles are described herein.”), comprising:
receiving input data comprising (i) one or more first images including static features that identify map data, and (ii) one or more second images including time-dependent features that identify a position and movement of one or more road agents over time (Para [0058]: “the prediction-based trajectory planning module 200, and the trajectory processing module 173 therein, can receive input perception data 210 from one or more of the vehicle sensor subsystems 144, including one or more cameras. The image data from the vehicle sensor subsystems 144 can be processed by an image processing module to identify proximate agents or other objects (e.g., moving vehicles, dynamic objects, or other objects in the proximate vicinity of the vehicle 105).”);
generating, via a trained behavioral model, a plurality of trajectory predictions for a target road agent from among the one or more road agents based upon the input data, each one of the plurality of trajectory predictions comprising a set of positions identified with a respective trajectory of the target road agent to follow over a period of time (Para [0058] : “ The trained trajectory prediction module 175 is provided in an example embodiment to anticipate or predict the likely actions or reactions of the proximate agents to the host vehicle's 105 change in context (e.g., speed, heading, or the like). Thus, the trajectory processing module 173 can provide the first proposed trajectory of the host vehicle 105 in combination with the predicted trajectories of proximate agents produced by the trajectory prediction module 175. The trajectory prediction module 175 can generate the likely trajectories, or a distribution of likely trajectories of proximate agents, which are predicted to result from the context of the host vehicle 105 (e.g., following the first proposed trajectory). These likely or predicted trajectories of proximate agents can be determined based on the machine learning techniques configured from the training scenarios produced from prior real-world human driver behavior model data collections gathered and assimilated into training data using the training data collection system 201 as described above. These likely or predicted trajectories can also be configured or tuned using the configuration data 174. Over the course of collecting data from many human driver behavior model driving scenarios and training machine datasets and rule sets (or neural nets or the like), the likely or predicted trajectories of proximate agents can be determined with a variable level of confidence or probability.”);
generating, via the trained behavioral model, a certainty score identified with one or more of the plurality of trajectory predictions (Para [0058]: “The confidence level or probability value related to a particular predicted trajectory can be retained or associated with the predicted trajectory of each proximate agent detected to be near the host vehicle 105 at a point in time corresponding to the desired execution of the first proposed trajectory. The trajectory prediction module 175 can generate these predicted trajectories and confidence levels for each proximate agent relative to the context of the host vehicle 105. The trajectory prediction module 175 can generate the predicted trajectories and corresponding confidence levels for each proximate agent as an output relative to the context of the host vehicle 105.”);
and executing a vehicle-based function in accordance with one of the plurality of trajectory predictions based upon the certainty score and the uncertainty score (Para [0058]: “If the process of an example embodiment as described above results in predicted trajectories, confidence levels, and related scores that satisfy the pre-defined goals, the corresponding proposed trajectory 220 is provided as an output from the prediction-based trajectory planning module 200 as shown in FIG. 6.”).
Zhang may not expressly teach computing, via the trained behavioral model, an uncertainty score that is indicative of a novelty of features obtained from the input data with respect to previous computations of trajectory predictions performed via the trained behavioral model.
Ahuja teaches computing, via the trained behavioral model, an uncertainty score that is indicative of a novelty of features obtained from the input data with respect to previous computations of trajectory predictions performed via the trained behavioral model (Para [0038]: “Moreover, because it is difficult to comprehend all corner cases in the training stage, it is important to have an automated methodology for data collection that can flag novel data. The aspects described herein facilitate the identification of high epistemic uncertainty estimates for the majority of sensors 102.1-102.N to indicate that the data points are novel and need to be included in the training stage. To do so, if the epistemic uncertainty value exceeds the optimum threshold, the control unit 110 may determine that the data acquired within some time period is novel and does not fit the current trained model used by the architecture 100. In such a case, then the control unit 110 may initiate a re-training sequence. Such a re-training step incorporates this novel data to update the model parameters to handle similar data in the future.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ahuja to include computing, via the trained behavioral model, an uncertainty score that is indicative of a novelty of features obtained from the input data with respect to previous computations of trajectory predictions performed via the trained behavioral model. Doing so would optimize trajectory prediction and improve autonomous operations, as disclosed by Ahuja.
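For illustration of the mechanism cited from Ahuja's Para [0038], a minimal sketch of threshold-based novelty flagging follows. The function name, threshold value, and sample data are hypothetical and are not drawn from the reference; the sketch assumes only that a per-sample epistemic uncertainty estimate is available.

```python
import numpy as np

def flag_novel_samples(epistemic_uncertainty, threshold):
    """Return a boolean mask of samples whose epistemic uncertainty
    exceeds the threshold, i.e. samples treated as novel with respect
    to the currently trained model."""
    u = np.asarray(epistemic_uncertainty)
    return u > threshold

# Samples flagged as novel would be added to the training set and a
# re-training sequence initiated, per the cited passage.
scores = [0.02, 0.03, 0.41, 0.05]
novel_mask = flag_novel_samples(scores, threshold=0.30)
print(novel_mask.tolist())  # → [False, False, True, False]
```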
Similarly, Claim 11 is rejected on the same rationale.
Regarding Claim 2, Zhang in view of Ahuja teaches the method of claim 1. Zhang also teaches wherein the certainty score comprises a confidence score that is indicative of a likelihood of one or more respective ones of the plurality of trajectory predictions being selected based upon training data used to train the trained behavioral model (Para [0058]).
Similarly, Claim 12 is rejected on the same rationale.
Regarding Claim 3, Zhang in view of Ahuja teaches the method of claim 1. Ahuja teaches wherein the certainty score comprises a computed Mahalanobis distance identified with the plurality of trajectory predictions for the target road agent over the period of time (Para [0048]: “Other techniques, in contrast, have adopted a generative approach proposing fitting class-conditional multivariate Gaussian distributions to the pre-trained features of a DNN. The confidence score in accordance with the fitting of class-conditional multivariate Gaussian distributions may be defined as the “Mahalanobis distance” with respect to the closest class conditional distribution. Although this technique accomplishes the evaluation of both out-of-distribution and adversarial samples, it also assumes homoscedastic distributions (i.e. all classes have identical covariance) that is not valid, and leads to sub-optimal performance.”).
Similarly, Claim 13 is rejected on the same rationale.
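For context on the technique cited from Para [0048], the following sketch computes a Mahalanobis-distance score against the closest class-conditional Gaussian. The shared covariance matrix reflects the homoscedastic assumption the passage criticizes; all names and values are illustrative, not drawn from the reference.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of x from a Gaussian with the given
    mean and covariance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def score_to_closest_class(x, class_means, shared_cov):
    """Distance to the closest class-conditional distribution,
    under the homoscedastic (shared-covariance) assumption."""
    return min(mahalanobis(x, m, shared_cov) for m in class_means)

means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
x = np.array([0.0, 3.0])
print(score_to_closest_class(x, means, np.eye(2)))  # → 3.0
```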
Regarding Claim 4, Zhang in view of Ahuja teaches the method of claim 1. Ahuja teaches wherein the executing the vehicle-based function comprises executing a first vehicle-based function when the uncertainty score is higher than a predetermined threshold value, and executing a second vehicle-based function when the uncertainty score is less than the predetermined threshold value (Para [0041] : “Again, the UE blocks 106.1, 106.2 may calculate uncertainty estimates by sampling the categorical distributions output by each B-DNN 104.1, 104.2 as shown in FIG. 2 using predictive distributions obtained from T Monte Carlo forward passes. This may be calculated, for example, via the UE blocks 106.1, 106.2 sampling the weights from the learned posterior distribution of model parameters implemented via the B-DNNs 104.1-104.N in accordance with Bayes' rule, as further discussed in the Appendix. The uncertainty estimates are thus based upon the weighted average between the two sensors 102.1, 102.2 in this example, which is used to gate the contribution of unreliable sensor data in accordance with the gating function described herein. Again, the uncertainty estimate data associated with each of the sensors 102.1, 102.N may be monitored by the control unit 110 over time to distinguish between aleatoric uncertainty and epistemic uncertainty, and taking specific types of actions when each is detected.”).
Similarly, Claim 14 is rejected on the same rationale.
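As background for the Monte Carlo forward-pass uncertainty estimation described in Para [0041], a minimal sketch follows. The toy models and the use of predictive entropy as the uncertainty measure are assumptions for illustration only, not details taken from the reference.

```python
import numpy as np

def mc_uncertainty(forward_pass, x, T=20):
    """Average T stochastic forward passes (e.g. with weights sampled
    from a learned posterior) and return the predictive mean together
    with the entropy of that mean as an uncertainty estimate."""
    rng = np.random.default_rng(0)
    probs = np.stack([forward_pass(x, rng) for _ in range(T)])
    mean = probs.mean(axis=0)
    entropy = float(-np.sum(mean * np.log(mean + 1e-12)))
    return mean, entropy

# A deterministic, confident "model" versus one whose outputs vary
# from pass to pass: the varying model yields higher uncertainty.
confident = lambda x, rng: np.array([0.99, 0.01])
varying = lambda x, rng: rng.dirichlet([1.0, 1.0])
_, h_low = mc_uncertainty(confident, None)
_, h_high = mc_uncertainty(varying, None)
print(h_low < h_high)  # → True
```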
Regarding Claim 5, Zhang in view of Ahuja teaches the method of claim 1. Ahuja teaches wherein the uncertainty score is computed periodically in response to an expiration of a predetermined period of time (Para [0061]: “To provide an illustrative example, the DNN architecture 400 may, at inference for a given sample, carry a forward pass through the DNN architecture 400 as shown in FIG. 4 to generate the features for that sample. Next, at each layer of the DNN architecture 400, the feature vector is reduced by projecting the feature vector onto the first m principal components that were previously learned during the training stage as discussed above with reference to FIG. 3. Finally, to obtain the uncertainty, a log-likelihood score of the PCA-reduced feature vector is computed with respect to the previously-learned feature distribution (i.e. during the training stage). In an aspect, the log-likelihood scores of the features of a particular sample are calculated with respect to the learned distributions 402 and used to derive the uncertainty estimates. The uncertainty estimates enable the discrimination of in-distribution samples (which should have a high likelihood) from Out of Distribution (OOD) or adversarial samples (which should have a low likelihood).”).
Similarly, Claim 15 is rejected on the same rationale.
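As background for the technique cited from Para [0061], the following sketch scores a feature vector by projecting it onto learned principal components and computing a log-likelihood under the learned feature distribution. The diagonal-covariance simplification and all names are assumptions for illustration, not details of the reference.

```python
import numpy as np

def fit_pca_gaussian(train_feats, m):
    """Training stage: learn the first m principal components of the
    features and a diagonal Gaussian over the projected features."""
    mu = train_feats.mean(axis=0)
    centered = train_feats - mu
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    comps = vt[:m]                 # (m, d) principal directions
    z = centered @ comps.T         # projected training features
    return mu, comps, z.mean(axis=0), z.var(axis=0) + 1e-6

def log_likelihood(feat, mu, comps, z_mean, z_var):
    """Inference stage: project a feature vector onto the learned
    components and score it under the learned distribution; low
    scores flag out-of-distribution or adversarial samples."""
    z = (feat - mu) @ comps.T
    return float(-0.5 * np.sum((z - z_mean) ** 2 / z_var
                               + np.log(2.0 * np.pi * z_var)))

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 8))
mu, comps, z_mean, z_var = fit_pca_gaussian(train, m=3)
in_score = log_likelihood(train[0], mu, comps, z_mean, z_var)
# Perturb far along the first principal direction to simulate OOD.
ood_score = log_likelihood(train[0] + 100.0 * comps[0],
                           mu, comps, z_mean, z_var)
print(in_score > ood_score)  # → True
```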
Regarding Claim 6, Zhang in view of Ahuja teaches the method of claim 1. Ahuja teaches wherein the trained behavioral model comprises a Bayesian network architecture having a probability density function (PDF), and wherein the uncertainty score is computed based upon the PDF (Para [0014]: “In Section I, aspects are discussed related to the use of an uncertainty aware multimodal Bayesian fusion framework for autonomous vehicle (AV) applications. Bayesian DNNs are used in these aspects to model and estimate the uncertainty in the sensed data for each modality. The importance given to a modality during fusion is then based on its estimated uncertainty, which results in reliable and robust outcomes. In Section II, aspects are discussed related to modeling the outputs of the various layers (deep features) with parametric probability distributions once training is completed.”).
Similarly, Claim 16 is rejected on the same rationale.
Regarding Claim 7, Zhang in view of Ahuja teaches the method of claim 1. Zhang teaches wherein the trained behavioral model comprises an ensemble of likelihood models, each one of the likelihood models being configured to output, as a respective certainty score, a respective confidence score for each one of a set of trajectory predictions that include the plurality of trajectory predictions (Para [0058]).
Similarly, Claim 17 is rejected on the same rationale.
Regarding Claim 8, Zhang in view of Ahuja teaches the method of claim 1. Zhang teaches wherein the trained behavioral model is configured to generate the plurality of trajectory predictions by selecting, from among the set of trajectory predictions, a predetermined number of trajectory predictions having a highest respective confidence score in the set of trajectory predictions.
Similarly, Claim 18 is rejected on the same rationale.
Regarding Claim 9, Zhang in view of Ahuja teaches the method of claim 1. Ahuja teaches wherein the trained behavioral model is configured to generate the plurality of trajectory predictions by selecting, from among the set of trajectory predictions, a number of trajectory predictions based upon a Mahalanobis distance identified with one or more of the set of trajectory predictions (Para [0048]).
Similarly, Claim 19 is rejected on the same rationale.
Regarding Claim 10, Zhang in view of Ahuja teaches the method of claim 1. Zhang teaches further comprising: selecting, based upon the uncertainty score, a subset of the plurality of trajectory predictions; and training a further behavioral model using the subset of the plurality of trajectory predictions (Para [0061] : “Referring now to FIG. 9, a flow diagram illustrates an example embodiment of a system and method 1000 for providing prediction-based trajectory planning of autonomous vehicles. The example embodiment can be configured to: receive training data and ground truth data from a training data collection system, the training data including perception data and context data corresponding to human driving behaviors (processing block 1010); perform a training phase for training a trajectory prediction module using the training data (processing block 1020); receive perception data associated with a host vehicle (processing block 1030); and perform an operational phase for extracting host vehicle feature data and proximate vehicle context data from the perception data, using the trained trajectory prediction module to generate predicted trajectories for each of one or more proximate vehicles near the host vehicle, generating a proposed trajectory for the host vehicle, determining if the proposed trajectory for the host vehicle will conflict with any of the predicted trajectories of the proximate vehicles, and modifying the proposed trajectory for the host vehicle until conflicts are eliminated (processing block 1040).”).
Similarly, Claim 20 is rejected on the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Siebert et al. (US 2021/0347383) discloses techniques to predict object behavior in an environment. For example, such techniques may include determining a trajectory of the object, determining an intent of the trajectory, and sending the trajectory and the intent to a vehicle computing system to control an autonomous vehicle. The vehicle computing system may implement a machine learned model to process data such as sensor data and map data. The machine learned model can associate different intentions of an object in an environment with different trajectories. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on an object's intentions and trajectories.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDHESH K JHA whose telephone number is (571) 272-6218. The examiner can normally be reached M-F: 0800-1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James J Lee can be reached at 571-270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDHESH K JHA/Primary Examiner, Art Unit 3668