DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is responsive to the following communication: Original claims filed 12/15/2022. This action is made non-final.
3. Claims 1-20 are pending in the case. Claims 1, 8 and 14 are independent claims.
Claim Objections
4. Claims 4-7, 10-13 and 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-3, 8-9 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet (US 20190303759) in view of Panjwani (US 20210166105).
Regarding claim 1, Farabet discloses a method of improving robustness of a deep neural network (DNN), the method comprising:
applying a coverage metric to a trained DNN based on a test set to determine test set adequacy (key performance indicators (KPIs) and/or metrics may be computed for one or more of the current DNNs (e.g., the best performing DNNs) in the model store 306 in order to determine conditions or combinations of conditions on which the current DNNs may not perform sufficiently well. For example, one condition dimension may include properties of the whole image or frame, such as, but not limited to, lighting or illumination (e.g., day, night, cloudy, twilight, backlit, etc.), weather (e.g., clear, rain, snow, fog, etc.), setting (e.g., rural, urban, suburban, highway, etc.), topography (e.g., flat, curve, hill, etc.), geographic region (e.g., Europe, North America, China, etc.), sensor (e.g., camera) properties such as position and/or lens type, and/or a combination thereof. The conditions or a combination of the conditions which the current DNNs are not considered to perform sufficiently well on (e.g., have an accuracy below a desired or required level) may be used to direct mining and labeling of data (e.g., additional data) that may increase the accuracy of the DNNs with reference to the conditions or combination of conditions. In some examples, the mining of the data may be facilitated by use of tags that may have been added during data indexing and/or curation 124 (FIG. 1), see paragraph 0046);
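The per-condition KPI computation Farabet describes in paragraph 0046 can be sketched as follows. This is an illustrative sketch only: the condition tags, the accuracy floor of 0.9, and all function names are assumptions for illustration, not details taken from the reference.

```python
# Hypothetical sketch of per-condition KPI computation (cf. Farabet,
# paragraph 0046). Condition tags and the 0.9 accuracy floor are
# illustrative assumptions, not taken from the reference.

def underperforming_conditions(results, required_accuracy=0.9):
    """results: list of (condition_tag, was_correct) pairs from a test set.
    Returns the condition tags whose accuracy falls below the required
    level; these would then direct mining/labeling of additional data."""
    totals, correct = {}, {}
    for tag, ok in results:
        totals[tag] = totals.get(tag, 0) + 1
        correct[tag] = correct.get(tag, 0) + (1 if ok else 0)
    return sorted(t for t in totals if correct[t] / totals[t] < required_accuracy)

# Toy per-frame results tagged by lighting condition.
results = [("night", False), ("night", False), ("night", True),
           ("day", True), ("day", True)]
weak = underperforming_conditions(results)
```

Here the "night" condition falls below the accuracy floor, so additional night-time data would be mined and labeled, consistent with the quoted teaching.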
monitoring a performance of the trained DNN (Once the DNNs are trained, the DNNs may be used in simulation and/or re-simulation applications 304. The models may then be pruned, optimized, refined, and then deployed as deployed DNNs in the vehicle(s) 102 (e.g., in the software stack(s) 116). Once the DNNs have been trained to an acceptable level of accuracy (e.g., 90%, 95%, 97%, etc.), the training and refinement process may move to a second workflow, such as workflow 300B, paragraph 0043);
based on the performance, applying new data to the trained DNN (The pre-trained models may be used to score new data, and the score may be used to prioritize which data to label. For example, a pre-trained DNN may be used to compute a score for each new frame selected where the score may represent a confidence in the prediction of the DNN, paragraph 0044);
based on the applied new data to identify a subset of the applied new data in response to determining whether new features are generated (the process 118 may include a training loop, whereby new data is generated by the vehicle(s) 102, used to train, test, verify, and/or validate one or more perception DNNs, and the trained or deployed DNNs are then used by the vehicle(s) 102 to navigate real-world environments, paragraph 0034); and
identifying the subset of the applied new data (the pre-trained models may be used to score new data, and the score may be used to prioritize which data to label. For example, a pre-trained DNN may be used to compute a score for each new frame selected where the score may represent a confidence in the prediction of the DNN. When the confidence score is high (e.g., meaning the model is able to accurately handle the frame), the frame may be deprioritized for labeling. When the score is low, the frame may be prioritized. As such, when a frame is confusing for the DNN (i.e., when the confidence score is low), then the frame may be labeled so that the DNN can learn from the frame, thereby further refining the pre-trained DNN, paragraph 0044).
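The confidence-score-based labeling prioritization quoted above (Farabet, paragraph 0044) can be sketched as follows. The function names and the 0.9 confidence threshold are illustrative assumptions rather than details disclosed in the reference.

```python
# Hypothetical sketch of confidence-based labeling prioritization
# (cf. Farabet, paragraph 0044). Names and the 0.9 threshold are
# illustrative assumptions, not taken from the reference.

def prioritize_for_labeling(frames, model_confidence, threshold=0.9):
    """Return frames ordered for labeling: low-confidence (confusing)
    frames come first so the DNN can learn from them, and frames the
    model already handles confidently are deprioritized."""
    scored = [(model_confidence(f), f) for f in frames]
    # Low score -> model is confused -> label first.
    scored.sort(key=lambda pair: pair[0])
    low = [f for score, f in scored if score < threshold]
    high = [f for score, f in scored if score >= threshold]
    return low + high

# Toy stand-in for a pre-trained DNN's confidence on each frame.
confidences = {"frame_a": 0.97, "frame_b": 0.35, "frame_c": 0.62}
order = prioritize_for_labeling(list(confidences), confidences.get)
```

In this toy run, the two low-confidence frames are queued for labeling ahead of the frame the model already handles well.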
Farabet does not disclose applying a novelty metric to an output of the trained DNN.
However, Panjwani discloses wherein, as the novelty-metric-based data filtering is repeated, the neural network model should be re-trained by combining the training data with the new set of data samples with the highest novelty metric found in the last iteration. In addition, only the retrained model is used to find the additional batch of new samples with the highest novelty metric. Further, the iterative process ensures that the next batch of the extracted data samples only contains the incremental novelty that is still missing in the training data (see paragraph 0065).
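The iterative novelty-metric filtering attributed to Panjwani (paragraph 0065) can be sketched as follows. The specific novelty measure (distance to the nearest training sample), the batch size, and the function names are illustrative assumptions; the reference does not prescribe them.

```python
# Hypothetical sketch of iterative novelty-metric filtering
# (cf. Panjwani, paragraph 0065). Using distance to the nearest
# training sample as the novelty metric is an illustrative assumption.

def novelty(sample, training_data):
    """Novelty of a 1-D sample: distance to its nearest training sample."""
    return min(abs(sample - t) for t in training_data)

def iterative_novelty_filter(training_data, pool, batch_size, rounds):
    training = list(training_data)
    pool = list(pool)
    for _ in range(rounds):
        # Pick the batch with the highest novelty relative to the
        # *current* training set, then fold it back in, so the next
        # batch only captures novelty still missing from the data.
        pool.sort(key=lambda s: novelty(s, training), reverse=True)
        batch, pool = pool[:batch_size], pool[batch_size:]
        training.extend(batch)  # stands in for re-training on combined data
    return training

train = [0.0, 1.0]
pool = [0.1, 5.0, 5.1, 10.0]
result = iterative_novelty_filter(train, pool, batch_size=1, rounds=2)
```

Note how the second round selects 5.1 rather than a second far-out sample: once 10.0 has been folded into the training set, the remaining novelty lies near 5, matching the teaching that each batch contains only the incremental novelty still missing.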
The combination of Farabet and Panjwani would have resulted in the DNN teachings of Farabet further including the novelty filter techniques described in Panjwani. It would have been obvious to combine the teachings because a user in Farabet is already involved in utilizing and iterating DNN models, and using a novelty filter would have allowed for any given DNN to be further isolated. As such, the references would have been obvious to combine, as the combination would have yielded a predictable result.
Regarding claims 8 and 14, the subject matter of the claims is substantially similar to claim 1 and as such the same rationale of rejection applies.
Regarding claim 2, Farabet discloses wherein the identified subset of the applied new data is removed from a training set to be applied to train the DNN in response to the identified subset not generating a predetermined amount of the new features (FIG. 3A includes a workflow 300A. The workflow 300A may include data ingestion 122, passing of the data to dataset store(s) 302 (e.g., a service that handles immutable datasets for further processing), labeling the data using data labeling services 126, and training DNNs using model training 128. The frames selected for labelling may be randomly selected in some examples. The workflow 300A may include labeling of, for example, 300,000 to 600,000 frames (e.g., frames represented by the data). Once the DNNs are trained, the DNNs may be used in simulation and/or re-simulation applications 304. The models may then be pruned, optimized, refined, and then deployed as deployed DNNs in the vehicle(s) 102 (e.g., in the software stack(s) 116). Once the DNNs have been trained to an acceptable level of accuracy (e.g., 90%, 95%, 97%, etc.), the training and refinement process may move to a second workflow, such as workflow 300B, see paragraph 0043).
Regarding claim 3, Farabet discloses wherein the identified subset of the applied new data is retained in a training set to be applied to train the DNN in response to the identified subset generating a predetermined amount of the new features (FIG. 3A includes a workflow 300A. The workflow 300A may include data ingestion 122, passing of the data to dataset store(s) 302 (e.g., a service that handles immutable datasets for further processing), labeling the data using data labeling services 126, and training DNNs using model training 128. The frames selected for labelling may be randomly selected in some examples. The workflow 300A may include labeling of, for example, 300,000 to 600,000 frames (e.g., frames represented by the data). Once the DNNs are trained, the DNNs may be used in simulation and/or re-simulation applications 304. The models may then be pruned, optimized, refined, and then deployed as deployed DNNs in the vehicle(s) 102 (e.g., in the software stack(s) 116). Once the DNNs have been trained to an acceptable level of accuracy (e.g., 90%, 95%, 97%, etc.), the training and refinement process may move to a second workflow, such as workflow 300B, see paragraph 0043).
Regarding claim 9, the subject matter of the claim is substantially similar to that of claims 2 and 3 and as such the same rationale of rejection applies.
Regarding claim 15, the subject matter of the claims is substantially similar to claim 2 and as such the same rationale of rejection applies.
Regarding claim 16, the subject matter of the claims is substantially similar to claim 3 and as such the same rationale of rejection applies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI whose telephone number is (571)270-3780. The examiner can normally be reached on M-F: 7-2, 7-10 (PST). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bechtold, Michelle T. can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID E CHOI/Primary Examiner, Art Unit 2148