DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-19 of U.S. Application No. 18/190,157 filed on 03/27/2023 were examined. The Examiner issued a non-final rejection on 03/25/2025.
Applicant filed remarks and amendments on 06/20/2025. Claims 1, 3-5, 7, 11-14 and 17 have been amended. Claims 2, 6, 8, 15-16 and 18-19 have been cancelled. Claims 1, 3-5, 7, 9-14 and 17 are presently pending and presented for examination.
Response to Arguments
Regarding the claim rejections under 35 USC 101: Applicant's arguments filed 06/20/2025 have been fully considered and they are persuasive. Accordingly, the previously given claim rejections are withdrawn.
Regarding the claim rejections under 35 USC 103: Applicant's arguments filed 06/20/2025 with respect to Farabet et al. (US 11436484 B2) in view of Dev et al. (US 11919530 B2) have been fully considered but they are not persuasive.
Regarding claim 1, the Applicant argues that Farabet, Dev, and Tkachenko, alone or in combination, do not disclose or suggest “wherein determining whether to add the further test scenario data set or discard the further test scenario data set is based on a first degree of correspondence of at least one cluster of the further test scenario data set to the at least one scenario element of the requirements profile and a second degree of correspondence of the at least one cluster of the further test scenario data set to the at least one cluster of the test scenario data sets comprised in the scenarios library,” and “wherein the one or more computing devices are configured to add the further test scenario data set to the scenarios library based on the first degree of correspondence being greater than or equal to a first threshold value and further based on the second degree of correspondence being less than a second threshold value,” as recited in amended claim 1 (Page 9).
However, the Examiner respectfully disagrees; this argument is not persuasive. Farabet discloses a system for testing automated driving functions, including a testing system comprising a scenarios library with test scenario data sets, and Dev teaches the use of threshold values in evaluating scenario data (Dev, Col. 4:6-24, Col. 6:50-67). Specifically, Dev describes a QoRE-aware cognitive engine that generates outputs based on input data, including performance metrics, and uses threshold comparisons to validate features (Dev, Col. 4:6-24). Additionally, the combination of Farabet and Dev suggests determining the addition of scenario data based on correspondence to requirements, as Farabet’s system includes a requirements profile (Farabet, Col. 11), and Dev’s threshold-based evaluation can be applied to assess correspondence degrees (Dev, Col. 6:50-67). Tkachenko further supports this by disclosing clustering of scenario data (Tkachenko, [0022]), which, when combined, teaches the claimed first and second degrees of correspondence.
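For clarity of the record, the Examiner's reading of the disputed limitation can be restated as the following sketch. It is illustrative only and is not asserted to be the implementation of the claims or of the cited references; all names are hypothetical, and both degrees of correspondence are assumed normalized to the range [0, 1].

    # Minimal sketch (Python) of the claimed add/discard decision.
    # Hypothetical names throughout; not drawn from the claims or references.
    def decide_add_or_discard(first_degree: float, second_degree: float,
                              first_threshold: float, second_threshold: float) -> bool:
        """Return True to add the further test scenario data set, False to discard it."""
        meets_requirements = first_degree >= first_threshold  # correspondence to the requirements profile
        is_redundant = second_degree >= second_threshold      # correspondence to clusters already in the library
        return meets_requirements and not is_redundant

Under this reading, a further data set is retained only when it both corresponds to the requirements profile and is non-redundant over the existing library, which is the combination the Examiner maps to Dev's threshold comparisons.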
Regarding claim 4, the Applicant argues that Farabet, Dev, and Tkachenko do not disclose or suggest “wherein the one or more computing devices are further configured to: based on the number of test scenario data sets comprised in the scenarios library being greater than or equal to a third threshold value, determine a number of scenario elements present in respective test scenario data sets comprised in the scenarios library; and based on the number of scenario elements present in the respective test scenario data sets comprised in the scenarios library being less than a number of scenario elements present in the further test scenario data set, add the further test scenario data set to the scenarios library,” as recited in amended claim 4 (Page 14).
However, the Examiner respectfully disagrees; this argument is not persuasive. Dev discloses a system where the complexity of a driving scenario is increased based on performance thresholds and a number of iterations (Dev, Col. 6:50-57), which suggests evaluating the number of scenario elements. Furthermore, Dev’s QoRE-aware cognitive engine is configured to adjust scenario data based on performance metrics (Dev, Col. 6:50-67), implying a comparison of scenario elements present in the library versus new data sets. When combined with Farabet’s scenarios library (Farabet, Col. 11), this teaches determining the number of scenario elements and adding data sets based on a threshold comparison. Tkachenko’s clustering techniques (Tkachenko, [0022]) further support this functionality.
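The size-gated comparison of claim 4, as the Examiner understands it, may be sketched as follows (illustrative only; hypothetical names, and the claim term "respective" is read here as requiring the element-count comparison against each stored data set):

    # Minimal sketch of the size-gated rule of claim 4. Hypothetical names.
    def add_when_library_full(library: list[set], further_elements: set,
                              third_threshold: int) -> bool:
        """Return True when the further data set should be added under the claim 4 rule."""
        if len(library) < third_threshold:
            return False  # size-based rule not triggered; the correspondence rules govern instead
        # add only if the further data set is richer in scenario elements
        return all(len(stored) < len(further_elements) for stored in library)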
Regarding claim 7, the Applicant argues that Farabet and Dev do not disclose or suggest “wherein the testing system is further configured to: cluster the test scenario data sets comprised in the scenarios library; select a test scenario data set having the scenario element based on the clustering; and carry out the automated driving functions of the motor vehicle based on the selected test scenario data set,” and “based on the number of test scenario data sets comprised in the scenarios library being less than a number of scenario elements present in the further test scenario data set, add the further test scenario data set to the scenarios library,” as recited in amended claim 7 (Page 16).
However, the Examiner respectfully disagrees; this argument is not persuasive. Farabet teaches a testing system with a scenarios library and the selection of test scenario data sets for automated driving functions (Farabet, Col. 11). Dev provides a QoRE-aware cognitive engine that adjusts scenarios based on performance metrics and threshold values (Dev, Col. 6:50-67), suggesting clustering and selection based on scenario elements. The combination implies clustering test scenario data sets and selecting based on requirements, as supported by Tkachenko’s clustering methods (Tkachenko, [0022]). Additionally, Dev’s threshold-based approach to scenario complexity (Dev, Col. 6:50-57) supports adding data sets based on the number of elements, maintaining the rejection of claim 7.
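The clustering, selection, and execution workflow of claim 7 may likewise be sketched as follows (illustrative only; the grouping key, data layout, and names are hypothetical, and the claim does not prescribe a particular clustering algorithm, so a simple key-based grouping stands in):

    # Minimal sketch of the claim 7 workflow. Hypothetical names and layout.
    from collections import defaultdict

    def cluster_library(library):
        clusters = defaultdict(list)  # stand-in clustering: group by a shared scenario type
        for data_set in library:
            clusters[data_set["scenario_type"]].append(data_set)
        return clusters

    def select_and_test(library, required_element, run_driving_functions):
        """Select a clustered data set containing the required scenario element and test against it."""
        for cluster in cluster_library(library).values():
            for data_set in cluster:
                if required_element in data_set["elements"]:
                    run_driving_functions(data_set)  # carry out the automated driving functions
                    return data_set
        return None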
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 7, 9-11, 13-14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet et al. (US 11436484 B2) in view of Dev et al. (US 11919530 B2), hereinafter referred to as Farabet and Dev respectively.
Regarding claims 1, 4 and 7, Farabet discloses A system for testing automated driving functions (“Embodiments of the present disclosure relate to training, testing, and verifying autonomous machines using simulated environments. Systems and methods are disclosed for training, testing, and/or verifying one or more features of a real-world system—such as a software stack for use in autonomous vehicles and/or robots” [Col. 1 ln 57-62]), the system comprising:
a motor vehicle having one or more sensors configured to obtain real measurement data (“One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments. The sensors of the vehicle(s) 102 may include, without limitation, global navigation satellite systems sensor(s) 1158 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1160, ultrasonic sensor(s) 1162, LIDAR sensor(s) 1164, inertial measurement unit (IMU) sensor(s) 1166 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1196, stereo camera(s) 1168, wide-view camera(s) 1170 (e.g., fisheye cameras), infrared camera(s) 1172, surround camera(s) 1174 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1198, speed sensor(s) 1144 (e.g., for measuring the speed of the vehicle 102), vibration sensor(s) 1142, steering sensor(s) 1140, brake sensor(s) (e.g., as part of the brake sensor system 1146), and/or other sensor types.” [Col.4 ln 5-25]);
and a testing system comprising: a scenarios library comprising a plurality of test scenario data sets and a requirements profile having at least one scenario element for creating or updating the scenarios library, wherein the plurality of test scenario data sets include at least one test scenario data set based on the real measurement data obtained by the one or more sensors of the motor vehicle (“As such, the system 100 may include a re-simulation system that uses physical sensor data generated by vehicle(s) 102 in real-world environments to train, test, verify, and/or validate one or more DNNs for use in the software stack(s) 116. In some examples, as described herein, the re-simulation system 100 may overlap with simulation system(s) 400A, 400B, 400C, and/or 400D in that at least some of the testing, training, verification, and/or validation may be performed within a simulated environment.” [Col.5 ln 52-62]);
and one or more computing devices configured to: compare a further test scenario data set having a plurality of scenario elements to at least one cluster of the test scenario data sets comprised in the scenarios library and to the requirements profile; cluster scenario elements of the test scenario data sets comprised in the scenarios library and the further test scenario data set (“The simulated environment 410 may be generated using virtual data, real-world data, or a combination thereof. For example, the simulated environment may include real-world data augmented or changed using virtual data to generate combined data that may be used to simulate certain scenarios or situations with different and/or added elements (e.g., additional AI objects, environmental features, weather conditions, etc.). For example, pre-recorded video may be augmented or changed to include additional pedestrians, obstacles, and/or the like, such that the virtual objects (e.g., executing the software stack(s) 116 as HIL objects and/or SIL objects) may be tested against variations in the real-world data.” [Col.11 ln 23-35]);
and determine whether to add the further test scenario data set to the scenarios library or discard the further test scenario data set (“[F]or example, the system 100 may be used for training, testing, verifying, deploying, updating, re-verifying, and/or deploying one or more neural networks for use in an autonomous vehicle, a semi-autonomous vehicle, a robot, and/or another object. In some examples, the system 100 may include some or all of the component, features, and/or functionality of system 1176 of FIG. 11D, and/or may include additional and/or alternative components, features, and functionality of the system 1176. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted altogether.” [Col.3 ln 47-67]);
Farabet does not explicitly teach wherein determining whether to add the further test scenario data set or discard the further test scenario data set is based on a first degree of correspondence of at least one cluster of the further test scenario data set to the at least one scenario element of the requirements profile and a second degree of correspondence of the at least one cluster of the further test scenario data set to the at least one cluster of the test scenario data sets comprised in the scenarios library;
wherein the one or more computing devices are configured to add the further test scenario data set to the scenarios library based on the first degree of correspondence being greater than or equal to a first threshold value and further based on the second degree of correspondence being less than a second threshold value;
and wherein the testing system is further configured to: cluster test scenario data sets comprised in the scenarios library;
select a test scenario data set having the scenario element based on the clustering;
and carry out the automated driving functions of the motor vehicle based on the selected test scenario data set.
However, Dev does teach wherein determining whether to add the further test scenario data set or discard the further test scenario data set is based on a first degree of correspondence of at least one cluster of the further test scenario data set to the at least one scenario element of the requirements profile and a second degree of correspondence of the at least one cluster of the further test scenario data set to the at least one cluster of the test scenario data sets comprised in the scenarios library (“The computer-readable medium 106 may include QoRE-aware cognitive engine which may generate an output (e.g. a driving scenario or virtual environment data) for an input data (e.g. Operational Design Domain (ODD), real-world data, etc.). Further, the computer-readable storage medium 106 may store instructions that, when executed by the one or more processors 104, cause the one or more processors 104 to validate at least one feature of at least one of the ADAS and the AV based on a set of performance metrics corresponding to the at least one feature, in accordance with aspects of the present disclosure. The computer-readable storage medium 106 may also store various data (for example, real-world data, ODD, driving scenarios, virtual environment data, set of performance metrics, evaluation reports, and the like) that may be captured, processed, and/or required by the system 100.” [Col.4 ln 6-24] and “The QoRE-aware cognitive engine 202 may increase the level of complexity of the driving scenario when the set of performance metrics corresponding to the at least one feature of at least one of the ADAS and the AV is above a predefined threshold within a predefined number of iterations. The increasing is based on one or more of the at least one feature of at least one of the ADAS and the AV, the map location selected for the AV, the set of traffic rules corresponding to the map location, the AV, a configuration of a plurality of sensors coupled to the AV, previous driving scenarios, and the set of performance metrics for the previous driving scenarios.” [Col.6 ln 50-67]);
wherein the one or more computing devices are configured to add the further test scenario data set to the scenarios library based on the first degree of correspondence being greater than or equal to a first threshold value and further based on the second degree of correspondence being less than a second threshold value (“For each of a plurality of iterations, the method may further include determining a set of performance metrics corresponding to the at least one feature of the at least one of the ADAS and the AV in the driving scenario based on the simulating.” [Col.1 ln 40-60]);
and wherein the testing system is further configured to: cluster test scenario data sets comprised in the scenarios library (“The driving scenario includes a level of complexity. For each of a plurality of iterations, the processor-executable instructions, on execution, may further cause the processor to simulate at least one of the ADAS and the AV based on the driving scenario. For each of a plurality of iterations, the processor-executable instructions, on execution, may further cause the processor to determine a set of performance metrics corresponding to the at least one feature of at least one of the ADAS and the AV in the driving scenario based on the simulating.” [Col.1 ln 50-67]);
select a test scenario data set having the scenario element based on the clustering (“In an embodiment, the increasing is based on one or more of the at least one feature of at least one of the ADAS and the AV, the map location selected for the AV, the set of traffic rules corresponding to the map location, the AV, a configuration of a plurality of sensors coupled to the AV, previous driving scenarios, and the set of performance metrics for the previous driving scenarios.” [Col.6 ln 54-67]);
and carry out the automated driving functions of the motor vehicle based on the selected test scenario data set (“a distributed dynamic action system 212, a reverse actuation system 214, a local and global knowledge base 218, a perception system 222, and reporting and visualization 226. It may be noted that the ADAS and AV testing device 200 validates an ADAS/AV stack 208. The ADAS/AV stack 208 may be implemented within at least one of the ADAS and the AV.” [Col.4 ln 35-50]). Both Farabet and Dev teach methods for testing automated driving functions of a motor vehicle in virtual vehicle environments. However, only Dev explicitly teaches determining whether to add or discard further test scenario data based on a first degree of correspondence of at least one cluster of the further test scenario data set to at least one scenario element of the requirements profile and a second degree of correspondence of the at least one cluster of the further test scenario data set to the at least one cluster of the test scenario data sets comprised in the scenarios library.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the testing system of Farabet to also include determining whether to add or discard further test scenario data based on a first degree of correspondence of at least one cluster of the further test scenario data set to at least one scenario element of the requirements profile and a second degree of correspondence of the at least one cluster of the further test scenario data set to the at least one cluster of the test scenario data sets comprised in the scenarios library, as taught by Dev, with a reasonable expectation of success. Doing so improves the methods for testing automated driving functions in virtual vehicle environments (With regard to this reasoning, see at least [Dev, Col.1]).
Regarding claim 11, Farabet discloses The system according to claim 1, wherein the testing system is further configured to: based on the first degree of correspondence being below a threshold value, expand the requirements profile with scenario elements not previously included in the requirements profile (“The simulated environment 410 may be generated using virtual data, real-world data, or a combination thereof. For example, the simulated environment may include real-world data augmented or changed using virtual data to generate combined data that may be used to simulate certain scenarios or situations with different and/or added elements (e.g., additional AI objects, environmental features, weather conditions, etc.). For example, pre-recorded video may be augmented or changed to include additional pedestrians, obstacles, and/or the like, such that the virtual objects (e.g., executing the software stack(s) 116 as HIL objects and/or SIL objects) may be tested against variations in the real-world data.” [Col.11 ln 22-35]).
Regarding claim 13, Farabet discloses The system according to claim 1, wherein the first degree of correspondence is determined by determining an overlap between the at least one cluster of the further test scenario data set, the at least one scenario element of the requirements profile, and/or the at least one cluster of the number of test scenario data sets comprised in the scenarios library (“In some examples, as described herein, the re-simulation system 100 may overlap with simulation system(s) 400A, 400B, 400C, and/or 400D in that at least some of the testing, training, verification, and/or validation may be performed within a simulated environment.” [Col.5 ln 55-60]).
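One plausible reading of this limitation treats the degree of correspondence as a set overlap. The following sketch uses a Jaccard measure purely for concreteness (illustrative only; hypothetical names, and the claim does not prescribe a particular overlap metric):

    # Minimal sketch: degree of correspondence as the Jaccard overlap between
    # the scenario elements of a cluster and a reference set (the requirements
    # profile or a library cluster). Metric choice is illustrative only.
    def degree_of_correspondence(cluster_elements: set, reference_elements: set) -> float:
        union = cluster_elements | reference_elements
        if not union:
            return 0.0
        return len(cluster_elements & reference_elements) / len(union)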
Regarding claim 14, Farabet discloses The system according to claim 1, wherein, based on the further test scenario data set fully matching the at least one scenario element of the requirements profile, a number of matching clusters of the further test scenario data set are capable of being modified with the at least one cluster of the number of test scenario data sets comprised in the scenarios library by adjusting at least one filter criterion which specifies a maximum number of matching clusters to be determined (“The deep-learning infrastructure may run its own neural network to identify the objects and compare them with the objects identified by the vehicle 102 and, if the results do not match and the infrastructure concludes that the AI in the vehicle 102 is malfunctioning, the server(s) 1178 may transmit a signal to the vehicle 102 instructing a fail-safe computer of the vehicle 102 to assume control, notify the passengers, and complete a safe parking maneuver.” [Col.48 ln 35-45] and “The accelerator(s) 1114 (e.g., the hardware accelerator cluster) have a wide array of uses for autonomous driving. The PVA may be a programmable vision accelerator that may be used for key processing stages in ADAS and autonomous vehicles. The PVA's capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, the PVA performs well on semi-dense or dense regular computation, even on small data sets, which need predictable run-times with low latency and low power. Thus, in the context of platforms for autonomous vehicles, the PVAs are designed to run classic computer vision algorithms, as they are efficient at object detection and operating on integer math.” See at least [Col.36-37 ln 61-67, ln 1-15]).
Regarding claim 17, Farabet discloses The system according to claim 1, wherein the test scenario data sets and/or the further test scenario data set have annotated scenario elements and are created on the basis of real or virtually-generated measurement data (“The data store(s) 120 may store sensor data and/or virtual sensor data generated by one or more real-world sensors of one or more vehicle(s) 102 and/or virtual sensors of one or more virtual vehicles, respectively.” [Col. ln 10-15]).
Claims 3, 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet in view of Dev and in further view of TKACHENKO et al. (US 20220230536 A1), hereinafter referred to as Farabet, Dev and TKACHENKO respectively.
Regarding claims 3, 5 and 12, Farabet in view of Dev discloses The system according to claim 1.
Farabet in view of Dev does not explicitly teach wherein the first threshold value is 50%, and wherein the second threshold value is 50%.
However, TKACHENKO does teach wherein the first threshold value is 50%, and wherein the second threshold value is 50% (“In a further preferential embodiment, at least the selecting of the sensor data stream section, the ascertaining of the measure of similarity and the assigning of the sensor data stream section are performed repeatedly. Preferably, when repeatedly selecting the section of the sensor data stream, the start and/or end of the sensor data stream section is/are thereby selected such that the sensor data stream section overlaps a previously selected sensor data stream section by no more than halfway. In other words, a further section of the sensor data stream is not mapped onto at least one template until half of the predefined period of time has elapsed. Thereby able to be ensured is that two sections of the sensor data stream mapped successively onto a plurality of templates will sufficiently differ from one another so as to be able to be assigned to different known traffic scenarios.” [0047]). Both Farabet in view of Dev and TKACHENKO teach methods for testing automated driving functions of a motor vehicle in virtual vehicle environments. However, only TKACHENKO explicitly teaches wherein the first threshold value is 50%, and wherein the second threshold value is 50%.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the testing system of Farabet in view of Dev to also include wherein the first threshold value is 50%, and wherein the second threshold value is 50%, as taught by TKACHENKO, with a reasonable expectation of success. Doing so improves the methods for testing automated driving functions in virtual vehicle environments (With regard to this reasoning, see at least [TKACHENKO, 0003-0009]).
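For concreteness, the 50% threshold values of claims 3, 5 and 12 can be instantiated in the earlier sketches as follows (illustrative only; this snippet reuses the hypothetical helper functions sketched above, and the element names are invented for the example):

    # Both threshold values set to 0.5, per claims 3, 5 and 12. Hypothetical data;
    # degree_of_correspondence and decide_add_or_discard are the sketches above.
    further_cluster = {"rain", "night", "pedestrian"}
    first_degree = degree_of_correspondence(further_cluster, {"rain", "night"})         # 2/3 vs. requirements profile
    second_degree = degree_of_correspondence(further_cluster, {"highway", "daylight"})  # 0/5 vs. library clusters
    add = decide_add_or_discard(first_degree, second_degree,
                                first_threshold=0.5, second_threshold=0.5)  # True: relevant and non-redundant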
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED ALKIRSH whose telephone number is (703) 756-4503. The examiner can normally be reached M-F 9:00 am-5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FADEY JABR can be reached on (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
AHMED ALKIRSH
Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668