DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication dated 12/24/2025.
Claims 1-7 and 9-19 are presented for examination.
Information Disclosure Statement
The information disclosure statements (IDS) dated 08/19/2022, 06/10/2025, and 09/17/2025 have been reviewed. See attached.
Drawings
The drawings dated 08/19/2022 have been reviewed and are accepted.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Finality
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Response to Arguments - 35 USC § 112
Applicant’s arguments, see page 6, filed 12/24/2025, with respect to the rejection of claims 4, 10, and 17 under 35 USC § 112 have been fully considered and are persuasive. The rejection of claims 4, 10, and 17 under 35 USC § 112 has been withdrawn.
Response to Arguments - 35 USC § 103
Applicant's arguments filed 12/24/2025 have been fully considered but they are not persuasive.
Applicant argues that no prior art teaches “wherein the at least one simulation module is connected to the control unit of the driver assistance system for signal transmission in order to transmit a sensor signal from the at least one simulation module to the control unit of the driver assistance system.”
Examiner responds by explaining that this feature is taught by Solmaz and Farabet. Particularly, Solmaz makes obvious wherein the at least one simulation module is connected to the control unit of the driver assistance system for signal transmission in order to transmit to the control unit of the driver assistance system ([Page 5 Col 1 Par 1] “The vehicle for instance is integrated as follows: • mechanical integration: There is a mechanical coupling between the test bench and the vehicle from the wheels to the rollers. The steering capability of the xRoad Curve also allows the specification of steering manoeuvres. • visual integration: The test scenarios and road traffic is simulated and integrated as rendered vehicle environment on a screen in front of the camera. • electric bus integration: Camera integration as well as command of throttle position, brake pedal and steering angle is communicated via CAN. The data exchange between test bench and simulation is via UDP” [Page 4 Col 1 Par 4 – Col 2 Par 1] “Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Page 3 Col 1 Par 1] “… the vehicle has an NVidia Drive PX/2 and D-Space MicroAutobox-II [20] control units that are used as the on-board processing (i.e., the development ECU) hardware.” [Figs. 6 and 8] Show configuration of the system. Note the connection between the camera, the ADAS-KIT (control unit), and the Model CONNECT unit (simulation module).)
Farabet teaches transmitting a sensor signal to a control unit ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Par 80] “The vehicle simulator component(s) 406 may include one or more GPUs 452 (e.g., NVIDIA QUADRO GPU(s)) that may provide, in an example, non-limiting embodiment, 8 DP/HDMI video streams that may be synchronized using sync component(s) 454 (e.g., through a QUADRO Sync II Card). These GPU(s) 452 (and/or other GPU types) may provide the sensor input to the SoC(s) 1104 (e.g., to the vehicle hardware 104).” [Par 29] “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data, one or more machine learning models (e.g., neural networks), and/or the like.”)
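By way of illustration only, and not as a characterization of any cited reference’s actual implementation, the following minimal Python sketch shows the kind of UDP data exchange Solmaz describes between the simulation environment and the test bench. The endpoint address and the three-field packet layout are hypothetical assumptions; Solmaz discloses that the exchange is via UDP but does not disclose a message format.
```python
# Illustrative sketch of a UDP link between a simulation module and a test
# bench, as described at a high level by Solmaz. The packet layout (three
# big-endian float64 fields) and the address below are hypothetical.
import socket
import struct

BENCH_ADDR = ("192.0.2.10", 15000)   # hypothetical test bench endpoint
PACKET_FMT = "!ddd"                  # throttle, brake, steering angle

def send_commands(sock, throttle, brake, steering):
    """Pack one actuation sample and transmit it to the test bench."""
    sock.sendto(struct.pack(PACKET_FMT, throttle, brake, steering), BENCH_ADDR)

def poll_feedback(sock):
    """Return one feedback sample (e.g., roller speeds) or None on timeout."""
    try:
        payload, _ = sock.recvfrom(struct.calcsize(PACKET_FMT))
        return struct.unpack(PACKET_FMT, payload)
    except socket.timeout:
        return None

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(0.01)                # 10 ms step, ~100 Hz loop
        send_commands(sock, 0.2, 0.0, 0.05)  # mild throttle, slight steer
        print(poll_feedback(sock))           # None unless a bench replies
```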
Applicant argues that no prior art teaches “wherein the at least one simulation module is a separate structural unit of the system and is positioned and operated independently of the motor vehicle and of the vehicle test bench.”
Examiner responds by explaining that this is taught by the new reference Influences of weather phenomena on automotive laser radar systems (hereinafter Rasshofer). In particular, Rasshofer teaches an external simulation system for use with hardware-in-the-loop automotive testing. ([Fig. 11] Shows the external testing rig with main body, sensor, and stimulation unit. As can be seen, the unit is clearly separate from the vehicle or test bench. [Page 58 Col 2 Par 3 – Page 59 Col 1 Par 1] “To prove the basic functionality of the OSS, a 16-channel automotive laser radar was stimulated with both synthetic and pre-recorded real-world sensor signals. With a reference laser radar sensor, the backscatter signal of a hard laser radar target located in fog was recorded (Fig. 12). This signal was replicated by the OSS and detected by another laser radar in the lab. As can be seen (Fig. 13), very accurate simulation of the target located in foggy environment could be reached.” [Page 57 Col 1 Par 2 – Page 58 Col 1 Par 1] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors. To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.” [Page 59 Col 2 Par 1] “The measured results showed that the OSS concept is a powerful approach for real-time HIL testing of automotive laser radar sensors. With the OSS concept, it is possible to test a laser radar’s response to either real-world data obtained from test drives, or simulated data from hard or soft target models. Furthermore, a combination of measured data with numeric target modeling might be another option in many situations. Data once recorded with a reference laser radar might be converted to signals stimulating other models of laser radar using Eq. (27)”)
Rasshofer is analogous art because it is within the field of driver assistance system testing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz and Farabet before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to better simulate certain test conditions, particularly weather conditions. As noted by Rasshofer, the physical testing of sensors under certain weather effects can be difficult to perform, as such tests would require those conditions to actually be present. The desire to test multiple sensors of different specifications and manufacturers under consistent conditions further complicates this. ([Page 57 Col 1 Par 2] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors.”) To this end, Rasshofer presents a method for the testing and physical stimulation of actual automotive laser sensors. ([Page 57 Col 2 Par 1 – Page 58 Col 1 Par 1] “To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.”) One of ordinary skill in the art would have recognized that combining Rasshofer with Solmaz and Farabet would result in a system that allows for the physical testing and simulation-based stimulation of a larger set of automotive sensors, ultimately resulting in a more accurate system overall.
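For illustration only, the sketch below captures the OSS principle Rasshofer describes, i.e., replaying a pre-recorded return pulse with a controlled delay and attenuation so that a simulated target appears at a chosen range. The sample rate, pulse samples, and target parameters are hypothetical assumptions, and the real OSS operates on optical signals rather than digital arrays.
```python
# Illustrative sketch only: delaying and attenuating a pre-recorded laser
# radar return pulse, in the spirit of Rasshofer's OSS, which replicates
# pulse shape and power levels. All numeric values here are hypothetical.
SAMPLE_RATE_HZ = 1.0e9          # 1 GS/s digitizer, hypothetical
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def replay_pulse(recorded_pulse, target_range_m, attenuation):
    """Shift a recorded pulse by the round-trip delay of a simulated target
    and scale its amplitude, returning the stimulus waveform."""
    round_trip_s = 2.0 * target_range_m / SPEED_OF_LIGHT
    delay_samples = round(round_trip_s * SAMPLE_RATE_HZ)
    return [0.0] * delay_samples + [a * attenuation for a in recorded_pulse]

# A toy recorded pulse (arbitrary units) placed at a 30 m target with strong
# attenuation, loosely analogous to the foggy hard-target replication
# experiment Rasshofer reports.
pulse = [0.0, 0.4, 1.0, 0.7, 0.2, 0.0]
stimulus = replay_pulse(pulse, target_range_m=30.0, attenuation=0.05)
print(len(stimulus), max(stimulus))   # 206 samples, peak 0.05
```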
Applicant argues that Farabet teaches away from Solmaz.
Examiner responds by explaining that, while Farabet does teach certain embodiments that do not require a dedicated test bench, it still would have been obvious to one of ordinary skill in the art to combine it with Solmaz, particularly to cover driving scenarios, for example complex crashes and rollovers, that would be impossible or impractical to test the assistance system on with a physical test bench. Further, Solmaz notes that ADAS systems frequently use multiple sensors, that the system developed therein featured only a camera, and that simulating more than just cameras would be desirable. ([Page 7 Col 1 Par 2] “Another issue is that demonstrated use case only involved camera stimulation. There are many ADAS function solutions that utilize not only cameras but also combinations of cameras, radars, lidars, and/or ultrasonic sensors. How to stimulate or simulate all these sensors in the scope of a consistent testing methodology is another open problem.”) In such a situation, it would have logically made sense to one of ordinary skill in the art to use a system such as that of Farabet to emulate the non-camera sensors, allowing the system to function with a full range of sensor inputs. All in all, while the system of Farabet may not have been developed with the sole goal of dedicated physical test benches in mind, the benefits of such a system to supply additional sensor input, unconstrained by the limitations of a purely physical test bench, would be immediately apparent to one of ordinary skill in the art. Further, Farabet does not just briefly mention a hardware vehicle, but describes hardware-in-the-loop systems with physical vehicles throughout the disclosure; see [Pars. 53, 55, 64-65, 71, 79, etc.]
Applicant argues that neither Solmaz nor Farabet teaches testing an entire driver assistance system, including the connection between an environmental sensor and the control unit of the driver assistance system.
Examiner responds by explaining that such a complete driver assistance system is clearly disclosed by Solmaz; see [Fig. 8], which shows a connection between the camera sensor and the ADAS-KIT driver assistance system.
Applicant argues that it would be illogical to “remove [the] camera, install it in a simulation module that is separate from the vehicle, and then provide a connection between the simulation module and the vehicle.”
Examiner responds by explaining that this is not a reasonable interpretation of how the references could be combined. A more logical interpretation of the combination of references would be a system in which a vehicle that has its own sensors is connected to an external simulation device that also has its own sensors, as disclosed by the combination of Solmaz, Farabet, and Rasshofer.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over A Novel Testbench for Development, Calibration and Functional Testing of ADAS/AD Functions (hereinafter Solmaz) in view of Farabet (US 20190303759 A1), and further in view of Influences of weather phenomena on automotive laser radar systems (hereinafter Rasshofer).
Claim 1. Solmaz teaches A system for testing a driver assistance system of a motor vehicle, ([Abstract] “In this paper, a novel steerable chassis dynamometer test bench with a corresponding methodology is introduced to develop and test ADAS/AD technologies. A use case example of tuning the camera-based implementations of the lane keeping assistant (LKA) and adaptive cruise control (ACC) functions on an AD demonstrator vehicle utilizing this test bench is described with preliminary performance results.”) wherein the driver assistance system comprises: a control unit configured to process sensor signals of at least one environment sensor of the motor vehicle, wherein the at least one environment sensor is configured to detect environmental information and convert the environmental information into the sensor signals, wherein the testing system comprises: ([Page 3 Col 1 Par 1] “The vehicle itself is a Ford Mondeo MY2016 platform, that is equipped with ADAS kit produced by Data Speed Inc. [19] along with a comprehensive set of add-on hardware & software interfaces that allow for the full control of the throttle, brake, steering, and shifting of the test vehicle. The picture of the vehicle along with the installed sensor hardware is shown in Figure 4. While not indicated in this picture, the vehicle has an NVidia Drive PX/2 and D-Space MicroAutobox-II [20] control units that are used as the on-board processing (i.e., the development ECU) hardware. Out of the sensor set, only the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Fig. 4] Shows sensor setup)
[media_image1.png: Solmaz Fig. 4, vehicle sensor setup (grayscale)]
a vehicle test bench configured such that a drive train of the motor vehicle can be operated; ([Page 1 Col 2 Par 3 – Page 2 Col 1 Par 1] “In this paper, a novel steerable chassis dynamometer test bench with a corresponding methodology is introduced to develop and test ADAS/AD technologies and active safety systems utilizing a special, purpose-built VEHIL system. In comparison to conventional rolling test benches (i.e., chassis dynamometers), where physical steering of the vehicle is not possible, this novel test-bench allows independent steering of the front wheels by rotating the front set of rollers around respective vertical axes. The speed and angle regulation of these rollers keep the vehicle orientation fixed on the test bench. This allows longitudinal and lateral inputs into the vehicle by the driver or automated vehicle control actuators, in terms of gas, brake and steering inputs. A use case example of tuning the camera-based implementations of the lane keeping assistant (LKA) and adaptive cruise control (ACC) functions on the VIRTUAL VEHICLE Automated Drive Demonstrator (ADD) vehicle [16] utilizing this test bench is described. In this setup, the driving scenario and the functional testing environment is dynamically and visually simulated in closed loop with the real vehicle, which is equipped with the autonomous driving function being tested with realistic rolling conditions on the test bench. Using this method, various driving conditions can be virtually simulated and easily reproduced. This enables that the corresponding ADAS function can be tuned to conform to the expected performance objectives.” [Fig. 3] Shows the vehicle driving on the test bench)
[media_image2.png: Solmaz Fig. 3, vehicle on the test bench (grayscale)]
at least one simulation module ([Page 3 Col 1 Par 1] “the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Figs. 6 and 8] Show configuration of the system, including the VTD computer and Model CONNECT, i.e., the simulation module)
[media_image3.png: Solmaz Fig. 6, system configuration (grayscale)]
[media_image4.png: Solmaz Fig. 8, system configuration (grayscale)]
wherein the at least one environment sensor of the testing system corresponds to the at least one environment sensor of the motor vehicle or is the at least one environment sensor of the motor vehicle; and ([Page 3 Col 1 Par 1] “The vehicle itself is a Ford Mondeo MY2016 platform, that is equipped with ADAS kit produced by Data Speed Inc. [19] along with a comprehensive set of add-on hardware & software interfaces that allow for the full control of the throttle, brake, steering, and shifting of the test vehicle. The picture of the vehicle along with the installed sensor hardware is shown in Figure 4. While not indicated in this picture, the vehicle has an NVidia Drive PX/2 and D-Space MicroAutobox-II [20] control units that are used as the on-board processing (i.e., the development ECU) hardware. Out of the sensor set, only the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Fig. 4] Shows sensor setup)
[media_image1.png: Solmaz Fig. 4, vehicle sensor setup (grayscale)]
wherein the at least one simulation module is connected to the control unit of the driver assistance system for signal transmission in order to transmit to the control unit of the driver assistance system; and ([Page 5 Col 1 Par 1] “The vehicle for instance is integrated as follows: • mechanical integration: There is a mechanical coupling between the test bench and the vehicle from the wheels to the rollers. The steering capability of the xRoad Curve also allows the specification of steering manoeuvres. • visual integration: The test scenarios and road traffic is simulated and integrated as rendered vehicle environment on a screen in front of the camera. • electric bus integration: Camera integration as well as command of throttle position, brake pedal and steering angle is communicated via CAN. The data exchange between test bench and simulation is via UDP” [Page 4 Col 1 Par 4 – Col 2 Par 1] “Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Page 3 Col 1 Par 1] “… the vehicle has an NVidia Drive PX/2 and D-Space MicroAutobox-II [20] control units that are used as the on-board processing (i.e., the development ECU) hardware.” [Figs. 6 and 8] Show configuration of the system. Note the connection between the camera, the ADAS-KIT (control unit), and the Model CONNECT unit (simulation module).)
Solmaz does not explicitly teach a simulation module configured to accommodate at least one environment sensor; transmitting a sensor signal from a simulation module to a control unit; or wherein the at least one simulation module is a separate structural unit of the system and is positioned and operated independently of the motor vehicle and of the vehicle test bench.
Farabet makes obvious a simulation module configured to accommodate at least one environment sensor; transmitting a sensor signal from a simulation module to a control unit ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Par 80] “The vehicle simulator component(s) 406 may include one or more GPUs 452 (e.g., NVIDIA QUADRO GPU(s)) that may provide, in an example, non-limiting embodiment, 8 DP/HDMI video streams that may be synchronized using sync component(s) 454 (e.g., through a QUADRO Sync II Card). These GPU(s) 452 (and/or other GPU types) may provide the sensor input to the SoC(s) 1104 (e.g., to the vehicle hardware 104).” [Par 29] “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data, one or more machine learning models (e.g., neural networks), and/or the like.”)
Farabet is analogous art because it is within the field of vehicle automation. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to generate additional testing scenarios using additional sensor configurations. For example, Solmaz notes that the system it discloses only tests using a camera, but full ADAS systems frequently use several sensors ([Page 7 Col 1 Par 2] “Another issue is that demonstrated use case only involved camera stimulation. There are many ADAS function solutions that utilize not only cameras but also combinations of cameras, radars, lidars, and/or ultrasonic sensors. How to stimulate or simulate all these sensors in the scope of a consistent testing methodology is another open problem.”) To this end, Farabet presents a system involving the simulation and testing of a vehicle using a wide array of sensor types ([Par 27-28] “For example, the system 100 may be used for training, testing, verifying, deploying, updating, re-verifying, and/or deploying one or more neural networks for use in an autonomous vehicle, a semi-autonomous vehicle, a robot, and/or another object. … One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments. The sensors of the vehicle(s) 102 may include, without limitation, global navigation satellite systems sensor(s) 1158 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1160, ultrasonic sensor(s) 1162, LIDAR sensor(s) 1164, inertial measurement unit (IMU) sensor(s) 1166 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1196, stereo camera(s) 1168, wide-view camera(s) 1170 (e.g., fisheye cameras), infrared camera(s) 1172, surround camera(s) 1174 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1198, speed sensor(s) 1144 (e.g., for measuring the speed of the vehicle 102), vibration sensor(s) 1142, steering sensor(s) 1140, brake sensor(s) (e.g., as part of the brake sensor system 1146), and/or other sensor types.” [Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. 
In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.).”) Overall, one of ordinary skill in the art would have recognized that combining Farabet with Solmaz would result in a system that allows testing the capabilities of vehicles with a much wider array of sensor configurations.
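As a purely hypothetical sketch of the arrangement quoted above, in which each virtual sensor runs in its own engine instance and supplies simulated sensor data to the vehicle hardware under test, the following uses illustrative class and method names that do not appear in Farabet; Farabet’s system delivers the sensor input as GPU video streams rather than in-process callbacks.
```python
# Hypothetical sketch of the cited arrangement: one rendering-engine
# instance per virtual sensor, each streaming frames to a stub standing in
# for the control unit under test. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class EngineInstance:
    """Stands in for one per-sensor instance of a rendering engine."""
    sensor_name: str
    frame_count: int = 0

    def render_frame(self):
        # A real engine would rasterize the simulated scene; here we just
        # return a tiny placeholder "image" tagged with the frame index.
        self.frame_count += 1
        return {"sensor": self.sensor_name, "frame": self.frame_count,
                "pixels": [[0] * 4] * 4}

@dataclass
class VehicleHardwareStub:
    """Stands in for the control unit receiving simulated sensor input."""
    received: list = field(default_factory=list)

    def ingest(self, frame):
        self.received.append(frame)

# One engine instance per virtual sensor, as in the quoted passage.
sensors = [EngineInstance("virtual_camera"), EngineInstance("virtual_lidar_1")]
ecu = VehicleHardwareStub()
for step in range(3):                 # three simulation ticks
    for engine in sensors:
        ecu.ingest(engine.render_frame())
print(len(ecu.received))              # 6 frames delivered to the stub ECU
```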
The combination of Solmaz and Farabet does not explicitly teach wherein the at least one simulation module is a separate structural unit of the system and is positioned and operated independently of the motor vehicle and of the vehicle test bench.
Rasshofer makes obvious wherein the at least one simulation module is a separate structural unit of the system and is positioned and operated independently of the motor vehicle and of the vehicle test bench. ([Fig. 11] Shows the external testing rig with main body, sensor, and stimulation unit. As can be seen, the unit is clearly separate from the vehicle or test bench. Also see [Fig. 10]. [Page 58 Col 2 Par 3 – Page 59 Col 1 Par 1] “To prove the basic functionality of the OSS, a 16-channel automotive laser radar was stimulated with both synthetic and pre-recorded real-world sensor signals. With a reference laser radar sensor, the backscatter signal of a hard laser radar target located in fog was recorded (Fig. 12). This signal was replicated by the OSS and detected by another laser radar in the lab. As can be seen (Fig. 13), very accurate simulation of the target located in foggy environment could be reached.” [Page 57 Col 1 Par 2 – Page 58 Col 1 Par 1] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors. To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.” [Page 59 Col 2 Par 1] “The measured results showed that the OSS concept is a powerful approach for real-time HIL testing of automotive laser radar sensors. With the OSS concept, it is possible to test a laser radar’s response to either real-world data obtained from test drives, or simulated data from hard or soft target models. Furthermore, a combination of measured data with numeric target modeling might be another option in many situations. Data once recorded with a reference laser radar might be converted to signals stimulating other models of laser radar using Eq. (27)”)
Rasshofer is analogous art because it is within the field of driver assistance system testing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz and Farabet before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to better simulate certain test conditions, particularly weather conditions. As noted by Rasshofer, the physical testing of sensors under certain weather effects can be difficult to perform, as such tests would require those conditions to actually be present. The desire to test multiple sensors of different specifications and manufacturers under consistent conditions further complicates this. ([Page 57 Col 1 Par 2] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors.”) To this end, Rasshofer presents a method for the testing and physical stimulation of actual automotive laser sensors. ([Page 57 Col 2 Par 1 – Page 58 Col 1 Par 1] “To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.”) One of ordinary skill in the art would have recognized that combining Rasshofer with Solmaz and Farabet would result in a system that allows for the physical testing and simulation-based stimulation of a larger set of automotive sensors, ultimately resulting in a more accurate system overall.
Claim 2. Farabet teaches wherein the at least one simulation module is configured to generate sensor signals which depict the environmental information from the perspective of the at least one environment sensor of the motor vehicle. ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)”)
Claim 3. Solmaz teaches wherein the simulation module is connected to the vehicle test bench by a connection for signal transmission ([Page 5 Col 1 Par 1] “The vehicle for instance is integrated as follows: • mechanical integration: There is a mechanical coupling between the test bench and the vehicle from the wheels to the rollers. The steering capability of the xRoad Curve also allows the specification of steering manoeuvres. • visual integration: The test scenarios and road traffic is simulated and integrated as rendered vehicle environment on a screen in front of the camera. • electric bus integration: Camera integration as well as command of throttle position, brake pedal and steering angle is communicated via CAN. The data exchange between test bench and simulation is via UDP” [Fig. 8] Further clarifies the physical connections between components)
Claim 4. Farabet teaches wherein the at least one environment sensor of the testing system is identical to the at least one environment sensor of the motor vehicle. ([Par 71] “In such an example, data (e.g., virtual sensor data corresponding to a field(s) of view of virtual camera(s) of the virtual vehicle, virtual LIDAR data, virtual RADAR data, virtual location data, virtual IMU data, etc.) corresponding to each sensor of the HIL object may be received from the simulator component(s) 402”)
Claim 5. Solmaz teaches a test vehicle equipped with multiple environment sensors that apply different measuring principles ([Fig. 4] Shows a list of sensors on the testing vehicle)
Farabet makes obvious wherein the system is configured to accommodate at least two simulation modules, wherein two of the at least two simulation modules differ in a measuring principle applied by their respective sensor ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Examiner’s note: each instance of the engine is interpreted as a different simulation module])
Claim 6. Solmaz teaches a test vehicle equipped with multiple environment sensors that apply different measuring principles ([Fig. 4] Shows a list of sensors on the testing vehicle)
Farabet teaches wherein the system is configured to accommodate at least two environment sensors, wherein two of the at least two environment sensors of the system differ in a measuring principle they apply. ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)”)
Claim 7. Solmaz teaches wherein the at least one environment sensor of the testing system is configured to be integrated in a component that is configured to be accommodated by the at least one simulation module. ([Page 3 Col 1 Par 1] “the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Figs. 6 and 8] Show the configuration of the system. Note that Fig. 8 shows the sensor and model connected together via CAN)
Claim 9. Solmaz teaches wherein the stimulation device is configured to generate a response signal to be received by the at least one environment sensor of the testing system. ([Page 3 Col 1 Par 1] “the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Figs. 6 and 8] Show the configuration of the system)
Claim 10. Solmaz teaches wherein the system comprises at least one signal converter which is configured to transmit a sensor signal to the control unit of the motor vehicle and to generate the sensor signal on the basis of raw sensor data, wherein the raw sensor data is fed into the signal converter, and the signal converter generates the sensor signal ([Fig. 8] Shows the configuration of the system, including the sensor being connected to the ADAS KIT and MWC (i.e., the control units) via a CAN connection. [Examiner’s note: based on the specification this “signal converter” is interpreted as any mechanism that takes the raw sensor input (i.e., an image in the case of a camera) and transforms it into an electrical signal that can be sent elsewhere ([Page 2 Line 30] “An environment sensor within the meaning of the invention is in particular an apparatus for detecting and/or measuring physical variables within its environment, particularly the surroundings of the motor vehicle. Preferably, the environment sensor is configured to survey, in particular scan, the environment or surroundings respectively of the motor vehicle. The environment sensor preferably comprises a receiver and a signal converter, wherein the receiver is preferably directly responsive to the physical or chemical measured variable, or respectively quantitatively and/or qualitatively detects its property, and the signal converter converts said detected property into a preferably electrically transmissible signal.”) With this in mind, the CAN connector of the camera that connects it to the control units reads on this limitation])
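Consistent with the interpretation set forth in the note above, the following hypothetical sketch shows a raw sensor input (a grayscale image) being transformed into an electrically transmissible, CAN-style message. The 8-byte payload layout and the arbitration ID are illustrative assumptions; neither the MobilEye protocol nor the ADAS kit’s message set is of record.
```python
# Illustrative signal-converter sketch consistent with the interpretation
# above: raw sensor data in (a grayscale image), a serialized, electrically
# transmissible message out. Frame layout and ID below are hypothetical.
import struct

LANE_MSG_ID = 0x700  # hypothetical arbitration ID

def detect_lane_offset(image):
    """Toy 'raw data' stage: brightness centroid column as a lane proxy."""
    weights = [sum(row[c] for row in image) for c in range(len(image[0]))]
    total = sum(weights) or 1
    centroid = sum(c * w for c, w in enumerate(weights)) / total
    return centroid - (len(image[0]) - 1) / 2.0   # offset from image center

def to_can_frame(offset_px):
    """Toy 'signal' stage: pack the measurement into an 8-byte payload."""
    payload = struct.pack("<fI", offset_px, 0)    # value + reserved bytes
    return (LANE_MSG_ID, payload)

image = [[0, 0, 200, 0], [0, 0, 180, 0]]          # bright "lane" in column 2
frame_id, data = to_can_frame(detect_lane_offset(image))
print(hex(frame_id), data.hex())                  # 0x700 and 8 payload bytes
```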
Claim 11. Solmaz teaches a control unit ([Page 3 Col 1 Par 1] “… the vehicle has an NVidia Drive PX/2 and D-Space MicroAutobox-II [20] control units that are used as the on-board processing (i.e., the development ECU) hardware.”)
Farabet makes obvious wherein the system comprises at least one apparatus configured to generate a simulated sensor signal and transmit the simulated sensor signal to a control unit ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Par 80] “The vehicle simulator component(s) 406 may include one or more GPUs 452 (e.g., NVIDIA QUADRO GPU(s)) that may provide, in an example, non-limiting embodiment, 8 DP/HDMI video streams that may be synchronized using sync component(s) 454 (e.g., through a QUADRO Sync II Card). These GPU(s) 452 (and/or other GPU types) may provide the sensor input to the SoC(s) 1104 (e.g., to the vehicle hardware 104).” [Par 29] “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data, one or more machine learning models (e.g., neural networks), and/or the like.”)
Claim 12. Solmaz teaches A method for testing a driver assistance system of a motor vehicle on a vehicle test bench ([Abstract] “In this paper, a novel steerable chassis dynamometer test bench with a corresponding methodology is introduced to develop and test ADAS/AD technologies. A use case example of tuning the camera-based implementations of the lane keeping assistant (LKA) and adaptive cruise control (ACC) functions on an AD demonstrator vehicle utilizing this test bench is described with preliminary performance results.”) comprising: simulating a test environment via at least one simulation module; ([Page 3 Col 1 Par 1] “the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Figs. 6 and 8] Show configuration of the system) operating a drive train of the motor vehicle on the vehicle test bench based on the sensor signal ([Page 1 Col 2 Par 3 – Page 2 Col 1 Par 1] “In this paper, a novel steerable chassis dynamometer test bench with a corresponding methodology is introduced to develop and test ADAS/AD technologies and active safety systems utilizing a special, purpose-built VEHIL system. In comparison to conventional rolling test benches (i.e., chassis dynamometers), where physical steering of the vehicle is not possible, this novel test-bench allows independent steering of the front wheels by rotating the front set of rollers around respective vertical axes. The speed and angle regulation of these rollers keep the vehicle orientation fixed on the test bench. This allows longitudinal and lateral inputs into the vehicle by the driver or automated vehicle control actuators, in terms of gas, brake and steering inputs. A use case example of tuning the camera-based implementations of the lane keeping assistant (LKA) and adaptive cruise control (ACC) functions on the VIRTUAL VEHICLE Automated Drive Demonstrator (ADD) vehicle [16] utilizing this test bench is described. In this setup, the driving scenario and the functional testing environment is dynamically and visually simulated in closed loop with the real vehicle, which is equipped with the autonomous driving function being tested with realistic rolling conditions on the test bench. Using this method, various driving conditions can be virtually simulated and easily reproduced. 
This enables that the corresponding ADAS function can be tuned to conform to the expected performance objectives.” [Fig. 3] Shows the vehicle driving on the test bench)
[media_image2.png: Solmaz Fig. 3, vehicle on the test bench (grayscale)]
Solmaz does not explicitly teach generating a sensor signal by the at least one simulation module; performing an operation based on the sensor signal generated via the at least one simulation module; or wherein the simulation module is a structural unit which is separate from the vehicle and the vehicle test bench and is positioned and operated independently of the motor vehicle and of the vehicle test bench.
Farabet makes obvious generating a sensor signal by the at least one simulation module; performing an operation based on the sensor signal generated via the at least one simulation module ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Par 80] “The vehicle simulator component(s) 406 may include one or more GPUs 452 (e.g., NVIDIA QUADRO GPU(s)) that may provide, in an example, non-limiting embodiment, 8 DP/HDMI video streams that may be synchronized using sync component(s) 454 (e.g., through a QUADRO Sync II Card). These GPU(s) 452 (and/or other GPU types) may provide the sensor input to the SoC(s) 1104 (e.g., to the vehicle hardware 104).” [Par 29] “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data, one or more machine learning models (e.g., neural networks), and/or the like.”)
Farabet is analogous art because it is within the field of vehicle automation. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to generate additional testing scenarios using additional sensor configurations. For example, Solmaz notes that the system it discloses only tests using a camera, but full ADAS systems frequently use several sensors ([Page 7 Col 1 Par 2] “Another issue is that demonstrated use case only involved camera stimulation. There are many ADAS function solutions that utilize not only cameras but also combinations of cameras, radars, lidars, and/or ultrasonic sensors. How to stimulate or simulate all these sensors in the scope of a consistent testing methodology is another open problem.”) To this end, Farabet presents a system involving the simulation and testing of a vehicle using a wide array of sensor types ([Par 27-28] “For example, the system 100 may be used for training, testing, verifying, deploying, updating, re-verifying, and/or deploying one or more neural networks for use in an autonomous vehicle, a semi-autonomous vehicle, a robot, and/or another object. … One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments. The sensors of the vehicle(s) 102 may include, without limitation, global navigation satellite systems sensor(s) 1158 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1160, ultrasonic sensor(s) 1162, LIDAR sensor(s) 1164, inertial measurement unit (IMU) sensor(s) 1166 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1196, stereo camera(s) 1168, wide-view camera(s) 1170 (e.g., fisheye cameras), infrared camera(s) 1172, surround camera(s) 1174 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1198, speed sensor(s) 1144 (e.g., for measuring the speed of the vehicle 102), vibration sensor(s) 1142, steering sensor(s) 1140, brake sensor(s) (e.g., as part of the brake sensor system 1146), and/or other sensor types.” [Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. 
In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.).”) Overall, one of ordinary skill in the art would have recognized that combining Farabet with Solmaz would result in a system that allowed testing the capabilities of vehicles with a much wider array of sensor configurations.
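Examiner’s note (illustrative only): the Farabet passages quoted above describe a global simulation that fans out to per-sensor engine instances, each generating its own signal stream. The sketch below illustrates that fan-out structure under assumed, hypothetical class and method names; it is not Farabet’s API.

from dataclasses import dataclass, field

@dataclass
class EngineInstance:
    # One engine instance per virtual sensor, rendering the shared world from
    # that sensor's viewpoint
    sensor_name: str
    def render(self, world_state: dict) -> bytes:
        return f"{self.sensor_name} @ t={world_state['t']:.2f}s".encode()

@dataclass
class SimulationSystem:
    sensors: list = field(default_factory=list)
    def add_virtual_sensor(self, name: str) -> None:
        self.sensors.append(EngineInstance(name))
    def tick(self, world_state: dict) -> dict:
        # One global simulation step produces one signal per virtual sensor
        return {s.sensor_name: s.render(world_state) for s in self.sensors}

sim = SimulationSystem()
for name in ("camera_front", "lidar_roof", "radar_front"):
    sim.add_virtual_sensor(name)
print(sim.tick({"t": 0.01}))  # per-sensor signals for the device under test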
The combination of Solmaz and Farabet does not explicitly teach wherein the simulation module is a structural unit which is separate from the vehicle and the vehicle test bench and is positioned and operated independently of the motor vehicle and of the vehicle test bench.
Rasshofer makes obvious wherein the simulation module is a structural unit which is separate from the vehicle and the vehicle test bench and is positioned and operated independently of the motor vehicle and of the vehicle test bench. ([Fig. 11] Shows the external testing rig with main body, sensor, and stimulation unit. As can be seen, the unit is clearly separate from the vehicle or test bench. Also see [Fig. 10] [Page 58 Col 2 Par 3 – Page 59 Col 1 Par 1] “To prove the basic functionality of the OSS, a 16-channel automotive laser radar was stimulated with both synthetic and pre-recorded real-world sensor signals. With a reference laser radar sensor, the backscatter signal of a hard laser radar target located in fog was recorded (Fig. 12). This signal was replicated by the OSS and detected by another laser radar in the lab. As can be seen (Fig. 13), very accurate simulation of the target located in foggy environment could be reached.” [Page 57 Col 1 Par 2 – Page 58 Col 1 Par 1] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors. To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.” [Page 59 Col 2 Par 1] “The measured results showed that the OSS concept is a powerful approach for real-time HIL testing of automotive laser radar sensors. With the OSS concept, it is possible to test a laser radar’s response to either real-world data obtained from test drives, or simulated data from hard or soft target models. Furthermore, a combination of measured data with numeric target modeling might be another option in many situations. Data once recorded with a reference laser radar might be converted to signals stimulating other models of laser radar using Eq. (27).”)
Rasshofer is analogous art because it is within the field of driver assistance system testing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz and Farabet before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to better simulate certain test conditions, particularly weather conditions. As noted by Rasshofer, the physical testing of sensors under certain weather effects can be difficult to perform, as they would require such conditions to actually be present. The desire to test multiple sensors of different specifications and manufacturers under consistent conditions further complicates this. ([Page 57 Col 1 Par 2] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors.”) To this end, Rasshofer presents a method for the testing and physical stimulation of actual automotive laser sensors. ([Page 57 Col 2 Par 1 – Page 58 Col 1 Par 1] “To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.”) It would have been obvious to one of ordinary skill in the art that combining Rasshofer with Solmaz and Farabet would result in a system that allows for the physical testing and simulation-based stimulation of a larger set of automotive sensors, ultimately resulting in a more accurate system overall.
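Examiner’s note (illustrative only): Rasshofer’s OSS replicates the time-domain return pulse a laser radar would receive, including weather-dependent attenuation. The sketch below synthesizes such a return under an assumed Gaussian pulse shape, 1/R² geometric spreading, and a two-way fog attenuation coefficient; these modeling choices are the examiner’s illustrative assumptions, not Rasshofer’s Eq. (27).

import math

C = 299_792_458.0  # speed of light, m/s

def synthetic_return(range_m: float, fog_atten_db_per_m: float,
                     pulse_width_s: float = 5e-9, n: int = 32, dt: float = 1e-9):
    delay = 2.0 * range_m / C                                     # round-trip delay
    atten = 10.0 ** (-2.0 * fog_atten_db_per_m * range_m / 10.0)  # two-way fog loss
    peak = atten / range_m ** 2                                   # 1/R^2 spreading
    # Gaussian pulse centered at the round-trip delay
    return [peak * math.exp(-((i * dt - delay) / pulse_width_s) ** 2)
            for i in range(n)]

clear = synthetic_return(3.0, fog_atten_db_per_m=0.0)
foggy = synthetic_return(3.0, fog_atten_db_per_m=0.1)
print(f"{max(clear):.4f} vs {max(foggy):.4f}")  # fog lowers the replicated amplitude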
Claim 13. Solmaz teaches A method for testing a driver assistance system of a motor vehicle ([Abstract] “In this paper, a novel steerable chassis dynamometer test bench with a corresponding methodology is introduced to develop and test ADAS/AD technologies. A use case example of tuning the camera-based implementations of the lane keeping assistant (LKA) and adaptive cruise control (ACC) functions on an AD demonstrator vehicle utilizing this test bench is described with preliminary performance results.”) on a test track, ([Page 6 Col 1 Par 2] “Two test scenarios were performed with the described setup to assess ADAS/AD functions. On the one hand an ACC function was tested on the xRoad test bench and on the other hand a Lane Keeping Assistant (LKA) was evaluated. Therefore a driving cycle was defined to demonstrate both functions. Figure 10 shows the driving cycle in VDT [27] with a straight lane for the ACC function test in the top part of the track and a curvy road for LKA tests in the bottom part of the track.”)
comprising: simulating a test environment by at least one simulation module; ([Page 3 Col 1 Par 1] “the MobileEye traffic camera [21] was used for the use-case demonstration as the active sensing element for the ADAS functions tested and the stimulation of the MobilEye camera was achieved by integrating the system with a co-simulation framework known as the Model.CONNECT™ [22].” [Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Fig. 6 and Fig. 8] Show configuration of the system) operating the motor vehicle on the test track ([Page 6 Col 1 Par 2] “Two test scenarios were performed with the described setup to assess ADAS/AD functions. On the one hand an ACC function was tested on the xRoad test bench and on the other hand a Lane Keeping Assistant (LKA) was evaluated. Therefore a driving cycle was defined to demonstrate both functions. Figure 10 shows the driving cycle in VDT [27] with a straight lane for the ACC function test in the top part of the track and a curvy road for LKA tests in the bottom part of the track.”)
Solmaz does not explicitly teach generating a sensor signal via the at least one simulation module, performing an operation based on the sensor signal generated by the at least one simulation module, wherein the simulation module is a structural unit which is separate from the vehicle and is positioned and operated independently of the motor vehicle.
Farabet makes obvious generating a sensor signal via the at least one simulation module, performing an operation based on the sensor signal generated by the at least one simulation module ([Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.)” [Par 80] “The vehicle simulator component(s) 406 may include one or more GPUs 452 (e.g., NVIDIA QUADRO GPU(s)) that may provide, in an example, non-limiting embodiment, 8 DP/HDMI video streams that may be synchronized using sync component(s) 454 (e.g., through a QUADRO Sync II Card). These GPU(s) 452 (and/or other GPU types) may provide the sensor input to the SoC(s) 1104 (e.g., to the vehicle hardware 104).” [Par 29] “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data, one or more machine learning models (e.g., neural networks), and/or the like.”)
Farabet is analogous art because it is within the field of vehicle automation. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to generate additional testing scenarios using additional sensor configurations. For example, Solmaz notes that the system it discloses only tests using a camera, but full ADAS systems frequently use several sensors ([Page 7 Col 1 Par 2] “Another issue is that demonstrated use case only involved camera stimulation. There are many ADAS function solutions that utilize not only cameras but also combinations of cameras, radars, lidars, and/or ultrasonic sensors. How to stimulate or simulate all these sensors in the scope of a consistent testing methodology is another open problem.”) To this end, Farabet presents a system involving the simulation and testing of a vehicle using a wide array of sensor types ([Par 27-28] “For example, the system 100 may be used for training, testing, verifying, deploying, updating, re-verifying, and/or deploying one or more neural networks for use in an autonomous vehicle, a semi-autonomous vehicle, a robot, and/or another object. … One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments. The sensors of the vehicle(s) 102 may include, without limitation, global navigation satellite systems sensor(s) 1158 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1160, ultrasonic sensor(s) 1162, LIDAR sensor(s) 1164, inertial measurement unit (IMU) sensor(s) 1166 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1196, stereo camera(s) 1168, wide-view camera(s) 1170 (e.g., fisheye cameras), infrared camera(s) 1172, surround camera(s) 1174 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1198, speed sensor(s) 1144 (e.g., for measuring the speed of the vehicle 102), vibration sensor(s) 1142, steering sensor(s) 1140, brake sensor(s) (e.g., as part of the brake sensor system 1146), and/or other sensor types.” [Par 52] “The simulation system 400—e.g., represented by simulation systems 400A, 400B, 400C, and 400D, described in more detail herein—may generate a global simulation that simulates a virtual world or environment (e.g., a simulated environment) that may include artificial intelligence (AI) vehicles or other objects (e.g., pedestrians, animals, etc.), hardware-in-the-loop (HIL) vehicles or other objects, software-in-the-loop (SIL) vehicles or other objects, and/or person-in-the-loop (PIL) vehicles or other objects. The global simulation may be maintained within an engine (e.g., a game engine), or other software-development environment, that may include a rendering engine (e.g., for 2D and/or 3D graphics), a physics engine (e.g., for collision detection, collision response, etc.), sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graphs, cinematics, and/or other features. In some examples, as described herein, one or more vehicles or objects within the simulation system 400 (e.g., HIL objects, SIL objects, PIL objects, AI objects, etc.) may be maintained within their own instance of the engine. 
In such examples, each virtual sensor of each virtual object may include their own instance of the engine (e.g., an instance for a virtual camera, a second instance for a virtual LIDAR sensor, a third instance for another virtual LIDAR sensor, etc.).”) Overall, one of ordinary skill in the art would have recognized that combining Farabet with Solmaz would result in a system that allowed testing the capabilities of vehicles with a much wider array of sensor configurations.
The combination of Solmaz and Farabet does not explicitly teach wherein the simulation module is a structural unit which is separate from the vehicle and is positioned and operated independently of the motor vehicle.
Rasshofer makes obvious wherein the simulation module is a structural unit which is separate from the vehicle and is positioned and operated independently of the motor vehicle. ([Fig. 11] Shows the external testing rig with main body, sensor, and stimulation unit. As can be seen, the unit is clearly separate from the vehicle or test bench. Also see [Fig. 10] [Page 58 Col 2 Par 3 – Page 59 Col 1 Par 1] “To prove the basic functionality of the OSS, a 16-channel automotive laser radar was stimulated with both synthetic and pre-recorded real-world sensor signals. With a reference laser radar sensor, the backscatter signal of a hard laser radar target located in fog was recorded (Fig. 12). This signal was replicated by the OSS and detected by another laser radar in the lab. As can be seen (Fig. 13), very accurate simulation of the target located in foggy environment could be reached.” [Page 57 Col 1 Par 2 – Page 58 Col 1 Par 1] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors. To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.” [Page 59 Col 2 Par 1] “The measured results showed that the OSS concept is a powerful approach for real-time HIL testing of automotive laser radar sensors. With the OSS concept, it is possible to test a laser radar’s response to either real-world data obtained from test drives, or simulated data from hard or soft target models. Furthermore, a combination of measured data with numeric target modeling might be another option in many situations. Data once recorded with a reference laser radar might be converted to signals stimulating other models of laser radar using Eq. (27).”)
Rasshofer is analogous art because it is within the field of driver assistance system testing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz and Farabet before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to better simulate certain test conditions, particularly weather conditions. As noted by Rasshofer, the physical testing of sensors under certain weather effects can be difficult to perform, as they would require such conditions to actually be present. The desire to test multiple sensors of different specifications and manufacturers under consistent conditions further complicates this. ([Page 57 Col 1 Par 2] “Frequently, sensor developers and users are faced with the task to evaluate software and hardware improvements of laser radar sensors. In this case, tests under adverse weather conditions usually cannot be done due to a lack of corresponding weather conditions (e.g. lack of snow or fog) during the evaluation phase. Moreover, if sensors from different manufacturers have to be evaluated, it turns out to be very hard to exactly reproduce the same environmental conditions for both sensors.”) To this end, Rasshofer presents a method for the testing and physical stimulation of actual automotive laser sensors. ([Page 57 Col 2 Par 1 – Page 58 Col 1 Par 1] “To overcome these difficulties, a novel electro-optical laser radar target simulator system (short: OSS) has been developed and used as an automotive laser radar target simulator. The OSS is capable to exactly reproduce the optical return signals measured by reference laser radars under adverse weather conditions by highly accurate replication of pulse shape, wavelength and power levels. It can handle multiple reflections in one sensor beam and might be extended to be used with scanning laser radar systems, too. With the OSS system, it became possible to measure weather performance enhancements due to hardware and/or software changes during the laser radar design process thus shortening the design cycle and improving the sensor quality. The test signals can either be pre-recorded by the DUT itself or by a reference laser radar. Moreover, the test signals might be generated synthetically using the theory presented in Sect. 3. The system is not directly comparable to known millimeter-wave radar target generators (see e.g. Anritsu, 2009) since those only take signal delay and signal attenuation into account and cannot change the pulse’s time-domain signature.”) It would have been obvious to one of ordinary skill in the art that combining Rasshofer with Solmaz and Farabet would result in a system that allows for the physical testing and simulation-based stimulation of a larger set of automotive sensors, ultimately resulting in a more accurate system overall.
Claim 18. Farabet teaches wherein at least one environment sensor of the at least one simulation module is the at least one environment sensor of the motor vehicle. ([Par 86-86] “FIG. 5 is a flow diagram showing a method 500 for generating a simulated environment using a hardware-in-the-loop object, in accordance with some embodiments of the present disclosure. The method 500, at block B502, includes transmitting, from a first hardware component to a second hardware component, simulation data. For example, simulation component(s) 402 may transmit simulation data to one or more of the vehicle simulator component(s) 406, the vehicle simulator component(s) 420, and/or the vehicle simulator component(s) 422. In some examples, the simulation data may be representative of at least a portion of the simulated environment 410 hosted by the simulation component(s) 402, and may correspond to the simulated environment 410 with respect to at least one virtual sensor of a virtual object (e.g., a HIL object, a SIL object, a PIL object, and/or an AI object). In an example where the virtual sensor is a virtual camera, the simulation data may correspond to at least the data from the simulation necessary to generate a field of view of the virtual camera within the simulated environment 410. … In such examples, the signals between the vehicle simulator component(s) 406 (e.g., between the vehicle hardware 104 and one or more GPU(s), CPU(s), and/or computer(s) 436) may be transmitted via a CAN interface, a USB interface, an LVDS interface, an Ethernet interface, and/or another interface. In another example, such as where the virtual object is a SIL object, the signal (or data represented thereby) may be transmitted from the vehicle simulator component(s) 420 to the simulator component(s) 402, where the data included in the signal may be generated by the software stack(s) 116 executing on simulated or emulated vehicle hardware 104.” [Examiner’s note: these paragraphs describe using a simulated version of a sensor to supplant a physical version of that sensor and send that data to the physical vehicle])
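Examiner’s note (illustrative only): the paragraphs quoted above describe transmitting virtual-sensor simulation data to the physical vehicle hardware over interfaces such as CAN, USB, LVDS, or Ethernet. The sketch below shows such a data path over a UDP socket; the endpoint address and framing are the examiner’s hypothetical assumptions, not Farabet’s protocol.

import socket
import struct

ECU_ADDR = ("192.0.2.20", 6000)  # hypothetical Ethernet endpoint of the control unit

def send_virtual_frame(sock: socket.socket, frame: bytes, timestamp_us: int) -> None:
    # Timestamp + length header followed by the rendered virtual-camera payload,
    # standing in for the CAN/USB/LVDS/Ethernet interfaces named in the reference
    header = struct.pack("!QI", timestamp_us, len(frame))
    sock.sendto(header + frame, ECU_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_virtual_frame(sock, b"\x00" * 64, timestamp_us=1_000)  # tiny dummy frame
sock.close()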
Claim 19. Solmaz teaches wherein the at least one simulation module is configured to accommodate the at least one environment sensor of the motor vehicle and a stimulation device. ([Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Fig. 6 and Fig. 8] Show configuration of the system. As can be seen in Fig. 8, the camera is connected to the model, which is also connected to the display)
Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over A Novel Testbench for Development, Calibration and Functional Testing of ADAS/AD Functions (hereinafter Solmaz) in view of Farabet (US 20190303759 A1) and further in view of Influences of weather phenomena on automotive laser radar systems (hereinafter Rasshofer), as well as Rear view camera replacement 2010 2011 2012 2013 2014 Ford F-150 (hereinafter Leo).
Claim 14. Solmaz teaches further comprising: ([Page 4 Col 1 Par 4 – Col 2 Par 1] “The overall hybrid vehicle-in-the-loop simulation system implementation architecture suggested in this paper is seen in Figure 6. This setup is based on the DÜRR X-Road Curve steerable chassis dynamometer, which is coupled with the co-simulation environment utilizing Model.CONNECT™ [22], thereby enabling real time stimulation of the MobilEye camera for the testing of ADAS functions. Vehicle dynamics is simulated in AVL-VSM software within the Model.CONNECT™ environment, which is driven by the signals from the test vehicle and the test bench. The vehicle dynamics component also controls the environment simulation component to provide over-the-air (OTA) simulation of the driving scenario and a real-time 3D visual representation of the environment. This 3D representation is then linked to a TV screen placed in front of the MobilEye camera mounted on the windshield, so that the vehicle believes that it is driving on a road with dynamic components.” [Fig. 6 and Fig. 8] Show configuration of the system, including the simulation module (i.e., the VTD computer and Model CONNECT elements))
The combination of Solmaz, Farabet, and Rasshofer does not explicitly teach mounting at least one environment sensor on or in at least one module; and the at least one environment sensor of at least one module.
Leo makes obvious mounting at least one environment sensor on or in at least one module; and the at least one environment sensor of at least one module. ([3:35-4:14] Shows installation of the sensor)
Leo is analogous art because it is within the field of vehicular sensing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz, Farabet, and Rasshofer before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to extend the lifetime of a particular test vehicle. As noted by Leo, vehicle sensors have finite lifetimes, after which they must be replaced if they are to continue to be used. In particular, Leo notes that the sensor on the vehicle he is working on has a lifetime of only around four to five years. ([2:05] “… These cameras usually do fail after about four to five years, some of them even sooner.”) With this in mind, it would be beneficial to be able to replace such sensors and extend the lifetime of test vehicles for as long as possible. To this end, Leo presents a process for replacing such sensors. ([0:07] “Hi, so today we’re going to be replacing a camera on a 2012 Ford F-150” [1:28-2:00] Shows the removal of the old defective sensor [3:35-4:14] Shows installation of the replacement) Overall, one of ordinary skill in the art would have recognized that combining Leo with Solmaz, Farabet, and Rasshofer would allow parts to be replaced in the test vehicles, allowing those vehicles to be used longer and therefore driving down the long-term cost of testing.
Claim 15. Solmaz teaches a module ([Fig. 6 and Fig. 8] Show configuration of the system including the simulation module (i.e., the VTD computer and Model CONNECT elements))
The combination of Solmaz, Farabet, and Rasshofer does not explicitly teach further comprising: mounting a component, which has at least one environment sensor arranged and/or integrated therein, in or on a module
Leo makes obvious further comprising: mounting a component, which has at least one environment sensor arranged and/or integrated therein, in or on a module ([3:35-4:14] Shows installation of the sensor)
Leo is analogous art because it is within the field of vehicular sensing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz, Farabet, and Rasshofer before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to extend the lifetime of a particular test vehicle. As noted by Leo, vehicle sensors have finite lifetimes, after which they must be replaced if they are to continue to be used. In particular, Leo notes that the sensor on the vehicle he is working on has a lifetime of only around four to five years. ([2:05] “… These cameras usually do fail after about four to five years, some of them even sooner.”) With this in mind, it would be beneficial to be able to replace such sensors and extend the lifetime of test vehicles for as long as possible. To this end, Leo presents a process for replacing such sensors. ([0:07] “Hi, so today we’re going to be replacing a camera on a 2012 Ford F-150” [1:28-2:00] Shows the removal of the old defective sensor [3:35-4:14] Shows installation of the replacement) Overall, one of ordinary skill in the art would have recognized that combining Leo with Solmaz, Farabet, and Rasshofer would allow parts to be replaced in the test vehicles, allowing those vehicles to be used longer and therefore driving down the long-term cost of testing.
Claim 16. Leo teaches further comprising: detaching a component which has at least one environment sensor arranged and/or integrated therein, from the motor vehicle in order to enable said component to be mounted in the at least one simulation module. ([0:07] Shows original camera in original housing attached to the vehicle [1:59] Shows original camera and housing removed from the vehicle)
Claim 17. Leo teaches further comprising: suppressing a signal transmission of the at least one environment sensor of the motor vehicle to the control unit. ([1:59] Shows original camera and housing removed from the vehicle. Physically disconnecting the sensor from the vehicle would prevent or “suppress” any signals from the sensor from reaching the control unit)
Leo is analogous art because it is within the field of vehicular sensing. It would have been obvious to one of ordinary skill in the art to combine it with Solmaz, Farabet, and Rasshofer before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to extend the lifetime of a particular test vehicle. As noted by Leo, vehicle sensors have finite lifetimes, after which they must be replaced if they are to continue to be used. In particular, Leo notes that the sensor on the vehicle he is working on has a lifetime of only around four to five years. ([2:05] “… These cameras usually do fail after about four to five years, some of them even sooner.”) With this in mind, it would be beneficial to be able to replace such sensors and extend the lifetime of test vehicles for as long as possible. To this end, Leo presents a process for replacing such sensors. ([0:07] “Hi, so today we’re going to be replacing a camera on a 2012 Ford F-150” [1:28-2:00] Shows the removal of the old defective sensor [3:35-4:14] Shows installation of the replacement) Overall, one of ordinary skill in the art would have recognized that combining Leo with Solmaz, Farabet, and Rasshofer would allow parts to be replaced in the test vehicles, allowing those vehicles to be used longer and therefore driving down the long-term cost of testing.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael P Mirabito whose telephone number is (703)756-1494. The examiner can normally be reached M-F 10:30 am - 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente can be reached at (571) 272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.P.M./Examiner, Art Unit 2187
/EMERSON C PUENTE/Supervisory Patent Examiner, Art Unit 2187