DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. This Office action is in response to application number 18/422,694, filed on 01/25/2024, in which claims 1-20 are presented for examination.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted on 07/05/2024 has been received and considered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claims 1, 3-6, 8, 10-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 11257390 B2 to Wickman et al. (hereinafter Wickman) in view of US 20190303759 A1 to Farabet et al. (hereinafter Farabet).
Regarding claim 1, Wickman discloses One or more processors comprising one or more processing units to: receive simulation data that at least partially represents one or more portions of an interior of a […] that is a virtual representation of a […] (Wickman Column 3, line number 1-7: “Thus, upon in real-time superimposing a virtual representation of a simulated vehicle design feature—such as e.g. a complete interior and parts of an exterior or e.g. at least a simulated dashboard or a portion thereof—on the surrounding-showing video stream, said simulated vehicle design feature e.g. simulated dashboard may be evaluated in a road-driven vehicle, e.g. while said road-driven vehicle is driven in real traffic and/or along actual roads.”) wherein the one or more portions of the interior of […] include one or more virtual display devices simulating one or more real-world display devices; (Wickman Column 15, line number 1-6: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display, and the virtual functionality feature representation 8 is accordingly here represented by a virtual infotainment display.”) receive first display data generated by a hardware component of […]; and based at least on the receiving of the first display data generated by the hardware component of […], cause display of the first display data in the virtual representation at the one or more virtual display devices of the […]. (Wickman Column 11, line number 39-45: “The vehicle signal may be derived from the road-driven vehicle in any arbitrary known manner, for instance via wired and/or wireless communication therewith, and/or with support from a vehicle signal determining system and/or unit adapted for determining which input derived from the road-driven vehicle affects the simulated vehicle functionality feature.”) (Wickman Column 14, line number 42-50: “The HMD 4 comprises at least one HMD display 41, here two displays, on which it is displayed a real-time surrounding-showing video stream 6 derived from real-world image data 211 (shown in FIGS. 2-3) captured with support from the one or more vehicle-attached cameras 21. Superimposed on said video stream 6 is a virtual representation 7 of a simulated vehicle design feature to be evaluated in the road-driven vehicle 2.”) (Wickman Column 15, line number 1-6: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display,”)
Wickman does not teach virtual ego-machine […] real-world ego-machine.
However, Farabet does teach virtual ego-machine […] real-world ego-machine (Farabet Paragraph 0028: “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments.”) (Farabet Paragraph 0028: “The vehicle(s) 102 may include autonomous vehicles”) (Farabet Paragraph 0078: “the virtual vehicle that may correspond to the vehicle simulator component(s) 406 within the simulation system 400 may be modeled as a game object within an instance of a game engine.”) (Farabet Paragraph 0079: “Using HIL objects in the simulator system 400 may provide for a scalable solution that may simulate or emulate various driving conditions for autonomous software”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include virtual ego-machine […] real-world ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 3, Wickman discloses The one or more processors of claim 1, wherein the simulation data includes a representation of a lighting characteristic according to a placement of a virtual source of illumination within the interior […], and wherein the display, at the one or more virtual display devices […] of the first display data is based at least on the representation of the lighting characteristic according to the placement of the virtual source of illumination within the interior […]. (Wickman Column 8, line number 1-4: “On the other hand, should the simulated vehicle functionality feature additionally or alternatively comprise a simulated vehicle display, then a virtual vehicle display e.g. an infotainment display may be evaluated in the road-driven vehicle,”) (Wickman Column 8, line number 11-17: “Should the simulated vehicle functionality feature on the other hand additionally or alternatively comprise simulated light characteristics, then virtual light characteristics e.g. interior light characteristics may be evaluated in the road-driven vehicle, such as e.g. new and/or updated appearance and/or ambience of e.g. interior lighting.”) (Wickman Column 8, line number 34-39: “The expression that the simulated vehicle functionality feature “comprises” may refer to that the simulated vehicle functionality feature “is represented by”, whereas light “characteristics” may refer to light “position(s)”, “positioning”, “function(s)”, “ambience” and/or “appearance”.”)
Wickman does not teach […] of the virtual ego-machine.
However, Farabet does teach […] of the virtual ego-machine (Farabet Paragraph 0078: “the virtual vehicle that may correspond to the vehicle simulator component(s) 406 within the simulation system 400 may be modeled as a game object within an instance of a game engine.”) (Farabet Paragraph 0079: “Using HIL objects in the simulator system 400 may provide for a scalable solution that may simulate or emulate various driving conditions for autonomous software”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the virtual ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 4, Wickman discloses The one or more processors of claim 1, wherein the simulation data includes a representation of a scene configuration outside […] and wherein the one or more processing units are further to: (Wickman Column 1, line number 62-Column 2, line number 3: “Moreover, the vehicle feature evaluation system provides in real-time to a HMD display of the HMD, taking into consideration the HMD orientation, a virtual representation of the simulated vehicle design feature superimposed on a real-time surrounding-showing video stream derived from real-world image data captured with support from one or more vehicle-attached cameras adapted to capture surroundings external to the road-driven vehicle.”)
Wickman does not teach […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions outside of the virtual ego-machine; and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein the display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data.
However, Farabet does teach […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions outside of the virtual ego-machine; (Farabet Paragraph 0044: “For example, a pre-trained DNN may be used to compute a score for each new frame selected where the score may represent a confidence in the prediction of the DNN.”) (Farabet Paragraph 0122: “The method 1000, at block B1010, includes computing an output by the trained machine learning model. For example, the trained DNN may compute one or more outputs using the virtual sensor data. As described herein, the virtual sensor data may be encoded prior to use by the trained DNN.”) (Farabet Paragraph 0123: “The method 1000, at block B1020, includes controlling a virtual object within a simulated environment based at least in part on the output. For example, the virtual object (e.g., virtual vehicle) may be controlled within the simulated environment based at least in part on the output.”) and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein the display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data. (Farabet Paragraph 0129: “The controller(s) 1136 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 102. The controller(s) 1136 may include a first controller 1136 for autonomous driving functions, a second controller 1136 for functional safety functions, a third controller 1136 for artificial intelligence functionality (e.g., computer vision), a fourth controller 1136 for infotainment functionality,”) (Farabet Paragraph 0222: “The infotainment SoC 1130 may further be used to provide information (e.g., visual and/or audible) to a user(s) of the vehicle, such as information from the ADAS system 1138, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.)”) (Farabet Paragraph 0227: “The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning). Once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network(s) 1190, and/or the machine learning models may be used by the server(s) 1178 to remotely monitor the vehicles.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions outside of the virtual ego-machine; and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein the display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 5, Wickman discloses The one or more processors of claim 1, wherein the one or more processing units are further to: receive user input made at the one or more virtual display devices that simulate one or more real-world display devices; and in response to the receiving of the user input, trigger the hardware component to cause the display, at the one or more virtual display devices, of the first display data based at least on the receiving of user input and interpreting the user input as a touch input. (Wickman Column 10, line number 45-48: “That is, user interaction with the simulated vehicle functionality feature, which feature for instance is represented by selectable options available on a simulated vehicle display,”) (Wickman Column 10, line number 55-Column 11, line number 1: “Thereby, presence of a user, e.g. the HMD-wearing occupant, and/or e.g. a finger of said user/occupant, may be sensed in or at the position in the vehicle representing the location of the simulated vehicle functionality feature, whereby the virtual representation of the simulated vehicle functionality feature subsequently may be updated in accordance with said presence, and/or in accordance with the geographical position and/or the nature of said presence. User interaction may be detected in any arbitrary manner known in the art, e.g. by means of one or more user interaction sensors and/or a user interaction determining system, for instance comprising touch sensor(s), camera(s) and/or position detection sensor(s) worn by the user e.g. on his/her hand and/or finger.”)
Regarding claim 6, Wickman in view of Farabet teaches claim 1; accordingly, the rejection of claim 1 is incorporated above.
Wickman does not teach The one or more processors of claim 1, wherein the one or more processing units are further to: receive a user input made at a video game controller; and in response to the receiving of the user input made at the video game controller, trigger the hardware component to cause the display, at the one or more virtual display devices, of a video game feed that includes the first display data.
However, Farabet does teach The one or more processors of claim 1, wherein the one or more processing units are further to: receive a user input made at a video game controller; and in response to the receiving of the user input made at the video game controller, trigger the hardware component to cause the display, at the one or more virtual display devices, of a video game feed that includes the first display data. (Farabet Paragraph 0129: “The controller(s) 1136 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving”) (Farabet Paragraph 0145: “The controller(s) 1136 may be coupled to any of the various other components and systems of the vehicle 102, and may be used for control of the vehicle 102, artificial intelligence of the vehicle 102, infotainment for the vehicle 102, and/or the like.”) (Farabet Paragraph 0232: “FIG. 12 is a block diagram of an example computing device 1200 suitable for use in implementing some embodiments of the present disclosure.”) (Farabet Paragraph 0233: “In other words, the computing device of FIG. 12 is merely illustrative. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “desktop,” “tablet,” “client device,” “mobile device,” “hand-held device,” “game console,”) (Farabet Paragraph 0242: “The I/O ports 1212 may enable the computing device 1200 to be logically coupled to other devices including the I/O components 1214, the presentation component(s) 1218, and/or other components, some of which may be built in to (e.g., integrated in) the computing device 1200. Illustrative I/O components 1214 include a microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner, printer, wireless device, etc.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include The one or more processors of claim 1, wherein the one or more processing units are further to: receive a user input made at a video game controller; and in response to the receiving of the user input made at the video game controller, trigger the hardware component to cause the display, at the one or more virtual display devices, of a video game feed that includes the first display data as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 8, Wickman discloses The one or more processors of claim 1, wherein the hardware component includes In-Vehicle Infotainment hardware (IVI), and wherein the one or more virtual display devices simulate a real-world infotainment device […] (Wickman Column 15, line number 1-12: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display, and the virtual functionality feature representation 8 is accordingly here represented by a virtual infotainment display. The simulated vehicle functionality feature may have a fictive functionality feature location 80 relative the vehicle 2, and for the exemplifying simulated infotainment display in FIGS. 1-3, the fictive functionality feature location 80 is in an exemplifying manner essentially centered on a dashboard of the road-driven vehicle 2 and/or essentially centered on the virtual dashboard 7.”) (Note: if an infotainment display is used, it is obvious that in-vehicle infotainment hardware is also being used.)
Wickman does not teach […] of the real-world ego-machine.
However, Farabet does teach […] of the real-world ego-machine. (Farabet Paragraph 0028: “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments.”) (Farabet Paragraph 0028: “The vehicle(s) 102 may include autonomous vehicles”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the real-world ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 10, Wickman discloses The one or more processors of claim 1, wherein the simulation data is further representative of at least one of, a user interface design tool for the one or more real-world display devices […] tool for designing one or more portions of an interior portion (Wickman Column 6, line number 9-13: “According to an example, “representation” may further refer to “graphical model”, “computer-aided design, CAD, geometrics and/or design surface model, DSM, geometrics” and/or “result from computer-aided engineering, CAE”) (Wickman Column 7, line number 10-16: “Thereby, a virtual vehicle interior portion—e.g. a simulated dashboard—and/or a virtual vehicle exterior portion—e.g. a simulated hood—may be evaluated in the road-driven vehicle, such as e.g. new and/or updated design, colour, material and/or user interface thereof and/or e.g. geometrics for instance CAD and/or DSM geometrics thereof.”) (Wickman Column 7, line number 30-31: “a virtual representation of the simulated vehicle functionality feature”)
Wickman does not disclose […] of the real-world ego-machine or an ego-machine design […] of the real-world ego-machine.
However, Farabet does teach […] of the real-world ego-machine or an ego-machine design […] of the real-world ego-machine. (Farabet Paragraph 0028: “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments.”) (Farabet Paragraph 0028: “The vehicle(s) 102 may include autonomous vehicles”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the real-world ego-machine or an ego-machine design […] of the real-world ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 11, Wickman discloses The one or more processors of claim 1, wherein the one or more processors is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing real-time streaming; (Wickman Column 1, line number 62-Column 2, line number 3: “Moreover, the vehicle feature evaluation system provides in real-time to a HMD display of the HMD, taking into consideration the HMD orientation, a virtual representation of the simulated vehicle design feature superimposed on a real-time surrounding-showing video stream derived from real-world image data captured with support from one or more vehicle-attached cameras adapted to capture surroundings external to the road-driven vehicle.”) a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
Regarding claim 12, Wickman discloses A system comprising one or more processing units to: receive simulation data representing at least one of, one or more portions of an interior […] one or more portions external […], or a user input associated with one or more virtual display devices, wherein […] includes the one or more virtual display devices; (Wickman Column 3, line number 1-7: “Thus, upon in real-time superimposing a virtual representation of a simulated vehicle design feature—such as e.g. a complete interior and parts of an exterior or e.g. at least a simulated dashboard or a portion thereof—on the surrounding-showing video stream, said simulated vehicle design feature e.g. simulated dashboard may be evaluated in a road-driven vehicle, e.g. while said road-driven vehicle is driven in real traffic and/or along actual roads.”) (Wickman Column 15, line number 1-6: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display, and the virtual functionality feature representation 8 is accordingly here represented by a virtual infotainment display.”) […] and based at least in part on receiving the response data via the hardware component that is capable of controlling the one or more functions of the one or more real-world devices […] cause the one or more virtual display devices […] to present first display data. (Wickman Column 11, line number 39-45: “The vehicle signal may be derived from the road-driven vehicle in any arbitrary known manner, for instance via wired and/or wireless communication therewith, and/or with support from a vehicle signal determining system and/or unit adapted for determining which input derived from the road-driven vehicle affects the simulated vehicle functionality feature.”) (Wickman Column 14, line number 42-50: “The HMD 4 comprises at least one HMD display 41, here two displays, on which it is displayed a real-time surrounding-showing video stream 6 derived from real-world image data 211 (shown in FIGS. 2-3) captured with support from the one or more vehicle-attached cameras 21. Superimposed on said video stream 6 is a virtual representation 7 of a simulated vehicle design feature to be evaluated in the road-driven vehicle 2.”) (Wickman Column 15, line number 1-6: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display,”)
Wickman does not teach […] of a virtual ego-machine, […] transmit the simulation data to one or more network devices to generate, via a hardware component, response data based at least on a mapping of the simulation data into one or more values that are processed by the hardware component, wherein the hardware component is capable of controlling one or more functions of one or more real-world devices of a real-world ego-machine; […] of the virtual ego-machine.
However, Farabet does teach […] of a virtual ego-machine, […] (Farabet Paragraph 0078: “the virtual vehicle that may correspond to the vehicle simulator component(s) 406 within the simulation system 400 may be modeled as a game object within an instance of a game engine.”) (Farabet Paragraph 0079: “Using HIL objects in the simulator system 400 may provide for a scalable solution that may simulate or emulate various driving conditions for autonomous software”) […] transmit the simulation data to one or more network devices to generate, via a hardware component, response data based at least on a mapping of the simulation data into one or more values that are processed by the hardware component, (Farabet Paragraph 0077: “For example, a CAN interface, LVDS interface, USB interface, Ethernet interface, InfiniBand (IB) interface, and/or other interfaces may be used by the vehicle hardware 104 to communicate signals with other components of the physical vehicle. As such, in the simulation system 400, the vehicle simulator component(s) 406 (and/or other component(s) of the simulation system 400 in addition to, or alternative from, the vehicle simulator component(s) 406) may need to be configured for use with the vehicle hardware 104.”) (Farabet Paragraph 0123: “As such, where the trained DNNs suffer, fine-tuning may be executed to improve, validate, and verify the DNNs prior to deployment of the DNNs in real-world, physical vehicles (e.g., the vehicle 102).”) (Farabet Paragraph 0222: “The vehicle 102 may further include the infotainment SoC 1130 (e.g., an in-vehicle infotainment system (IVI)”) (Farabet Paragraph 0227: “The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning). Once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network(s) 1190, and/or the machine learning models may be used by the server(s) 1178 to remotely monitor the vehicles.”) wherein the hardware component is capable of controlling one or more functions of one or more real-world devices of a real-world ego-machine; (Farabet Paragraph 0029: “In any example, the vehicle hardware 104 may include the hardware of the vehicle 102 that is used to control the vehicle 102 through real-world environments based on the sensor data,”) […] of the real-world ego-machine, (Farabet Paragraph 0028: “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments.”) (Farabet Paragraph 0028: “The vehicle(s) 102 may include autonomous vehicles”) […] of the virtual ego-machine. (Farabet Paragraph 0078: “the virtual vehicle that may correspond to the vehicle simulator component(s) 406 within the simulation system 400 may be modeled as a game object within an instance of a game engine.”) (Farabet Paragraph 0079: “Using HIL objects in the simulator system 400 may provide for a scalable solution that may simulate or emulate various driving conditions for autonomous software”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of a virtual ego-machine, […] transmit the simulation data to one or more network devices to generate, via a hardware component, response data based at least on a mapping of the simulation data into one or more values that are processed by the hardware component, wherein the hardware component is capable of controlling one or more functions of one or more real-world devices of a real-world ego-machine; […] of the virtual ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 14, Wickman discloses The system of claim 12, wherein the simulation data includes a representation of a lighting characteristic according to a placement of a virtual source of illumination within the interior […] and wherein a display, at the one or more virtual display devices […] of the first display data, is based at least on the representation of the lighting characteristic according to the placement of the virtual light inside […]. (Wickman Column 8, line number 1-4: “On the other hand, should the simulated vehicle functionality feature additionally or alternatively comprise a simulated vehicle display, then a virtual vehicle display e.g. an infotainment display may be evaluated in the road-driven vehicle,”) (Wickman Column 8, line number 11-17: “Should the simulated vehicle functionality feature on the other hand additionally or alternatively comprise simulated light characteristics, then virtual light characteristics e.g. interior light characteristics may be evaluated in the road-driven vehicle, such as e.g. new and/or updated appearance and/or ambience of e.g. interior lighting.”) (Wickman Column 8, line number 34-39: “The expression that the simulated vehicle functionality feature “comprises” may refer to that the simulated vehicle functionality feature “is represented by”, whereas light “characteristics” may refer to light “position(s)”, “positioning”, “function(s)”, “ambience” and/or “appearance”.”)
Wickman does not teach […] of the virtual ego-machine.
However, Farabet does teach […] of the virtual ego-machine (Farabet Paragraph 0078: “the virtual vehicle that may correspond to the vehicle simulator component(s) 406 within the simulation system 400 may be modeled as a game object within an instance of a game engine.”) (Farabet Paragraph 0079: “Using HIL objects in the simulator system 400 may provide for a scalable solution that may simulate or emulate various driving conditions for autonomous software”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the virtual ego-machine as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 15, Wickman discloses The system of claim 12, wherein the simulation data includes a representation of a scene configuration outside […] and wherein the one or more processing units are further to: (Wickman Column 1, line number 62-Column 2, line number 3: “Moreover, the vehicle feature evaluation system provides in real-time to a HMD display of the HMD, taking into consideration the HMD orientation, a virtual representation of the simulated vehicle design feature superimposed on a real-time surrounding-showing video stream derived from real-world image data captured with support from one or more vehicle-attached cameras adapted to capture surroundings external to the road-driven vehicle.”)
Wickman does not teach […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions external to the virtual ego-machine; and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein a display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data.
However, Farabet does teach […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions external to the virtual ego-machine; (Farabet Paragraph 0044: “For example, a pre-trained DNN may be used to compute a score for each new frame selected where the score may represent a confidence in the prediction of the DNN.”) (Farabet Paragraph 0122: “The method 1000, at block B1010, includes computing an output by the trained machine learning model. For example, the trained DNN may compute one or more outputs using the virtual sensor data. As described herein, the virtual sensor data may be encoded prior to use by the trained DNN.”) (Farabet Paragraph 0123: “The method 1000, at block B1020, includes controlling a virtual object within a simulated environment based at least in part on the output. For example, the virtual object (e.g., virtual vehicle) may be controlled within the simulated environment based at least in part on the output.”) and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein a display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data. (Farabet Paragraph 0129: “The controller(s) 1136 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 102. The controller(s) 1136 may include a first controller 1136 for autonomous driving functions, a second controller 1136 for functional safety functions, a third controller 1136 for artificial intelligence functionality (e.g., computer vision), a fourth controller 1136 for infotainment functionality,”) (Farabet Paragraph 0222: “The infotainment SoC 1130 may further be used to provide information (e.g., visual and/or audible) to a user(s) of the vehicle, such as information from the ADAS system 1138, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.)”) (Farabet Paragraph 0227: “The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning). Once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network(s) 1190, and/or the machine learning models may be used by the server(s) 1178 to remotely monitor the vehicles.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] of the virtual ego-machine, […] predict, based on the representation of the scene configuration, virtual sensor data representative of the one or more portions external to the virtual ego-machine; and trigger the hardware component to generate real-world video data based at least on processing the virtual sensor data, and wherein a display, at the one or more virtual display devices of the virtual ego-machine, of the first display data includes the real-world video data based at least on the prediction of the virtual sensor data as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 16, Wickman discloses The system of claim 12, wherein the one or more processing units are further to: receive user input made at the one or more virtual display devices that simulate one or more real-world display devices; and in response to the receiving of the user input, trigger the hardware component to cause display, at the one or more virtual display devices, of the first display data based at least on the receiving of user input and interpreting the user input as a touch input. (Wickman Column 10, line number 45-48: “That is, user interaction with the simulated vehicle functionality feature, which feature for instance is represented by selectable options available on a simulated vehicle display,”) (Wickman Column 10, line number 55-Column 11, line number 1: “Thereby, presence of a user, e.g. the HMD-wearing occupant, and/or e.g. a finger of said user/occupant, may be sensed in or at the position in the vehicle representing the location of the simulated vehicle functionality feature, whereby the virtual representation of the simulated vehicle functionality feature subsequently may be updated in accordance with said presence, and/or in accordance with the geographical position and/or the nature of said presence. User interaction may be detected in any arbitrary manner known in the art, e.g. by means of one or more user interaction sensors and/or a user interaction determining system, for instance comprising touch sensor(s), camera(s) and/or position detection sensor(s) worn by the user e.g. on his/her hand and/or finger.”)
Regarding claim 17, Wickman discloses The system of claim 12, wherein the hardware component includes at least one of, an In-Vehicle Infotainment hardware (IVI) (Wickman Column 15, line number 1-12: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display, and the virtual functionality feature representation 8 is accordingly here represented by a virtual infotainment display. The simulated vehicle functionality feature may have a fictive functionality feature location 80 relative the vehicle 2, and for the exemplifying simulated infotainment display in FIGS. 1-3, the fictive functionality feature location 80 is in an exemplifying manner essentially centered on a dashboard of the road-driven vehicle 2 and/or essentially centered on the virtual dashboard 7.”) (Note: if an infotainment display is used, it is obvious that in-vehicle infotainment hardware is also being used.)
Wickman does not teach […] and a cockpit Electric Control Unit (ECU).
However, Farabet does teach […] and a cockpit Electric Control Unit (ECU). (Farabet Paragraph 0224: “In some examples, information may be displayed and/or shared among the infotainment SoC 1130 and the instrument cluster 1132. In other words, the instrument cluster 1132 may be included as part of the infotainment SoC 1130, or vice versa.”) (Note: since the infotainment SoC and the instrument cluster may be combined, the combined unit functions as a cockpit ECU.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wickman to include […] and a cockpit Electric Control Unit (ECU) as taught by Farabet. This modification would have been beneficial in that vehicle hardware configured for installation within an autonomous vehicle may be used to execute the software stack(s) within the simulated environment. In addition, the virtual sensor data may be encoded to a format that is familiar to the software stack(s) (e.g., is bit-to-bit the same as the physical sensor data used for training the DNNs). As a result, the testing, training, verification, and/or validation of the DNNs may be substantially identical to employing the hardware/software components in a physical vehicle in a real-world environment. [Farabet Paragraph 0007]
Regarding claim 18, Wickman discloses The system of claim 12, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing real-time streaming; (Wickman Column 1, line number 62-Column 2, line number 3: “Moreover, the vehicle feature evaluation system provides in real-time to a HMD display of the HMD, taking into consideration the HMD orientation, a virtual representation of the simulated vehicle design feature superimposed on a real-time surrounding-showing video stream derived from real-world image data captured with support from one or more vehicle-attached cameras adapted to capture surroundings external to the road-driven vehicle.”) a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
Regarding claim 19, Wickman discloses A method comprising: receiving simulation data representing at least one of, one or more portions within […], one or more portions external […] or a user input associated with one or more virtual display devices; (Wickman Column 3, line number 1-7: “Thus, upon in real-time superimposing a virtual representation of a simulated vehicle design feature—such as e.g. a complete interior and parts of an exterior or e.g. at least a simulated dashboard or a portion thereof—on the surrounding-showing video stream, said simulated vehicle design feature e.g. simulated dashboard may be evaluated in a road-driven vehicle, e.g. while said road-driven vehicle is driven in real traffic and/or along actual roads.”) (Wickman Column 15, line number 1-6: “The simulated vehicle functionality feature here comprises an exemplifying simulated vehicle display, more specifically a simulated infotainment display, and the virtual functionality feature representation 8 is accordingly here represented by a virtual infotainment display.”) and based at least in part on receiving the response data via the hardware component that is capable of controlling the one or more functions of the one or more real-world devices […], causing display of first display data representing one or more portions of an interior […] (Wickman Column 11, line number 39-45: “The vehicle signal may be derived from the road-driven vehicle in any arbitrary known manner, for instance via wired and/or wireless communication therewith, and/or with support from a vehicle signal determining system and/or unit adapted for determining which input derived from the road-driven vehicle affects the simulated vehicle functionality feature.”) (Wickman Column 14, line number 42-50: “The HMD 4 comprises at least one HMD display 41, here two displays, on which it is displayed a real-time surrounding-showing video stream 6 derived from real-world image data 211 (shown in FIGS. 2-3) captured with support from the one or more vehicle-attached cameras 21