VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles, by Alexander Amini and 7 other authors

Abstract: Simulation has the potential to transform the development of robust algorithms for mobile agents deployed in safety-critical scenarios. However, the poor photorealism and lack of diverse sensor modalities of existing simulation engines remain key hurdles towards realizing this potential. Here, we present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world.