
What happens when a Waymo runs into a tornado? Or an elephant?


An autonomous vehicle drives down a lonely stretch of highway. Suddenly, a massive tornado appears in the distance. What does the driverless vehicle do next?

This is just one of the scenarios that Waymo can simulate in the “hyper realistic” virtual world that it has just created with help from Google’s DeepMind. Waymo’s World Model is built using Genie 3, Google’s new AI world model that can generate virtual interactive spaces with text or images as prompts. But Genie 3 isn’t just for creating bad knockoffs of Nintendo games; it can also build photorealistic and interactive 3D environments “adapted for the rigors of the driving domain,” Waymo says.

Simulation is a critical component of autonomous vehicle development, enabling developers to test their vehicles in a variety of settings and scenarios — many of which may arise only on the rarest of occasions — without any physical risk of harming passengers or pedestrians. AV companies use these virtual environments to run through batteries of tests, racking up millions — or even billions — of miles in the process, in the hope of better training their vehicles for any possible “edge case” they may encounter in the real world.

What kind of edge cases is Waymo testing? In addition to the aforementioned tornado, the company can also simulate a snow-covered Golden Gate Bridge, a flooded suburban cul-de-sac with floating furniture, a neighborhood engulfed in flames, or even an encounter with a rogue elephant. In each scenario, the Waymo robotaxi’s lidar sensors generate a 3D rendering of the surrounding environment, including the obstacle in the road.

“The Waymo World Model can generate virtually any scene—from regular, day-to-day driving to rare, long-tail scenarios—across multiple sensor modalities,” the company says in a blog post.

Waymo says Genie 3 is ideal for creating virtual worlds for its robotaxis, citing three unique mechanisms: driving action control, scene layout control, and language control. Driving action control allows developers to simulate “what if” counterfactuals, while scene layout control enables customization of road layouts, traffic signals, and the behavior of other road users. Waymo describes language control as its “most flexible tool,” allowing adjustment of time of day and weather conditions. This is especially helpful when developers want to simulate low-light or high-glare conditions, in which the vehicle’s various sensors may have difficulty seeing the road ahead.

The Waymo World Model can also take real-world dashcam footage and transform it into a simulated environment, for the “highest degree of realism and factuality” in virtual testing, the company says. And it can create longer simulated scenes, such as ones that run at 4X playback speed, without sacrificing image quality or computational efficiency.

“By simulating the ‘impossible,’ we proactively prepare the Waymo Driver for some of the most rare and complex scenarios,” the company says in its blog post.


