Google’s Project Genie Makes Real-time Explorable Virtual Worlds, Offering a Peek Into VR’s Future

DeepMind, Google’s AI research lab, announced Genie 3 last August, showing off an AI system capable of generating interactive virtual environments in real-time. Now, Google has released an experimental prototype that Google AI subscribers can try today. Granted, you can’t generate a VR world on the fly just yet, but we’re getting tantalizingly close.

The News

Project Genie is what Google calls an “experimental research prototype,” so it isn’t exactly the ‘AI game machine’ of your dreams just yet. Essentially, it lets users create, explore, and modify interactive virtual environments through a web interface.

The system is a lot like previous image and video generators, which require inputting a text prompt and/or uploading reference images, although Project Genie takes this a few steps further.

Instead of one, Project Genie has two main prompt boxes—one for the environment and one for the character. A third prompt box also lets you modify the initial look before fully generating the environment (e.g. make the sword bigger, change the trees to autumn foliage).

As an early research system, Project Genie has limitations, Google says in a blog post. Generated environments may not closely match real-world physics or prompts, character control can be inconsistent, sessions are limited to 60 seconds, and some previously announced features are not yet included.

And for now, the only thing you can output is a video of the experience, although you can explore and remix other ‘worlds’ available in the gallery.

Project Genie is now rolling out to Google AI Ultra subscribers in the US, aged 18 and over, with broader availability planned for some point in the future. You can find out more here.

My Take

There are a lot of hurdles to get over before we can see anything like Project Genie running on a VR headset.

One of the most important hurdles is undoubtedly cloud streaming. Granted, cloud gaming already exists on VR headsets, but it isn’t great right now since latency varies widely depending on how close you are to your service’s data center. What’s more, the big names in cloud gaming today (i.e. NVIDIA GeForce Now, Xbox Cloud Gaming) are generally geared towards flatscreen games, where the bar for render and input latency is much lower than on VR headsets, which generally require motion-to-photon latency of 20ms or less to avoid user discomfort.

And that’s not taking into account that Project Genie would also need to somehow render the world with stereoscopy in mind—which may present its own problems, since the system would need two distinct points of view that resolve into a single, solid 3D picture.

As far as I understand, world models created in Project Genie are probabilistic, i.e. objects can behave slightly differently each time, which is part of the reason Genie 3 can only support a few minutes of continuous interaction at a time. Genie 3’s world generation also has a tendency to drift from prompts, which can produce undesired results.

So while it’s unlikely we’ll see a VR version of this in the very near future, I’m excited to see the baby steps leading to where it could eventually go. The thought of being able to casually order up a world on the fly, Holodeck-style—one I can explore, be it past, present, or any fiction of my choosing—feels so much more interesting to me from a learning perspective. One of my most-used VR apps to date is Google Earth VR, and I can only imagine a more detailed and vibrant version of that helping me learn foreign languages, time travel, and tour the world virtually.

Before we even get that far though, there’s a distinct possibility that the Internet will be overrun by ‘game slop’, which feels like asset flipping taken to the extreme. It will also likely expose game developers to the same struggles that other digital artists are facing right now when it comes to AI sampling and recreating copyrighted works—albeit on a whole new level (GTA VI anyone?).

That, and I can’t shake the feeling that the future is shaping up to be a very odd, but hopefully also very interesting and not entirely terrible place. I can imagine a future wherein photorealistic, AI-driven environments go hand-in-hand with brain-computer interfaces (BCIs)—two subjects Valve has been researching for years—serving up the virtual reality I’m actually waiting for.

The post Google’s Project Genie Makes Real-time Explorable Virtual Worlds, Offering a Peek Into VR’s Future appeared first on Road to VR.

XREAL Rolls out Automatic Real-time 3D Conversion Feature for Its AR Glasses

XREAL has rolled out a real-time 3D conversion feature to its flagship AR glasses, which the company says converts any 2D content to 3D.

Xreal initially launched its ‘Real 3D’ software on the Xreal 1S AR glasses earlier this month; now the company has rolled out an update to Xreal One and One Pro that brings optional real-time 3D conversion of 2D content.

The company says Real 3D doesn’t require special video files, apps, DRM-protected content, or external software. All of the conversion is done in real-time, on-device, via the company’s X1 spatial computing chipset built into the One series glasses.

XREAL One Pro | Image courtesy XREAL

“Because it doesn’t depend on proprietary players or formats, Real 3D works across connected desktops, consoles, phones, and other devices,” the company says, noting that content includes movies, streaming videos, locally stored media, and games.

Xreal tells Road to VR it does this by using the X1 chip’s NPU (neural processing unit) to perform depth estimation inference on every incoming frame and to generate the corresponding left- and right-eye views with depth relationships.
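Xreal hasn’t published implementation details beyond this, but the general technique, often called depth-image-based rendering (DIBR), can be sketched in a few lines. Below is a minimal, illustrative Python sketch (the function name and parameters are my own assumptions, not Xreal’s code): each pixel is shifted horizontally by a disparity inversely proportional to its estimated depth, producing separate left- and right-eye views.

```python
import numpy as np

def synthesize_stereo(frame, depth, max_disparity=8):
    """Naive depth-image-based rendering.

    frame: (H, W, 3) uint8 image.
    depth: (H, W) floats in (0, 1], where smaller values mean closer
    objects, which therefore get larger disparity (more apparent shift).
    Returns (left_view, right_view).
    """
    h, w, _ = frame.shape
    # Disparity in pixels: near pixels (small depth) shift the most.
    disparity = (max_disparity * (1.0 - depth)).round().astype(int)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        d = disparity[y]
        # Shift pixels right for the left eye, left for the right eye.
        lx = np.clip(cols + d, 0, w - 1)
        rx = np.clip(cols - d, 0, w - 1)
        left[y, lx] = frame[y, cols]
        right[y, rx] = frame[y, cols]
    # Disocclusion "holes" (pixels nothing mapped to) remain zero here.
    return left, right
```

A real pipeline would also fill those disocclusion holes (e.g. via inpainting or background stretching) and run the depth model once per incoming frame, which is presumably the workload Xreal offloads to the X1’s NPU.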

The company says it’s still investigating Real 3D’s latency. Notably, the company says that when compared to other display modes, its real-time 3D conversion results in “slightly higher power consumption,” something Xreal says is around 300mW.

Additionally, Xreal tells Road to VR that its Real 3D technology is entirely developed in-house.

“We trained a highly compact model that balances performance and power consumption specifically for integrating into the X1 chip. While real-time 3D conversion is relatively straightforward on high-end GPUs, we have not found any comparable solutions in the industry that can operate effectively on low-power platforms like X1.”

The Beijing-based AR glasses maker sells a fairly wide range of AR glasses, all of which target traditional content consumption, such as flatscreen games, TV, and film, running on its own Android-based operating system.

Alongside announcing it had secured a $100 million financing round, Xreal also recently became Google’s lead AR partner following a multi-year extension of an agreement initially struck in late 2024.

As a result, Xreal aims to bring Google’s Android XR operating system to its AR glasses over the next few years, which is slated to kick off with Xreal’s Project Aura when it launches at some point this year. In the meantime, you can check out our recent hands-on with Project Aura here.

The post XREAL Rolls out Automatic Real-time 3D Conversion Feature for Its AR Glasses appeared first on Road to VR.

Thief VR: Legacy of Shadow Refines Gameplay Mechanics In Latest Update

In its fourth major update since release, Thief VR: Legacy of Shadow refines its gameplay mechanics for a smoother experience.

Available now on all major platforms, Thief VR: Legacy of Shadow has launched its 4.0 update, which focuses on refining the gameplay experience for an overall smoother feel. With the 3.0 patch released just shy of two weeks ago, it’s clear developer Maze Theory and publisher Vertigo Games are on top of things, quick to apply feedback to deliver a better game. Other improvements include more flexible customization options for the Steam version, such as higher-quality dynamic shadows and character models, and general quality-of-life bug fixes.

One of the flagship upgrades in this new patch is revamped crouch mechanics. As a marquee ability, crouching is something players spend a lot of time doing while hiding in the shadows. While the game was never broken, since its initial release certain aspects have felt like they could have done with more time in the oven, as we mentioned in our review: “Sometimes objects fail to load in properly, like a treasure chest going transparent whenever I face it from the front—or an entire basement visually deloading momentarily if I walk too close to an adjoining wall.”


A gameplay video recorded by UploadVR showcasing patch 4.0.

Previous upgrades mainly brought visual improvements and continued stability to the experience. No DLC or sequel has been mentioned as yet, but this ongoing support is at least a step in the right direction.

Thief VR: Legacy of Shadow is available now on Meta Quest, PlayStation VR2, and Steam.