Google is working on an augmented reality headset, known as Project Iris, with a tentative target launch of 2024.
The Verge first reported the project, citing two people familiar with the headset, and added that, due to power constraints, the system will rely on data centers to operate.
Details are limited, with the publication saying that Google plans to "use its data centers to remotely render some graphics and beam them into the headset via an Internet connection."
How this would work is not entirely clear, but Google would likely rely on Edge data centers for the lower-latency workloads, and on larger facilities for pre-loading content that is less latency-sensitive.
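As a rough illustration - and only an illustration, since Google has not described its architecture - a hypothetical scheduler might route each rendering task by its latency budget. The names and thresholds below are assumptions, not details of Project Iris:

```python
# Hypothetical sketch of latency-based workload placement for split
# rendering. The names and thresholds are illustrative assumptions,
# not details of Project Iris.

EDGE_RTT_MS = 15    # assumed round trip to a nearby Edge site
CLOUD_RTT_MS = 60   # assumed round trip to a large regional data center

def place_workload(latency_budget_ms: float, on_device_capable: bool) -> str:
    """Pick where a rendering task can run, given its latency budget."""
    if latency_budget_ms < EDGE_RTT_MS:
        # Too tight for any network hop: must run on the headset
        return "device" if on_device_capable else "drop"
    if latency_budget_ms < CLOUD_RTT_MS:
        return "edge"   # low-latency work goes to a nearby Edge site
    return "cloud"      # latency-tolerant work, e.g. pre-loading assets

# Head-locked UI must render locally; scene assets can be staged remotely
print(place_workload(10, on_device_capable=True))    # -> device
print(place_workload(40, on_device_capable=False))   # -> edge
print(place_workload(500, on_device_capable=False))  # -> cloud
```

In practice the split would be far more dynamic, but the principle - the tighter the budget, the closer the compute - is the same one driving Edge buildouts.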
Augmented reality - where virtual objects are layered over the real world - requires extremely low latency. Delays of just tens of milliseconds can break the illusion, and even cause motion sickness.
The device will also have its own inherent latency - the cameras, sensors, onboard compute, and display systems all take time - so the window for offloaded processing will be extremely small.
How large that window, or the overall allowable latency, will be is not yet clear. "There is more research into latency for VR systems than for AR systems, mainly because the technology is oftentimes easier to handle," a 2020 study of latency and cybersickness noted.
For virtual reality, Oculus' John Carmack found that latency should be below 50ms to feel responsive, but recommended less than 20ms. Prior research found that humans can detect latency below 17ms.
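Those figures suggest just how tight the budget is. The back-of-envelope sketch below assumes Carmack's 20ms target; the per-stage on-device costs are purely illustrative, not measurements of any real headset:

```python
# Back-of-envelope motion-to-photon budget. The 20ms target is Carmack's
# recommendation above; the per-stage costs below are illustrative
# assumptions, not measurements of any real headset.

TARGET_MS = 20.0  # recommended end-to-end (motion-to-photon) latency

local_pipeline_ms = {
    "camera/sensor capture": 4.0,
    "tracking and fusion": 2.0,
    "composition and display scan-out": 8.0,
}

spent = sum(local_pipeline_ms.values())
network_budget = TARGET_MS - spent
print(f"Local pipeline: {spent:.1f}ms; {network_budget:.1f}ms left for a "
      "full network round trip plus remote rendering")
# -> Local pipeline: 14.0ms; 6.0ms left for a full network round trip ...
```

Under those illustrative numbers, barely a quarter of the budget is left for the entire network round trip - which is why proximity to an Edge site matters so much.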
Latency concerns have stymied another form of the virtual world - cloud video game streaming. When Google launched its Stadia service in late 2019, promising to offload game compute to its data centers, the response was mixed.
"The real sticking point is latency," the respected video game tech analysis group Digital Foundry found. "There's no skirting around the fact that Doom Eternal is inherently less fun to play on Stadia."
Others reported fewer issues, likely depending on their proximity to a Google data center. The Stadia service has mostly been a failure, but that was primarily due to poor marketing, a small content library, and few features - rival Microsoft has found much more success with its cloud gaming service (although it, too, has faced latency complaints).
Should Google proceed with its Project Iris headset, and offload compute to the cloud, it would follow a strategy similar to the one employed by its mobile division.
For compute-intensive workloads like voice recognition (in Google Assistant) and image recognition (Google Lens), it began by using Google Cloud. However, as mobile compute power increased, it began shifting more of those processes back onto local hardware, reducing latency - and cutting costs at its data centers.
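The pattern looks something like the sketch below - a hypothetical fallback chain in which every name is a stand-in invented for illustration, not drawn from Google's actual APIs:

```python
# Sketch of the cloud-first, local-when-possible pattern described above.
# Every name here is a hypothetical stand-in, not Google's actual API.

class ModelUnavailable(Exception):
    """Raised when no on-device model can handle the request."""

def run_local_model(audio: bytes) -> str:
    # Stand-in for an on-device recognizer on newer hardware
    if len(audio) > 1024:  # pretend long clips exceed local capacity
        raise ModelUnavailable
    return "local transcript"

def call_cloud_service(audio: bytes) -> str:
    # Stand-in for a network request to a data center
    return "cloud transcript"

def recognize(audio: bytes) -> str:
    """Prefer on-device inference (lower latency, no data center cost),
    falling back to the cloud when local hardware cannot cope."""
    try:
        return run_local_model(audio)
    except ModelUnavailable:
        return call_cloud_service(audio)

print(recognize(b"short clip"))  # -> local transcript
print(recognize(b"x" * 2048))    # -> cloud transcript
```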
At the same time, it has invested in data center hardware built specifically for those offloaded workloads. In the early 2010s, the company realized that if all of the world's Android users used voice search for just three minutes a day, it would need to double its data center portfolio to handle the processing, Wired reports.
As the voice recognition service was based on deep neural networks, the company developed a chip built specifically for running them - the Tensor Processing Unit - which helped it meet the surge in demand.
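A rough reconstruction shows how quickly that projection balloons. Every figure below is an illustrative assumption - user count, usage, and compute cost - not a number Google actually used:

```python
# Rough reconstruction of the back-of-envelope projection above. All
# figures are illustrative assumptions, not the numbers Google used.

android_users = 1e9                  # assume ~1 billion Android users
minutes_per_user_per_day = 3         # the figure from the Wired report
cpu_seconds_per_audio_second = 1.0   # assume 1 CPU-second per second of speech

audio_seconds_per_day = android_users * minutes_per_user_per_day * 60
cpu_seconds_per_day = audio_seconds_per_day * cpu_seconds_per_audio_second

# Average number of CPUs that would be busy around the clock
concurrent_cpus = cpu_seconds_per_day / 86_400
print(f"~{concurrent_cpus:,.0f} CPUs running continuously")  # ~2,083,333
```

Even under these made-up figures, the result is millions of CPUs running around the clock - the scale of problem that pushed Google toward purpose-built silicon.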
It is not known if Google plans to meet AR compute demands with new bespoke data center hardware, or use its existing enormous data center infrastructure.
It is also not yet clear how heavily Google plans to invest in the Edge, but it has already begun to build out smaller data centers in more regions.
“We've been predominantly in five campuses within EMEA,” Google’s regional director of EMEA data center infrastructure delivery, Paul Henry, told DCD last year.
“And those have been fairly large scale data centers, ranging anywhere from 32MW to 60MW per data center,” with multiple facilities on each campus. “But we're seeing a bit of a shift as to our strategy - scale for us now in the region is really looking at how do we get into all the metros that we need to expand into, and that's happening at a rapid pace.”
Google also signed a 5G Edge computing partnership with Verizon for North America, although the telco has similar deals with Amazon Web Services and Microsoft Azure.
The latter cloud company has its own AR initiative, Microsoft HoloLens, which first launched as a pre-production version in 2016. A large and expensive headset, it performs much of its compute locally.
Last year, Microsoft won a $22 billion deal to deliver 120,000 custom HoloLens kits to the US military, pairing the augmented reality hardware with cloud and Edge services. However, the rollout was delayed in October.
Amazon is not yet known to be working on an AR headset, although it has built screenless and cameraless Echo Frames to bring its voice assistant to specs.
Apple is developing an ambitious mixed reality (AR and VR) headset, with thousands of employees working on it. Earlier this month, Bloomberg reported that the iPhone maker was delaying the project due to problems with "overheating, cameras, and software." It is a high-spec system, expected to cost at least $2,000. The company is also thought to be working on a cheaper AR-only device, which is still years away.
The current market leader in VR is Oculus, which was acquired by Facebook for $2.3 billion in 2014. After rebranding as Meta last year, Facebook said it would spend more than $10 billion a year on Facebook Reality Labs, developing 'metaverse' equipment - including AR and VR headsets.