When Apple unveiled its Vision Pro in June 2023, it called the device a “revolutionary spatial computer”. But most tech enthusiasts preferred calling it a ‘mixed reality headset’.
In February, just around the time it launched the product, the iPhone maker went a step too far when it told developers not to describe the Vision Pro "generally as a 'headset'" in the listings for their apps. It also told them to call their apps "spatial computing apps" — not augmented reality (AR), virtual reality (VR), or mixed reality (MR) apps.
Apple’s policing of language is an antithesis of its own four-decade-old Super Bowl ad, which introduced the Macintosh personal computer to a world hooked to television sets by skewering the Orwellian ideas of ‘doublethink’ and ‘newspeak’. It would serve Apple well to keep the rich history of reality-augmenting technology in mind before forcing specific terms on app developers.
So, what is the history of reality-augmenting technology and where did it all start?
A brief history
In the late 1960s, pioneering computer scientist Ivan Sutherland, along with his student Bob Sproull, designed the first-ever head-mounted display (HMD). The device consisted of two cathode ray tubes, mounted on either side of the user’s head, enabling the wearer to see a three-dimensional wireframe room. When the user moved, the device would redraw the wireframe room factoring in the user’s changed position and perspective.
The HMD built by Dr. Sutherland was so heavy that it had to be suspended from a large overhead beam. It was jocularly called “The Sword of Damocles” because of the way the apparatus hung over the wearer’s head.
On what such a technology could potentially do in future, Dr. Sutherland, in his paper, wrote that “displayed material can be made either to hang disembodied in space or to coincide with maps, desktops, walls, or the keys of a typewriter.”
But, it took another two decades for HMDs to move in that direction when some large industrial companies began testing the nascent technology to improve productivity on the shop floor and cut training costs. Boeing was one of the first few companies to test and use HMDs to train its shop floor workers.
Testing waters
In 1990, the aircraft manufacturer was looking to cut employee training costs. Making aircraft was — and is even now — a complex task. And most processes are still not automated as they require high levels of dexterity and perceptual awareness.
Back then, the company used diagrams and marking devices to guide workers in the factory to put parts together. Workers had to access a large amount of information, much of it coming from CAD systems, to assemble aircraft.
When engineering designs were updated to factor in new technical changes, it took more time to pass that information along, delaying the already expensive process.
Two Boeing researchers, Thomas Caudell and David Mizell, analysed the situation and developed a goggle system that would let workers access the CAD system directly. They built a headset that augmented the user’s field of vision with updated designs.
Like Boeing, throughout the 1990s and 2000s, several organisations tried their hand at making HMD-inspired products. Several military organisations were building and prototyping devices that gave the wearer an additional layer of information. Their goal was to make devices that would either create virtual environments or add layers of information over the user’s field of vision with real-time information.
Changing lanes
While those experiments led to a new world of immersive experience technologies, they did not receive as much spotlight as other advances in personal consumer tech in the 2000s.
But, in the early 2010s, when two versions of HMD-based products hit the market, that perception began to change. One was launched by a start-up and another by a tech giant. The former became a success, while the latter failed.
The one that did not succeed was Google Glass. The search giant’s AR glasses were definitely ahead of their time, giving users an augmented view of the space they were in. But they were not well received by consumer tech enthusiasts.
The product was criticised for its limited functionality, and some noted that the spectacles violated the privacy of people around the user. The search giant re-packaged it for business customers, after which the product took on different shapes and sizes before it was shelved once and for all in 2022.
While Google’s AR spectacles suffered, another start-up succeeded.
Oculus, founded by Palmer Luckey, built an HMD prototype in the early 2010s that took the gaming world by storm. The start-up was soon bought by Facebook, which now operates as Meta Platforms. Its first consumer headset, the Oculus Rift, has since given way to a line of devices sold under the Meta Quest brand.
Mr. Luckey’s idea of building VR devices was focused on making gaming immersive. But the company’s new boss, Mark Zuckerberg, had other plans. He was looking for a device that would make his moonshot a reality — the metaverse.
Subsequently, Meta acquired Surreal Vision, a U.K.-based company that specialised in real-time 3D scene reconstruction, and folded it into Oculus VR. The integration brought 3D scene-mapping capabilities to Oculus’s headsets, enabling the company to build products for both VR and AR — in effect, a mixed reality headset.
Apple’s Vision
Apple is now walking into this world with a $3,500 headset. The iPhone maker’s headset comes with state-of-the-art hand- and eye-tracking features. It has a bunch of immersive apps, and a set of videos that are built for three-dimensional viewing.
While there are some first-generation bugs, the device offers an upgraded alternative to Meta’s top-of-the-line Quest Pro headset, which retails at less than half the price of a Vision Pro. Despite the price tag, which will make many consumers wince, the Vision Pro clearly sets a new bar in headset technology with its unique operating system, visionOS, which takes the user’s gaze as input.
Apple has broken new ground with this gaze-based input technology. So, in some sense, the Vision Pro offers AR, VR, and a little bit more. And that extra bit is what is making Apple push developers into calling the company’s device a spatial computer.
According to long-time mixed reality researcher Louis Rosenberg, spatial computing “is a great overarching term for AR, MR, and VR, along with other immersive experiences such as 3D movies and telepresence.”
On his Medium blog, Dr. Rosenberg recommends that companies accept the established language of the field.
“Spatial computing is a useful term, but so is augmented reality, mixed reality, and virtual reality, all three of which are part of our history and culture,” he concluded.