ZDNET explores immersive computing and a related but different technology called digital twins. Immersive computing has many different names and acronyms, including VR (virtual reality), AR (augmented reality), MR (mixed reality), and XR (extended reality). Let’s break down what it all means.
What are XR and spatial computing?
When Apple introduced the Vision Pro headset, it bundled the different acronyms into one and popularized the term “spatial computing.” Others did much the same by wrapping VR, AR, and MR into a single acronym, XR.
We’ll also use the term XR in this guide to refer to the whole brick-on-your-face class of products and experiences, and we’ll deconstruct the variations acronym by acronym.
In short, XR is an umbrella term for immersive experiences, while spatial computing includes XR, AI, sensor technology, and IoT capabilities, allowing digital content to more fully interact with and understand physical space.
According to Statista, the entire XR market is expected to be worth $100 billion by 2026. That might seem a bit over-optimistic, since there is still pushback from users when it comes to buying and using the technology. Few people like the idea of strapping something that weighs as much as an iPad to their face for hours at a time.
Also: What is spatial computing and how does it work?
However, there is truly something special in the technology. If you try on a $3,500 Apple Vision Pro or the less expensive $500 Meta Quest 3, you will experience something altogether different from what you feel when sitting in front of your laptop. There is a sense of immersion that can change perspective and understanding. Some of the experiences are cheesy, yes, but some are breathtaking.
In just about half a year, the Apple and Meta products have moved XR up the “Diffusion of Innovations” curve. Apple’s offering, while currently of interest mostly to early adopters, developers, and product reviewers, has telegraphed that XR is relevant and may someday be mainstream. Though sales figures for the Vision Pro haven’t been disclosed, Apple has embraced the technology, which is effectively an endorsement of the trend.
On the other hand, the amazingly functional Quest 3 sits at a price point that’s moving XR into the mainstream, at least for gamers and those seeking unique entertainment experiences. While we don’t have exact sales figures for the Quest 3 either, Meta has announced that its retention numbers are greater than for any other headset. AR news site UploadVR estimates that more than a million Quest 3 devices have been sold, based on leaderboard numbers for First Encounters, the demo provided to all purchasers.
There is undoubtedly a market here, both in consumer entertainment and in industry. Some applications already make it worth wearing the heavy headset. Others will explode once the cost, discomfort, size, and weight of the headsets come down.
What is virtual reality?
VR involves using technology to provide an immersive experience, usually by replacing everything within the user’s field of view with simulated graphics. The imagery updates as the user turns or tilts their head, so the view stays consistent in every direction. Essentially, you get dropped into a simulated environment where you can look all around, and what you see reflects that environment.
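Under the hood, the core trick is simple: each frame, the renderer reads the headset’s orientation and applies the inverse rotation to the scene, so the virtual world appears to hold still while your head moves. Here’s a minimal Python sketch of that idea (illustrative only; this is not any vendor’s actual rendering pipeline, and the names are ours):

```python
import numpy as np

def yaw_rotation(theta):
    """3x3 rotation matrix for a head turn of `theta` radians about the vertical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,  0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

def point_in_view(head_yaw, world_point):
    """Re-express a fixed world-space point in the viewer's frame.

    Applying the *inverse* of the head rotation (the transpose, for pure
    rotations) makes the virtual world hold still: as the head turns one
    way, the scene slides the other way across the display, just as a
    real room would.
    """
    return yaw_rotation(head_yaw).T @ world_point

# A virtual object 2 meters straight ahead (-Z is "forward" here).
obj = np.array([0.0, 0.0, -2.0])
print(point_in_view(0.0, obj))             # dead ahead
print(point_in_view(np.radians(45), obj))  # shifted off-center after a 45-degree turn
```

Real headsets do this with full six-degree-of-freedom poses at 90 or more frames per second, but the principle is the same.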
In the early days, AR and VR were deemed very different. That was a time before digital cameras were pervasive, so while crude forms of VR were demonstrated, AR was still mostly a science fiction concept.
Also: Virtual reality has found a new role: Teaching doctors to deal with patients
But that’s no longer the case. Today’s headsets can perform all the functions of VR, AR, mixed reality, and more. The terms now describe variations of a single class of experience: VR is sometimes used as a catch-all for the entire XR milieu, but strictly speaking it refers to experiences where the outside world is completely replaced by a simulated environment. The Disney+ app on the Vision Pro, for example, can make it look like you’re watching a movie from within the walls of a grand theater.
VR environments deliver fully immersive experiences, whether for gameplay, entertainment, or simulation in a digital twin, such as a factory. They’re also good for product development because they enable developers to see and manipulate a product design in a simulated environment well before the product reaches production.
What is augmented reality?
Unlike VR, which seeks to immerse the user in a completely virtual environment, AR enhances the real world using digitally produced perceptual overlays. Essentially, you see images or information in your space that are not physically there.
In Marvel’s What If…? immersive game, produced exclusively for Apple Vision Pro, an eight-foot genie appears in the room with you as soon as the game begins. He gives instructions for the rest of the game, which switches back and forth between AR (with the genie) and VR (in the immersive Marvel environments where you fight baddies).
Also: If you have an Apple Vision Pro, Marvel’s ‘What If…?’ is a must-download – and it’s free
AR was used for visualization even before headsets became popular. In 2017, IKEA introduced an iPad app that lets you preview what a piece of furniture would look like in your home. Let’s also not forget the worldwide phenomenon that is Pokemon Go. The Niantic game combines the GPS function and camera on mobile phones to point players toward little animated creatures in locations around the real world. The Pokemon characters can be viewed on the phone’s display and “caught” by tapping the screen. Introduced in 2016, the game is still going strong today with an estimated 5 million users.
AR, when implemented inside a head-tracking headset, allows the user to look around and see simulated objects in a physical space. Both Apple Vision Pro and Meta Quest 3 have front-facing cameras that capture live video of the user’s environment and project it onto the internal displays in real time (and with no perceptible lag).
Plus, with AR, you can see what’s around you, which means you’re less likely to fall over or bang into things and hurt yourself, as you might in VR.
What is mixed reality?
MR refers to a kind of augmented reality, in which graphical visualizations are projected to appear as though they’re interacting with the real world. Think of MR as AR with extra features.
Fundamentally, MR anchors projected graphics to real-world locations. For example, you could put up a virtual poster on a real wall. The poster would remain exactly where it was placed, as if it really were mounted on that spot on the wall, even as you walk around your room. Car models can be placed on tables and virtual racing cars can run along a counter, “falling off” when they reach the edge.
Also: The day reality became unbearable: A peek beyond Apple’s AR/VR headset
This kind of virtual anchoring takes a lot more processing than simply overlaying a graphic. The underlying software needs to be able to recognize objects, have some idea about their characteristics, and be able to identify them regardless of the angle at which they are viewed.
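To make the anchoring mechanics concrete, here’s a minimal sketch using 4x4 homogeneous transforms (the function and variable names are illustrative, not any headset vendor’s API). The anchor’s pose is stored once in world coordinates; every frame, the renderer composes it with the inverse of the current headset pose, so the poster stays glued to the wall no matter where the wearer walks:

```python
import numpy as np

def pose(rotation, position):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a position."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

# The anchor is created once, in *world* coordinates -- say, where plane
# detection found a wall. It never changes as the user moves.
poster_in_world = pose(np.eye(3), np.array([0.0, 1.5, -3.0]))

def poster_in_view(headset_in_world):
    """Re-express the fixed world anchor in the headset's current frame.

    Each frame: view pose = inverse(headset pose) @ anchor pose. Because
    the anchor is constant in world space, the rendered poster appears
    locked to the wall while the headset pose changes underneath it.
    """
    return np.linalg.inv(headset_in_world) @ poster_in_world

start  = pose(np.eye(3), np.zeros(3))                # user at the origin
walked = pose(np.eye(3), np.array([1.0, 0.0, 0.0]))  # user steps 1 m sideways

print(poster_in_view(start)[:3, 3])   # [ 0.   1.5 -3. ] relative to the viewer
print(poster_in_view(walked)[:3, 3])  # [-1.   1.5 -3. ] -- the poster stayed put
```

The hard part in practice isn’t this math; it’s the computer vision that finds the wall and keeps tracking it as lighting and viewing angles change.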
What gear is necessary to run XR?
The main enabling technology for XR is the head-mounted display. For fully immersive experiences, this contains two very small high-resolution displays, one for each eye. It also typically contains head-tracking technology that tells the software where you’re looking, so it can redraw the scene appropriately.
For AR, high-resolution cameras are embedded in the headset to provide real-world pass-through. There’s also usually a spatial audio capability, although earbuds or headphones are often used for higher-fidelity sound.
Also: Ready for takeoff: Airbus’s sweeping mixed reality redesign
By far the most popular of these devices is the $500 Quest 3 from Meta, which ZDNET declared its 2023 Product of the Year. The Quest 3 also comes with a set of hand controllers that let the user control actions within the XR experience.
Controllers are common XR hardware components, but some devices, including Apple Vision Pro and Quest 3 (in hand gesture mode), can function using the wearer’s hands as pointing devices.
Other vendors besides Apple and Meta have introduced their own head-mounted displays. Meta now licenses its Horizon OS to other vendors, so we can expect more products to emerge in this category.
While the hardware used by professionals and enterprise customers differs from consumer devices today, those distinctions will blur as the technology gets cheaper.
Professional-level products include Apple Vision Pro, Microsoft HoloLens, and Sony’s mixed-reality head-mounted display. Each adds features that reflect the professional and industrial uses of the device.
For example, Apple Vision Pro uses gaze-tracking and hand gestures rather than controllers. This feature has been adopted by doctors who use Vision Pro during surgery. The Sony device offers color-accurate lenses and what the company calls “split rendering,” which distributes the workload between computers (possibly also in the cloud) and the head-mounted display, enabling the system to create real-time renders of enormously complex 3D models.
Beyond the head-mounted display and controller, and the eye- and hand-tracking we discussed earlier, purpose-built input and output devices have been created for specialized applications. Here are a few examples:
- Haptic gloves: These provide users with a simulated sense of touch, which may provide more realistic sensations during training scenarios and gaming applications.
- Full-body haptic suit: Full-body suits shroud the entire body, providing haptic feedback, motion capture, and even biometrics. Together, these let an XR app deliver sensation to the user and gather telemetry from the user at the same time.
- Omni-directional treadmill: These treadmills allow users to walk in any direction and reflect that movement within the XR simulation. An experimental project at Disney is extending this concept to allow more than one person to interact on the same floor surface.
Expect to see innovation both in the headset design (primarily in terms of size and weight) as well as in special-purpose input and output accessories.
What is a digital twin?
A digital twin is a virtual replica of a physical environment, object, or system that contains not just physical characteristics, but also up-to-date telemetry that describes the current condition and operation of the real-world twin.
Imagine you and I buy the same computer, configured identically, on the same day. On the first boot-up, the contents of our storage drives are the same. We install identical backup software that syncs the contents of our drives to our respective cloud storage accounts. As time passes, we do our individual jobs on our computers, install different applications, and fill our hard drives with our own projects. While our computers and cloud backups were identical on day one, they’ve diverged over time. Each machine has developed its own unique identity.
Also: How a digital twin for intense weather could help scientists mitigate climate change
When applied to complex systems, the twin concept is powerful. It allows an object in the real world, whether that’s a spacecraft, a machine tool, or an entire factory, to operate alongside a virtual twin that’s updated in real time, an accurate mirror of its physical counterpart.
Each digital twin is unique to its original source and includes information that describes the state of that unique system. Digital twins, therefore, are more than just models of the original objects being twinned. They absorb a constant stream of telemetry data about their physical counterparts, creating virtual replicas that properly represent the condition of their originals.
David McKee, ambassador and chair of the Digital Twin Consortium and a Royal Academy of Engineering Enterprise Fellow, explained to ZDNET in an email: “In the simplest of terms, digital twins specifically should have an intelligent element, such as simulation/AI, to allow forecasting the future and enable decisions to be made.”
The digital twin system also includes processes for triggering those decisions, which are often automated, although there are many examples where humans are kept in the loop of business processes, said McKee, who has a PhD in computer science.
Also: AI will unleash the next level of human potential. Here’s how it happens – and when
By combining a software representation (whether in computer-aided design or some other computer model) of an original, along with a constant data feed providing wide-ranging telemetry that keeps the model up-to-date about the condition and status of the physical source, it’s possible to use the twin to represent and analyze aspects of the original. We can also predict future behavior and condition of the physical source by applying simulated tests and current data against the virtual twin.
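As a deliberately toy illustration of that pattern, here’s a Python sketch of a digital twin of a factory pump: a model, a telemetry feed that keeps it current, and the “intelligent element” McKee describes, which here is nothing fancier than a linear trend forecast. Every name, number, and threshold below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin of a factory pump: a model plus live telemetry.

    Illustrative only -- a real twin would wire this to sensor streams
    (MQTT, OPC UA, etc.) and far richer physics or ML models.
    """
    max_safe_temp_c: float = 90.0
    history: list = field(default_factory=list)

    def ingest(self, temp_c: float) -> None:
        """Telemetry feed: each reading keeps the twin in sync with the asset."""
        self.history.append(temp_c)

    def forecast(self, steps_ahead: int) -> float:
        """Naive linear extrapolation of the temperature trend."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        trend = self.history[-1] - self.history[-2]
        return self.history[-1] + trend * steps_ahead

    def maintenance_due(self, steps_ahead: int = 10) -> bool:
        """The 'intelligent element': decide *before* the real pump overheats."""
        return self.forecast(steps_ahead) > self.max_safe_temp_c

twin = PumpTwin()
for reading in (70.0, 72.5, 75.0, 77.5):   # simulated sensor stream
    twin.ingest(reading)

print(twin.forecast(10))        # 77.5 + 2.5 * 10 = 102.5 C
print(twin.maintenance_due())   # True: schedule maintenance now
```

The structure, not the sophistication, is the point: model plus telemetry plus forecast plus decision is the loop that makes a twin more than a static 3D model.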
Which businesses can gain the most value from XR and digital twins?
Let’s take a look at several business segments where XR and digital twins offer powerful solutions.
Manufacturing
The ability to see a virtual representation of a product across its entire lifecycle, from design to production, using the immersive visualization capabilities of XR can speed up the design process, help find flaws, and reduce the need for expensive prototypes, among other benefits.
Extended onto the factory floor, digital twins can help teams maintain and operate production lines, get ahead of potential failures, reduce maintenance costs, and, with XR, visualize all aspects of production. The result can be enormous cost savings, especially from fewer failures and faster time to market.
Medical
According to the National Center for Biotechnology Information, digital twins in healthcare offer “a key fusion approach of future medicine,” bringing the advantages of precision diagnosis and personalized treatment into reality.
Digital twin and XR applications in healthcare range from personalized medicine (where disease prevention and treatment are based on actual genetic, environmental, and lifestyle characteristics), to surgical simulations (where doctors can train and explore whether a procedure will be safe for a patient), to virtual models (to predict and evaluate the results of clinical trials).
Automotive
If there’s one industry where AI, XR, and digital twins converge intensely, it has to be the automobile industry. XR visualization is used to augment many aspects of product design, including replacing the decades-old practice of modeling body shapes in clay. AR and MR are used to display heads-up data that give drivers feedback without requiring them to take their eyes off the road.
Add 5G communications to the mix and we get closer to self-driving cars. The AI on these vehicles, along with AI applications at the edge and in the cloud, requires an up-to-the-microsecond digital twin representation of road conditions, traffic, and any other information that can keep the car safe on the road. Without these digital twins, self-driving cars would not be possible.
Engineering
3D modeling and CAD (computer-aided design) tools such as Fusion 360 have long been staples of product design and visualization. CAD tools allow for the simulation of joints in motion as well as full mechanical, electrical, and electronic systems. XR allows product models to be visualized in situ, moving the model off the flat display and into a real-world setting.
Here, product design models can be migrated directly into their digital twins. Once the products move into production and then deployment, they can be managed, maintained, and simulated, and their behavior can be predicted — all using digital twin technology in real time.
Programming
To programmers, digital twins are nothing new. Virtualization (where software simulates a hardware-based computer) has been part of the IT stack for years. Programmers use it to stage and test products before deployment, to run giant data centers, and for everything in between. They also, of course, write the code that runs all the other digital twins, using replica models to test and simulate their products.
Programmers not only write code for XR applications but also put on XR headsets when they need a quiet virtual environment free of distraction (a welcome option amid the chaos of working from home and other tumultuous environments). XR headsets can also present multiple enormous virtual screens, and that extra screen real estate can boost productivity and programming clarity without the need to lug around physical monitors.
Media and entertainment
As I’ve discussed before, XR headsets can provide an astounding (if slightly uncomfortable) media viewing environment. They also allow people with small spaces, or who are traveling, to experience media as if they had their own enormous home theater.
Full-immersion games are also popular with consumer XR devices. The Meta app store is filled with XR games and experiences. While digital twins aren’t commonplace in the media and gaming world, games such as Pokemon Go are, essentially, digital twins where the entire physical world is modeled to track the placement of the virtual creatures.
We can also expect game designers to use real-world data from digital twins to provide a deeper immersive experience for players in simulated worlds. Microsoft Flight Simulator, for example, uses real-time weather data to simulate atmospheric conditions anywhere in the world. The FIFA and Madden NFL series from EA have modes that incorporate real-time sports and player stats into the game.
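The underlying pattern in those games is straightforward: poll a live data source on a slow interval and fold the result into the per-frame simulation state. Here’s a rough sketch; the weather fields and the random stand-in for a real API call are hypothetical placeholders, not any game’s actual code:

```python
import random  # stands in for a live weather API in this sketch
import time

def fetch_live_weather():
    """Stand-in for a real weather-service call; the data shape is hypothetical."""
    return {"wind_speed_kts": random.uniform(0, 30),
            "visibility_km": random.uniform(1, 10)}

def run_sim(frames=5, poll_interval_s=60.0):
    """Fold slowly-changing real-world data into a fast simulation loop.

    Live feeds are rate-limited and change slowly, so we re-poll on an
    interval rather than on every frame of a fast render loop.
    """
    weather = fetch_live_weather()
    last_poll = time.monotonic()
    for frame in range(frames):
        if time.monotonic() - last_poll >= poll_interval_s:
            weather = fetch_live_weather()
            last_poll = time.monotonic()
        # The simulation consumes the cached real-world state each frame.
        print(f"frame {frame}: wind {weather['wind_speed_kts']:.0f} kts, "
              f"visibility {weather['visibility_km']:.1f} km")

run_sim()
```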
How disruptive will XR and digital twins be to businesses?
After generative AI’s sudden and overwhelming disruption of businesses everywhere, managers are understandably a little wary about the potential disruption (even when there are big benefits) from other new technologies.
However, XR and digital twins aren’t expected to follow the same disruption pattern as AI. XR still has usability limitations (the headsets are heavy and fairly uncomfortable), so solutions are likely to roll out slowly, mostly to early adopters and to businesses where there are clear benefits to incorporating these applications.
Also: Want to clone yourself? Make a personal AI avatar – here’s how
Eventually, once the technology can be deployed in devices that look and feel like ordinary glasses, VR and AR may prove considerably disruptive, perhaps reshaping the design of laptops, TVs, external monitors, and home entertainment centers. In the very long term, one fairly dystopian scenario is described in this story.
As for digital twins, disruption may come in the form of slow rip-and-replace scenarios, where older gear is retired in favor of new gear that can provide real-time telemetry data to the twin. Partly because of their overall complexity, though, these deployments will be less disruptive than ultimately helpful, delivering the many benefits we’ve discussed in this article.
The time is now
Both XR and digital twin technologies have been around for decades, but it’s only been in the past few years that they have hit an inflection point where they’re practical, functional, and heading into mainstream use.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.