“This is really cool and also super terrifying in an existential sort of way” has become a pretty typical response to new consumer technology in the past few years. The reaction itself is not new, historically speaking, but I think its recent frequency is notable. The two biggest triggers of this response are generative artificial intelligence (AI) and virtual reality (VR).
Generative AI, while producing many impressive and often humorous results with its text-, image-, audio-, and video-generating capabilities, has also made deceptive and nefarious behavior easier and more convincing than ever. Conversations about the probability that AI will kill all of us, known as P(doom), are fairly common among employees in the AI industry, with many predicting at least double-digit odds, per The New York Times.
VR technology, while fascinating in its ability to replicate reality, still produces the largely isolating experience inherent in strapping a computer to one’s face. The Apple Vision Pro, one of the most recent consumer VR headsets, exemplifies this problem. While the Vision Pro is impressive in its ability to mix reality and virtual space, it integrates its social features poorly, clunkily displaying the wearer’s eyes on the device’s outward-facing display and rendering FaceTime avatars that do not look quite right. These features are not adequate to mitigate the inherently isolating experience of using VR, and yet Apple charges far more for its headset than its competitors do. Selling headsets at a fraction of the price, Meta has invested heavily in building the Metaverse on its headsets, a virtual social space whose environments remain largely sterile and uninspired. While VR is not looking as dystopian as AI at the moment, what it lacks in dystopia it makes up for in inadequate social features.
If the two biggest areas in technology right now are either disappointing or scary, what do we have to be excited about or unafraid of? Well, this past week I discovered a relatively new and somewhat obscure piece of technology that actually wowed me: photogrammetry. Photogrammetry, for the purposes of this article, is the use of photos to create renderings of 3D objects that are accurate in shape and scale. It works through a process I will call “scanning,” in which a camera is moved around a subject to capture it from many angles; after scanning, the program renders a 3D object. While this is not necessarily cutting-edge technology, the fact that it has become accessible on smartphones is a fairly new and exciting phenomenon.
My first experiment with photogrammetry was scanning my room. The result, while blocky, was a detailed rendering of my room that I could navigate through from every angle. It made me think about the preservative potential of photogrammetry and all the spaces that I have temporarily moved through, like previous dorm rooms, that I can no longer access. I have often tried to document such spaces through photos with underwhelming results. My blocky, poorly rendered depiction of my room is still much more illuminating and immersive than any room photography I have taken.
While the most common use for photogrammetry is rendering rooms, buildings and objects, it can also scan people, which was an unexpected joy. Scanning my friends became a brief hobby of sorts. “Hey, can I scan you?” has become a fairly common question of mine, usually met with much intrigue. My friends would sit or stand still as I awkwardly traced a phone around their bodies for a few minutes. The renderings are both awe-inspiring and horrific. In this context, photogrammetry produces bad taxidermy. Its errors are hilariously glaring: While some renderings I have of people are quite rich and truthful, many have distorted and weirdly angular sections. All of these 3D renderings can then be viewed in real environments, too, creating wild images and videos in which people confront their photogrammetry friends or their photogrammetry selves.
This has really been one of the most fun weeks I have had with recent technology. It is perhaps no coincidence that the app I used is owned by Niantic Labs, creator of Pokémon Go, a Google April Fools’ joke turned legitimate product that was an incredibly satisfying use of GPS and AR technology to recreate Pokémon-catching. This is a bold claim, but I legitimately think that was the last time millions of people collectively had fun engaging with new technology without feeling a lot of existential dread about it. I could imagine photogrammetry making some sort of cultural impact soon, maybe in a gaming context.
Like any piece of technology, though, photogrammetry is not without its problems. One issue worth considering is consent. While successfully scanning someone without their consent is difficult because it requires stillness, scanning their belongings and their spaces is not. This problem is taken to its outer edges in Nathan Fielder’s 2022 documentary series “The Rehearsal,” in which Fielder deceives one of his subjects into allowing his crew to scan and map out the subject’s apartment using photogrammetry. Fielder then meticulously recreates the apartment on a soundstage so he can practice a future conversation with the subject in their home, with an actor standing in for the subject in rehearsals. Though recreating someone’s apartment is seriously invasive behavior, the process is so outlandishly laborious that it sits at the fringes of what most people are capable of doing. While AI makes unethical behavior easier, photogrammetry in its present form really does not.
I will admit part of the aim of this piece is to encourage people to try a relatively new and accessible piece of software without feeling like it will one day wreck the earth. Photogrammetry is what I think people want out of new consumer tech: It is mind-blowing, unintentionally funny, at times frightening, but rarely terrifying.