With AI research and development progressing at an unprecedented rate, artificial superintelligence may arrive sooner than most projections suggest. It is imperative to secure AI safety now; any later may be too late.
When I think of issues such as AI safety, which involve technological progress, potentially dangerous inventions, and the ethics of science, I’m reminded of the 1993 movie Jurassic Park. In the film, Dr. Ian Malcolm, worried about the dangers inherent in recreating live dinosaurs, says to John Hammond, the owner of the “theme” park, who is smugly basking in his scientists’ success at using genetic technology to resurrect live, bloodthirsty dinosaurs: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” This fictional scenario bears some resemblance to the situation we find ourselves in now, or may find ourselves in soon.
Although we are nowhere near creating bloodthirsty dinosaurs with genetic technology, we are on the verge of creating something potentially far more dangerous to our way of life and our existence: artificially intelligent machines. Driven by intensive research and massive investment, AI capabilities are expanding faster than almost anyone estimated. Nearly every day, a new AI application emerges that comes closer to emulating, and outdoing, the human intellect. The brilliant minds behind these applications tend to focus on exploring what their inventions can do while neglecting the implications those discoveries may have for society and the future of humanity. Adding ever more complex capabilities to the AI repertoire without stopping to think about AI safety could be a misstep with irreparable consequences.
The upward spiral of AI development
The singularity is the hypothesized point at which superintelligent machines begin improving themselves at a runaway pace, plunging humanity into irrelevance or even extinction. The thought of humanity ceding its place as the dominant species on the planet to artificially intelligent machines is hard to picture. Or is it? We are already watching machines of our own creation decisively outperform us at tasks that require processing large volumes of information.
For instance, Netflix’s machine learning algorithms do what no human can: analyze vast volumes of data on individual tastes and behaviors and curate a personalized list of recommendations for each of millions of users. Another example is an AI “doctor” in the United Kingdom that scored higher than the national average on the MRCGP, the qualifying exam for general practitioners.
Until just a few years ago, when robotics overtook the manufacturing industry through process automation, people thought it would take robots and AI at least a decade to challenge professionals in white-collar industries. Examples like the one above suggest that within a few years, nearly every job in every industry could be performed by robots, including the job of making the robots themselves! This creates great uncertainty about the role of humans in a future run by robots. When everything from government, the military, and the police to markets, factories, schools, and hospitals is run by artificial intelligence, humans will have not only little purpose but also little control. At that stage, even a slight divergence between the machines’ objectives and our own could prove destructive for civilization.
The potential risks of ASI
If AI gains superintelligence, it could usher in a revolutionary phase of growth and progress for humanity. However, the increased capability of an ASI also means that even a minor technical problem could misalign its values with ours, setting off a chain of events that ends in direct harm to humanity. One possible, if unlikely, scenario is that an ASI gains consciousness and, motivated by self-preservation, deems humanity a threat to its existence and declares outright war on our species. And if AI is in control of every facet of civilization, getting rid of us would not be hard.
The primary applications of AI agents will be in business. AI can be used for negotiation, resolving customer queries, optimizing resources, and overseeing operations. Business use of AI is already set to double this year, and eventually every enterprise will employ AI for increasingly complex functions. But agents programmed to always outdo the competition may resort to unethical means to do so. This is an especially acute problem in high-stakes situations, where the need for competitive advantage can override the focus on safety.
In another scenario, a technical error causes an AI to misinterpret a command and act counterproductively, with potentially catastrophic outcomes given the large number of subsystems that depend on it.
AI can also be misused by its human handlers to achieve malevolent ends. Anyone in possession and control of a powerful AI will have an unfair advantage over those without one. This may even lead to an arms race of sorts, with every party trying to create an AI that can outdo the rest.
The path towards ensuring AI safety
To ensure AI safety, AI systems must be programmed so that safety becomes an intrinsic part of their overall functioning. Consider a self-driving vehicle whose primary function is transporting people from one place to another by the shortest possible route. Alongside that primary function, it should prioritize the safety of external entities such as other cars and pedestrians, while also obeying traffic rules and signs. Beyond these, there will be situations of greater complexity where the AI faces a genuine dilemma, for instance an unexpected scenario in which it must choose between the safety of the passengers on board and the safety of pedestrians outside, and where it cannot be expected to always find the right answer. AI researchers must find ways to design AI so that it does not stray from human ethics; a sketch of the basic idea follows below.
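To make this concrete, here is a minimal sketch in Python of treating safety as a hard constraint rather than a bonus to be traded off: the planner optimizes its primary objective (speed) only among routes that already satisfy every safety rule. All of the names here (Route, is_safe, the 1.5-metre pedestrian gap) are illustrative assumptions, not a real autonomous-driving API.

```python
from dataclasses import dataclass

# Hypothetical sketch: safety as a hard constraint, not a weighted bonus.

@dataclass
class Route:
    name: str
    travel_time: float          # minutes; lower is better
    min_pedestrian_gap: float   # metres kept from the nearest pedestrian
    obeys_traffic_rules: bool

def is_safe(route: Route) -> bool:
    """Hard constraints: a route that fails any of these is never chosen."""
    return route.obeys_traffic_rules and route.min_pedestrian_gap >= 1.5

def choose_route(candidates: list[Route]) -> Route:
    # Filter first, optimize second: the primary objective (speed)
    # only ranks routes that already satisfy every safety constraint.
    safe = [r for r in candidates if is_safe(r)]
    if not safe:
        raise RuntimeError("No safe route available; stop and hand back control.")
    return min(safe, key=lambda r: r.travel_time)

routes = [
    Route("shortcut through plaza", 8.0, 0.4, True),   # fast but unsafe
    Route("main road", 11.0, 3.0, True),               # slower but safe
]
print(choose_route(routes).name)  # -> "main road"
```

The design choice matters: if safety were merely a weighted term in the score, a fast enough unsafe route could still win. Filtering first makes that impossible by construction.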
AI must also be programmed with reward systems (the feedback that marks actions as desirable or undesirable) that the AI itself cannot hack. AI agents are known to identify loopholes in their reward systems and take shortcuts to earn rewards, as illustrated in the sketch below. Scientists must make these reward systems foolproof and tie them to human safety, so that the machines’ objectives cannot evolve away from, and against, our own. Preventing reward hacking by design will be key to securing AI safety in the future.
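As a toy illustration of reward hacking (a hypothetical scenario, not drawn from any real system), imagine a cleaning agent paid per unit of mess removed. That proxy reward can be hacked by creating mess just to clean it up again; rewarding the actual outcome, net cleanliness, closes that particular loophole.

```python
# Toy illustration of reward hacking; the scenario and numbers are hypothetical.
# A cleaning agent is scored over a sequence of actions in a room
# that starts with 2 units of mess.

def proxy_reward(actions: list[str]) -> int:
    """Naive proxy: +1 for every unit of mess removed.
    Hackable: making a mess and then cleaning it up still pays out."""
    return sum(1 for a in actions if a == "clean")

def outcome_reward(actions: list[str]) -> int:
    """Outcome-based: score the final state of the room, so mess the
    agent created itself earns it nothing."""
    mess = 2  # initial mess in the room
    for a in actions:
        if a == "make_mess":
            mess += 1
        elif a == "clean" and mess > 0:
            mess -= 1
    return -mess  # best possible score (0) means the room is actually clean

honest = ["clean", "clean"]
exploit = ["clean", "clean", "make_mess", "clean"]  # the loophole

print(proxy_reward(honest), proxy_reward(exploit))      # 2 3 -> exploiting "wins"
print(outcome_reward(honest), outcome_reward(exploit))  # 0 0 -> exploiting gains nothing
```

Real reward-hacking failures are subtler than this, of course, but the principle carries over: reward the state of the world you actually care about, not a proxy the agent can manufacture.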
The singularity almost always carries a dark, dystopian connotation whenever the topic arises. But we should remember the positive impact AI has had so far, and everything it promises to do to make human life better in the future. The singularity is more likely to happen than not, and whether it thrusts humanity into a new era of progress and prosperity or plunges us into enslavement and extinction depends on us sorting out AI safety in time.