If only 10% of the world had enough power to run a cell phone, would mobile have changed the world in the way that it did?
It’s often said the future is already here — just not evenly distributed. That’s especially true in artificial intelligence (AI) and machine learning (ML). Many powerful AI/ML applications already exist in the wild, but many also require enormous computational power — often at scales available only to the largest companies or entire nation-states. Compute-heavy technologies are also hitting another roadblock: Moore’s law is plateauing, and the processing capacity of legacy chip architectures is running up against the limits of physics.
If major breakthroughs in silicon architecture efficiency don’t happen, AI will suffer an unevenly distributed future, and huge swaths of the population will miss out on the improvements AI could make to their lives.
The next evolutionary stage of technology depends on completing the transformation that will make silicon architecture as flexible, efficient and ultimately programmable as the software we know today. If we cannot take major steps to make ML easily accessible, we’ll lose immeasurable innovation to a world in which only a few companies control all the technology that matters. So what needs to change, how fast is it changing and what will that mean for the future of technology?
An inevitable democratization of AI: A boon for startups and smaller businesses
If you work at one of the industrial giants (including those “outside” of tech), congratulations — but many of the problems with current AI/ML computing capabilities I present here may not seem relevant.
For those of you working with smaller caches of resources, whether financial or talent-related, view the following predictions as the herald of a new era in which organizations of all sizes and balance sheets have access to the same tiers of powerful AI- and ML-powered software. Just as cell phones democratized internet access, we see a movement in the industry today to put AI in the hands of more and more people.
Of course, this democratization must be fueled by significant technological advancement that actually makes AI more accessible — good intentions are not enough, regardless of the good work done by companies like Intel and Google. Here are a few technological changes we’ll see that will make that possible.
From dumb chip to smart chip to “genius” chip
For a long time, raw performance was the metric of importance for processors, and their design reflected this. As software rose in ubiquity, processors needed to get smarter: more efficient and more specialized. Thus arose purpose-built processors like the GPU — “smart” chips, if you will.
Those purpose-built graphics processors, by happy coincidence, proved more useful than CPUs for deep learning workloads, and thus the GPU became one of the key players in modern AI and ML. Knowing this history, the next evolutionary step becomes obvious: If we can purpose-build hardware for graphics applications, why not for specific deep learning, AI and ML workloads?
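To make that happy coincidence concrete: the core workload of deep learning is dense linear algebra, which breaks into enormous numbers of independent multiply-accumulate operations, exactly the kind of parallel work graphics hardware was built for. A minimal NumPy sketch (the layer sizes here are illustrative assumptions, not figures from this article):

```python
import numpy as np

# Why GPUs suit deep learning: a fully connected layer's forward
# pass is one big matrix multiplication, and a matrix multiplication
# decomposes into independent multiply-accumulate operations that
# map naturally onto thousands of parallel GPU cores.
# (Layer sizes below are illustrative assumptions.)

batch_size, in_features, out_features = 64, 1024, 4096

x = np.random.randn(batch_size, in_features)    # input activations
w = np.random.randn(in_features, out_features)  # layer weights
b = np.random.randn(out_features)               # layer bias

# Each of the 64 * 4096 outputs is an independent 1024-element dot
# product: roughly 268 million multiply-adds, all parallelizable.
y = x @ w + b
print(y.shape)  # (64, 4096)
```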
There’s also a unique confluence of factors that makes the next few years pivotal for chipmaking and tech in general. First, Moore’s law (which predicts a doubling of transistors on integrated circuits every two years) is plateauing. Second, Dennard scaling (which held that as transistors shrank, power density stayed constant, so each generation ran faster without drawing more power) has ended. Together, the two used to mean that with every new generation of technology, chips doubled in density and increased in processing power while drawing the same amount of power. But we’ve now reached the scale of nanometers, meaning we’re up against the limitations of physics.
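For readers who want the arithmetic behind those two laws, the classic Dennard scaling rules can be written out directly. This is the standard textbook formulation, not something specific to this article: with each generation, linear dimensions and supply voltage shrink by a factor k, so per-transistor power falls by k² while power density stays flat.

```latex
% Classic Dennard scaling with scaling factor k > 1.
% Dimensions, voltage and capacitance shrink; frequency rises.
\begin{aligned}
L' = L/k, \qquad V' &= V/k, \qquad C' = C/k, \qquad f' = kf \\
P'_{\text{transistor}} = C'\,V'^{2} f' &= \frac{C V^{2} f}{k^{2}}
    = \frac{P_{\text{transistor}}}{k^{2}} \\
A' = A/k^{2} \;\;\Rightarrow\;\; \frac{P'}{A'} &= \frac{P}{A}
    \quad \text{(constant power density)}
\end{aligned}
```

The free lunch ended when supply voltage stopped scaling (leakage current dominates at very low voltages), which broke the 1/k² power term and, with it, the automatic performance-per-watt gains.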
The third factor, compounding the physical challenge, is that the computing demands of next-gen AI and ML applications are beyond what we could have imagined. Training neural networks to within even a fraction of human-level image recognition, for example, is surprisingly hard and takes huge amounts of processing power. The most intense applications of machine learning are things like natural language processing (NLP), recommender systems that weigh billions or trillions of possibilities, and super-high-resolution computer vision of the kind used in the medical and astronomical fields.
Even if we could have predicted we’d have to create and train algorithmic brains to learn how to speak human language or identify objects in deep space, we still could not have guessed just how much training — and therefore processing power — they might need to become truly useful and “intelligent” models.
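To put rough numbers on “how much processing power,” a common back-of-the-envelope heuristic from the scaling-law literature estimates training compute at about six floating-point operations per parameter per training token. A hedged sketch (the heuristic, the utilization figure and the example sizes are assumptions for illustration, not figures from this article):

```python
# Back-of-the-envelope training-compute estimate using the common
# ~6 * parameters * tokens FLOPs heuristic from the scaling-law
# literature. All numbers below are illustrative assumptions.

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total FLOPs to train a dense model once."""
    return 6.0 * n_parameters * n_tokens

def training_days(total_flops: float, flops_per_second: float,
                  utilization: float = 0.3) -> float:
    """Wall-clock days on hardware with the given peak throughput,
    assuming (as a typical rough figure) 30% utilization of peak."""
    seconds = total_flops / (flops_per_second * utilization)
    return seconds / 86_400

# Example: a 1-billion-parameter model trained on 20 billion tokens,
# on a single accelerator with an assumed 100 TFLOP/s peak.
flops = training_flops(1e9, 20e9)  # 1.2e20 FLOPs
print(f"{flops:.2e} FLOPs")
print(f"{training_days(flops, 100e12):.1f} days on one accelerator")
```

That example works out to about a month and a half on one accelerator; scaling the model and the data by 10× each multiplies the figure by 100, which is the arithmetic that locks smaller organizations out.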
Of course, many organizations are running these sorts of complex ML applications today. But they are usually business or scientific leaders with access to huge amounts of raw computing power and the talent to understand and deploy it. All but the largest enterprises are locked out of the upper tiers of ML and AI.
That’s why the next generation of smart chips — call them “genius” chips — will be about efficiency and specialization. Chip architecture will be optimized for the software running on it, and it will run altogether more efficiently. When using high-powered AI no longer takes a whole server farm and becomes accessible to a much larger percentage of the industry, the ideal conditions for widespread disruption and innovation become real. Democratizing expensive, resource-intensive AI goes hand-in-hand with these soon-to-be-seen advances in chip architecture and software-centered hardware design.
A renewed focus on future-proofing innovation
The nature of AI creates a special challenge for the creators and users of AI hardware. The scale of the change is huge: We’re living through the leap from humans writing code to “software 2.0,” where engineers train machine learning programs that eventually “run themselves.” The rate of change is also unprecedented: ML models can be obsolete in months or even weeks, and the very methods through which training happens are in constant evolution.
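One way to see the “software 2.0” shift is to contrast a hand-written rule with a learned one. A minimal sketch using scikit-learn (the task, features and data are invented purely for illustration):

```python
# Software 1.0 vs. "software 2.0": the same toy task, spam-flagging
# by message length and exclamation count, solved two ways.
# (Task, features and data are invented purely for illustration.)
import numpy as np
from sklearn.linear_model import LogisticRegression

# Software 1.0: a human writes the decision logic explicitly.
def is_spam_v1(length: int, exclamations: int) -> bool:
    return length < 20 and exclamations > 2

# Software 2.0: a human curates examples; the program's behavior
# is learned from data instead of being hand-coded.
X = np.array([[12, 4], [90, 0], [8, 5], [150, 1], [15, 3], [200, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = spam, 0 = not spam

model = LogisticRegression().fit(X, y)
print(model.predict([[10, 4], [120, 0]]))  # learned, not hand-written
```

The learned version can be retrained as the data shifts, which is exactly why the models, and the hardware beneath them, go stale so fast.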
But creating new AI hardware products still requires design, prototyping, calibration, troubleshooting, production and distribution; it can take two years to get from concept to product-in-hand. Software has, of course, always outpaced hardware development, but now the differential in velocity is irreconcilable. We need to be more clever about the hardware we create for a future we increasingly cannot predict.
In fact, the generational way we think about technology is beginning to break down. When it comes to ML and AI, hardware must be built with the expectation that much of what we know today will be obsolete by the time we have the finished product. Flexibility and customization will be the key attributes of successful hardware in the age of AI, and I believe this will be a further win for the entire market.
Instead of sinking resources into the model du jour or a specific algorithm, companies looking to take advantage of these technologies will have more options for processing stacks that can evolve and change as the demands of ML and AI models evolve and change.
This will allow companies of all sizes and levels of AI savvy to stay creative and competitive for longer and prevent the stagnation that can occur when software is limited by hardware — all leading to more interesting and unexpected AI applications for more organizations.
Widespread adoption of real AI and ML technologies
I’ll be the first to admit to tech’s fascination with shiny objects. There was a day when big data was the solution to everything and IoT was to be the world’s savior. AI has been through the hype cycle in the same way (arguably multiple times). Today, you’d be hard pressed to find a tech company that doesn’t purport to use AI in some way, but chances are they are doing something very rudimentary that’s more akin to advanced analytics.
It’s my firm belief that the AI revolution we’ve all been so excited about simply has not happened yet. In the next two to three years, however, as the hardware that enables “real” AI power makes its way into more and more hands, it will. As for predicting the change and disruption that will come from widespread access to the upper echelons of powerful ML and AI: there are few ways to make confident predictions, but that is exactly the point!
Much as cellphones put enormous power in the hands of regular people everywhere, with (for the most part) no technical or financial barriers to entry, so will the coming wave of software-defined hardware that is flexible, customizable and future-proof. The possibilities are truly endless, and it will mark an important turning point in technology. The ripple effects of AI democratization and commoditization will not stop with technology companies; even more fields stand to be blown open as advanced, high-powered AI becomes accessible and affordable.
Much of the hype around AI — all the disruption it was supposed to bring and the leaps it was supposed to fuel — will begin in earnest in the next few years. The technology that will power it is being built as we speak, and it will soon be in the hands of the many people, in the many industries, who will use their newfound access as a springboard to some truly amazing advances. We’re especially excited to be a part of this future, and we look forward to all the progress it will bring.