For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. According to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group, 65% of organizations that have invested in AI in recent years have yet to see any tangible gains from those investments. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC.
A major factor behind these struggles is the high algorithmic complexity of deep learning models: the computational difficulty of building and running these models in production. Faced with prolonged development cycles, high computing costs, unsatisfying inference performance, and other challenges, developers often find themselves stuck in the development stage of AI adoption, attempting to perfect deep learning models through manual trial and error, nowhere near the production stage. Alternatively, data scientists fall back on copies of existing models, which ultimately prove to be poor fits for their unique business problems.
If human-developed algorithms inevitably run up against barriers of cost, time, manpower, and business fit, how can the AI industry break those barriers? The answer lies in algorithms that are designed by algorithms – a phenomenon that has been confined to academia to date but which will open up groundbreaking applications across industries when it is commercialized in the coming years.
This new approach will enable data scientists to focus on what they do best – interpreting and extracting insights from data. Automating complex processes in the AI lifecycle will also make the benefits of AI more accessible, meaning it will be easier for organizations that lack large tech budgets and development staff to tap into the technology’s true transformative power.
More of an art than a science
Because the task of creating effective deep learning models has become too much of a challenge for humans to tackle alone, organizations clearly need a more efficient approach.
With data scientists regularly bogged down by deep learning’s algorithmic complexity, development teams have struggled to design solutions and have been forced to manually tweak and optimize models, an inefficient process that often comes at the expense of a product’s performance or quality. Manually designing such models also dramatically prolongs a product’s time to market.
Does that mean that the only solution is fully autonomous deep learning models that build themselves? Not necessarily.
Consider automotive technology. The popular dichotomy between fully autonomous and fully manual driving is far too simplistic. Indeed, this black-and-white framing obscures a great deal of the progress that automakers have made in introducing greater levels of autonomous technology. That’s why automotive industry insiders speak of different levels of autonomy – ranging from Level 1 (which includes driver assistance technology) to Level 5 (fully self-driving cars, which remain a far-off prospect). It is plausible that our cars can become much more advanced without needing to achieve full autonomy in the process.
The AI world can (and should) develop a similar mindset. AI practitioners require technologies that automate the cumbersome processes involved in designing a deep learning model. Just as advanced driver assistance systems (ADAS) such as automatic braking and adaptive cruise control are paving the way toward greater autonomy in the automotive industry, the AI industry needs its own technology to do the same. And it’s AI that holds the key to getting us there.
AI building better AI
Encouragingly, AI is already being leveraged to simplify other tech-related tasks, such as writing and reviewing code, using tools that are themselves built with AI. The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect such capabilities to gradually become commercially available to the public.
So far, research on how to develop these superior AI capabilities has remained confined to advanced academic institutes and, unsurprisingly, the largest names in tech. Google’s pioneering work on neural architecture search (NAS) is a key example. Described by Google CEO Sundar Pichai as a way for “neural nets to design neural nets,” NAS, an approach that began attracting notice in 2017, involves an algorithm searching over thousands of candidate network architectures to arrive at one suited to the particular problem at hand.
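To make the idea concrete, below is a minimal, hypothetical sketch of NAS-style search in Python (using PyTorch): random search over a small, hand-defined space of fully connected architectures, scored on toy data. The search space, training budget, and scoring are illustrative assumptions for this sketch, not Google’s actual method.

```python
# Illustrative sketch of NAS-style random search, not a production NAS system:
# sample candidate architectures from a hand-defined search space, briefly
# train each one, and keep the best scorer.
import random
import torch
import torch.nn as nn

# Hand-defined search space. In practice this space must be re-tuned for each
# problem, which is one reason NAS does not yet generalize across use cases.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "hidden_units": [16, 32, 64, 128],
    "activation": [nn.ReLU, nn.Tanh],
}

def sample_architecture():
    """Randomly pick one configuration from the search space."""
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def build_model(arch, in_dim=20, out_dim=2):
    """Turn a sampled configuration into a concrete PyTorch model."""
    layers, width = [], in_dim
    for _ in range(arch["num_layers"]):
        layers += [nn.Linear(width, arch["hidden_units"]), arch["activation"]()]
        width = arch["hidden_units"]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

def evaluate(model, X, y, epochs=20):
    """Briefly train the candidate and return its accuracy on the same data
    (a stand-in for a proper train/validation split)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return (model(X).argmax(dim=1) == y).float().mean().item()

# Toy dataset and a tiny search budget, purely for illustration.
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
best_arch, best_score = None, 0.0
for _ in range(10):  # each trial: sample an architecture, train it, score it
    arch = sample_architecture()
    score = evaluate(build_model(arch), X, y)
    if score > best_score:
        best_arch, best_score = arch, score
print("Best architecture found:", best_arch, "accuracy:", round(best_score, 3))
```

Real NAS systems replace the random sampler with reinforcement learning, evolutionary search, or differentiable relaxations, and use techniques such as weight sharing to keep evaluation affordable; still, the hard-coded search space in the sketch hints at the scalability problem described next.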
For now, NAS is a young technology that hasn’t been widely introduced commercially. Since its inception, researchers have managed to shorten runtimes and reduce the compute resources needed to run NAS algorithms. But these algorithms still don’t generalize across different problems and datasets, let alone stand ready for commercial use, because the architecture search space must be manually tailored to each individual use case, an approach that is far from scalable.
Most research in the field has been carried out by tech giants like Google and Facebook, as well as academic institutes like Stanford, where researchers have hailed emerging autonomous methods as a “promising avenue” for driving AI progress.
But with innovative AI developers building on the work that’s already been done in this field, the exclusivity of technology like NAS is set to give way to greater accessibility as the concept becomes more scalable and affordable in the coming years. The result? AI that builds AI, thus unleashing its true potential to solve the world’s most complex problems.
As the world looks toward 2021, this is an area ripe for innovation – and that innovation will only beget further innovation.
Yonatan Geifman is CEO and co-founder at Deci.