Deep learning startup Deci today announced that it raised $9.1 million in a seed funding round led by Israel-based Emerge. According to a spokesperson, the company plans to devote the proceeds to customer acquisition efforts as it expands its Tel Aviv workforce.
Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
Deci was cofounded by Yonatan Geifman, entrepreneur Jonathan Elial, and Ran El-Yaniv, a computer science professor at Technion in Haifa, Israel. (Geifman and El-Yaniv met at Technion, where Geifman is a PhD candidate in the computer science department.) By leveraging data science techniques, the company claims to be able to accelerate deep learning runtime by up to 10 times on any hardware, redesigning models to amplify throughput and minimize latency.
Deci ostensibly achieves runtime acceleration through data preprocessing and loading, selection of model architectures and hyperparameters (i.e., the configuration variables that govern how a model learns), and model optimization for inference. It also takes care of steps like deployment, serving, monitoring, and explainability. According to Deci, the platform supports containerized deployments across Amazon Web Services, Microsoft Azure, Google Cloud Platform, and other cloud environments, and it continuously tracks deployed models, sending alerts and recommendations when customers can migrate to more cost-effective AI accelerators.
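Deci's tooling is proprietary and the company has not published its API, but the stages described above map onto a standard deep learning workflow. The sketch below walks through them with plain PyTorch and torchvision (both assumed to be installed); it is an illustration of the generic pipeline, not Deci's implementation.

```python
# Illustrative only: Deci's platform is proprietary, so this sketch covers the
# generic stages the article describes (data loading, model selection, and
# optimization for inference) using plain PyTorch and torchvision instead.
import torch
import torchvision

# 1. Data preprocessing and loading (standard ImageNet-style transforms; in a
#    real pipeline this would be applied inside a DataLoader).
preprocess = torchvision.transforms.Compose([
    torchvision.transforms.Resize(256),
    torchvision.transforms.CenterCrop(224),
    torchvision.transforms.ToTensor(),
])

# 2. Model/architecture selection (here: an off-the-shelf ResNet-50).
model = torchvision.models.resnet50().eval()

# 3. Optimization for inference: trace the model into TorchScript so it can be
#    served without the Python interpreter, then freeze it for deployment.
example_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    scripted = torch.jit.trace(model, example_input)
scripted = torch.jit.freeze(scripted)

# 4. Package for a containerized deployment target (the saved artifact can be
#    loaded by a serving container on AWS, Azure, GCP, etc.).
scripted.save("resnet50_optimized.pt")
```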
“Deci’s platform offers a substantial performance boost to existing deep learning models while preserving their accuracy,” the company writes on its website. “It designs deep models to more effectively use the hardware platform they run on, be it CPU, GPU, FPGA, or special-purpose ASIC accelerators. The … accelerator is a data-dependent algorithmic solution that works in synergy with other known compression techniques, such as pruning and quantization. In fact, the accelerator acts as a multiplier for complementary acceleration solutions, such as AI compilers and specialized hardware.”
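Pruning and quantization, which the quote names as complementary compression techniques, are available off the shelf; the minimal sketch below applies both to a toy model with PyTorch's built-in utilities. It illustrates those standard techniques only, not Deci's accelerator.

```python
# Standard compression techniques referenced in the quote above, applied with
# PyTorch's built-in utilities; this is not Deci's accelerator.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Pruning: zero out the 50% smallest-magnitude weights in each Linear layer,
# then make the sparsity permanent.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# Quantization: convert Linear weights to int8 for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    print(quantized(torch.randn(1, 512)).shape)  # torch.Size([1, 10])
```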
Deci goes on to explain that its accelerator redesigns a given model into a new one with several computation routes, all optimized for a given inference device. Each route is specialized for a particular prediction task, and Deci's router component ensures that each data input is directed through the proper route.
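Deci has not published implementation details for this router, so the following is only a schematic PyTorch interpretation of the idea as described: a model with two computation routes of different cost (hypothetical sizes) and a lightweight router that sends each input down one of them.

```python
# Schematic interpretation of "several computation routes plus a router" --
# not Deci's implementation. Route sizes and the router itself are hypothetical.
import torch
import torch.nn as nn

class RoutedClassifier(nn.Module):
    def __init__(self, in_dim: int = 128, num_classes: int = 10):
        super().__init__()
        # Two routes with different computational cost.
        self.routes = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, num_classes)),    # cheap
            nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, num_classes)),  # heavy
        ])
        # The router scores the routes for each input.
        self.router = nn.Linear(in_dim, len(self.routes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pick one route per sample; only the chosen route runs for that sample.
        choice = self.router(x).argmax(dim=-1)
        out = torch.empty(x.size(0), self.routes[0][-1].out_features)
        for i, route in enumerate(self.routes):
            mask = choice == i
            if mask.any():
                out[mask] = route(x[mask])
        return out

model = RoutedClassifier().eval()
with torch.no_grad():
    print(model(torch.randn(4, 128)).shape)  # torch.Size([4, 10])
```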
Deci has competition in OctoML, a startup that similarly purports to automate machine learning optimization with proprietary tools and processes. Other competitors include DeepCube, which describes its solution as a “software-based inference accelerator,” and Neural Magic, which redesigns AI algorithms to run more efficiently on off-the-shelf processors by leveraging the chips’ available memory. Yet another rival, DarwinAI, uses what it calls generative synthesis to ingest models and spit out highly optimized versions.
Deci says that when tested on MLPerf, a benchmark suite for measuring deep learning performance, its platform accelerated the inference speed of the popular ResNet neural network on Intel processors by 11.8 times while meeting the accuracy target. The company claims it already has customers at "numerous" autonomous vehicle, manufacturing, communication, video and image editing, and health care companies.
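MLPerf's harness is considerably more rigorous, but a speedup factor like the quoted 11.8 times is at bottom a ratio of latencies. The rough sketch below, assuming PyTorch and torchvision and using TorchScript freezing as a stand-in for a real optimization step, shows how such a ratio is measured.

```python
# Rough latency comparison; MLPerf's methodology is far more rigorous.
# TorchScript freezing here is only a stand-in for a real optimization step.
import time
import torch
import torchvision

def mean_latency_ms(model, x, runs=50):
    """Average per-call latency in milliseconds over `runs` forward passes."""
    with torch.no_grad():
        for _ in range(5):          # warm-up
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = time.perf_counter() - start
    return elapsed * 1000 / runs

x = torch.randn(1, 3, 224, 224)
baseline = torchvision.models.resnet50().eval()
optimized = torch.jit.freeze(torch.jit.trace(baseline, x))

speedup = mean_latency_ms(baseline, x) / mean_latency_ms(optimized, x)
print(f"speedup: {speedup:.2f}x")
```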
Square Peg participated in Deci’s seed round.