Big data is a sham. For years now, we have been told that every company should save every last morsel of digital exhaust in some sort of database, lest management lose some competitive intelligence against … a competitor, or something.
There is just one problem with big data though: it’s honking huge.
Processing petabytes of data to generate business insights is expensive and time-consuming. Worse, all that data hanging around paints a big, bright red target on the back of the company for every hacker group in the world. Big data is expensive to maintain, expensive to protect, and expensive to keep private. And the payoff may not amount to much in the end: well-curated, carefully chosen datasets can often provide faster and better insight than endless quantities of raw data.
What should a company do? Well, it needs a Tonic to ameliorate its big data sins.
Tonic is a “synthetic data” platform that transforms raw data into more manageable and private datasets usable by software engineers and business analysts. Along the way, Tonic’s algorithms de-identify the original data and create statistically identical but synthetic datasets, which means that personal information isn’t shared insecurely.
For instance, an online shopping platform will have transaction history on its customers and what they purchased. Sharing that data with every engineer and analyst in the company is dangerous, since that purchase history could contain personally identifying details that no one without a need to know should have access to. Tonic could take that original payments data and transform it into a new, smaller dataset with the same statistical properties, but one not tied to the original customers. That way, an engineer could test their app or an analyst could test their marketing campaign, all without triggering concerns about privacy.
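Tonic hasn’t published the details of its generation algorithms, but as a rough, hypothetical sketch of what “statistically similar but synthetic” can mean in practice, here is a toy example that resamples each column’s distribution and swaps real identifiers for surrogate keys. The table, column names, and distributions are all invented for illustration:

```python
# Toy sketch of column-wise synthetic data generation (illustrative only,
# not Tonic's actual algorithm). It preserves each column's marginal
# distribution; a real system would also need to preserve cross-column
# and cross-table relationships.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Pretend this is a slice of real transaction history (fabricated here).
real = pd.DataFrame({
    "customer_id": rng.integers(1_000, 9_999, size=10_000),
    "order_total": rng.lognormal(mean=3.5, sigma=0.8, size=10_000),
    "category": rng.choice(["apparel", "electronics", "grocery"],
                           size=10_000, p=[0.5, 0.2, 0.3]),
})

def synthesize(df: pd.DataFrame, n_rows: int) -> pd.DataFrame:
    """Build a smaller synthetic table whose columns mimic the originals,
    with no link back to real customers."""
    out = {}
    # Replace identifiers with freshly generated surrogate keys.
    out["customer_id"] = rng.integers(10_000_000, 99_999_999, size=n_rows)
    # Resample numeric columns from the observed values (empirical bootstrap).
    out["order_total"] = rng.choice(df["order_total"].to_numpy(), size=n_rows)
    # Resample categorical columns according to observed frequencies.
    freqs = df["category"].value_counts(normalize=True)
    out["category"] = rng.choice(freqs.index.to_numpy(), size=n_rows,
                                 p=freqs.to_numpy())
    return pd.DataFrame(out)

synthetic = synthesize(real, n_rows=2_000)  # smaller, shareable test dataset
print(synthetic.describe(include="all"))
```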
Synthetic data and other ways to handle the privacy of large datasets have garnered massive attention from investors in recent months. We reported last week on Skyflow, which raised a round to use polymorphic encryption to ensure that employees only have access to the data they need and are blocked from accessing the rest. BigID takes a more overarching view, tracking what data is where and who should have access to it (i.e. data governance) based on local privacy laws.
Tonic’s approach has the benefit of helping solve not just privacy issues, but also scalability challenges as datasets get larger and larger. That combination has attracted the attention of investors: this morning, the company announced that it has raised $8 million in a Series A led by Glenn Solomon and Oren Yunger of GGV, the latter of whom will join the company’s board.
The company was founded in 2018 by a quartet of founders: CEO Ian Coe worked with COO Karl Hanson (the two first met in middle school) and CTO Andrew Colombi while they were all at Palantir, and Coe also previously worked with the company’s head of engineering, Adam Kamor, at Tableau. That training at some of the Valley’s largest and most successful data infrastructure companies forms part of the product DNA for Tonic.
Coe explained that Tonic is designed to prevent some of the most obvious security flaws that arise in modern software engineering. In addition to saving data pipelining time for engineering teams, Tonic “also means that they’re not worried about sensitive data going from production environments to lower environments that are always less secure than your production systems.”
He said that the idea for what would become Tonic originated while troubleshooting problems at a Palantir banking client. They needed data to solve a problem, but that data was highly sensitive, so the team ended up using synthetic data to bridge the gap. Coe wants to expand the utility of synthetic data to more people in a more rigorous way, particularly given today’s shifting legal landscape. “I think regulatory pressure is really pushing teams to change their practices” around data, he noted.
The key to Tonic’s technology is its subsetter, which evaluates raw data and statistically defines the relationships between all the records. Some of that analysis is automated, depending on the data sources, and when it can’t be automated, Tonic’s UI helps a data scientist onboard datasets and define those relationships manually. In the end, Tonic generates synthetic datasets usable by all the consumers of that data inside a company.
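The company hasn’t disclosed how the subsetter works internally, but the core idea of subsetting relational data while keeping foreign keys intact can be sketched in a few lines. The table and column names below are invented for illustration:

```python
# Illustrative sketch of subsetting two related tables while preserving
# referential integrity (not Tonic's implementation).
import pandas as pd

def subset(users: pd.DataFrame, orders: pd.DataFrame,
           frac: float = 0.1, seed: int = 0):
    """Sample a fraction of the parent table, then keep only the child
    rows that reference the sampled parents so foreign keys still resolve."""
    users_subset = users.sample(frac=frac, random_state=seed)
    orders_subset = orders[orders["user_id"].isin(users_subset["id"])]
    return users_subset, orders_subset

# Usage (assumed schema): `users` has an "id" column, `orders` has a
# "user_id" foreign key pointing back at it.
# users_small, orders_small = subset(users, orders, frac=0.05)
```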
With the new round of funding, Coe wants to continue doubling down on ease of use and onboarding while proselytizing the benefits of this model to clients. “In a lot of ways, we’re creating a category, and that means that people have to understand and also get the value [and have] the early-adopter mindset,” he said.
In addition to lead investor GGV, Bloomberg Beta, Xfund, Heavybit and Silicon Valley CISO Investments participated in the round, as did angels Assaf Wand and Anthony Goldbloom.