Martin Beeby, a principal developer advocate at Amazon Web Services, used a keynote presentation at the AWS Summit in New York City to announce a new way to run workloads globally, AWS Cloud WAN, the “WAN” standing for wide area network.
“It’s one console to manage everything,” said Beeby. Cloud WAN lets a company run applications across a combination of AWS Availability Zones, AWS “Local Zones,” and AWS “Outposts,” the last of which are deployments inside companies’ own data centers, noted Beeby, along with numerous other AWS resources, such as edge computing.
The one-day New York Summit, held at the Jacob K. Javits Convention Center in Midtown Manhattan, was the second U.S.-based Summit to be held in person, following an event in San Francisco. Beeby was filling in for Amazon CTO Werner Vogels, who had been scheduled to give the keynote but was unable to present because he was under the weather. “Think of me as the availability zone for Werner Vogels,” quipped Beeby.
Cloud WAN is presented as a way to easily set up and manage multiple parts of an AWS-based network around the world, including AWS “virtual private clouds,” or VPCs, along with customers’ own on-premises private clouds. More details are available in a blog post by AWS principal developer advocate Sébastien Stormacq.
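As an illustration of what that setup looks like in practice, here is a minimal, hypothetical sketch of bootstrapping a Cloud WAN core network with the AWS SDK for Python (boto3). The resource names and the simplified policy document are placeholders rather than anything AWS has published, and the exact policy schema may differ from what is shown.

```python
# Hypothetical sketch of creating a Cloud WAN global network and core network
# with boto3. Assumes configured AWS credentials; names and the policy are
# illustrative placeholders, not a verbatim copy of the documented schema.
import json
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

# 1. A global network is the container for Cloud WAN resources.
global_net = nm.create_global_network(Description="Example global network")
global_net_id = global_net["GlobalNetwork"]["GlobalNetworkId"]

# 2. The core network carries traffic between Regions and on-premises sites.
#    Its behavior is driven by a JSON policy document (segments, edge
#    locations, attachment rules); this one is deliberately simplified.
policy = {
    "version": "2021.12",
    "core-network-configuration": {
        "asn-ranges": ["64512-64520"],
        "edge-locations": [{"location": "us-east-1"}, {"location": "eu-west-1"}],
    },
    "segments": [{"name": "production"}],
}

core_net = nm.create_core_network(
    GlobalNetworkId=global_net_id,
    Description="Example core network",
    PolicyDocument=json.dumps(policy),
)
print(core_net["CoreNetwork"]["CoreNetworkId"])
```

VPCs and on-premises connections would then be attached to the core network’s segments, which is where the “one console” management Beeby described comes in.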
Customers using Local Zones include Finnish game maker Supercell, maker of Clash of Clans, noted Beeby. The company uses Local Zones in the U.S. to reduce the latency experienced by its U.S. gamers. Financial exchange operator Nasdaq, Inc. is using AWS Outposts to run AWS services in its own corporate data center.
AWS also announced that Delta Air Lines has chosen it as the airline’s “preferred cloud provider.”
Beeby spoke extensively about Amazon’s lineup of custom chips, which includes “Trainium,” for training machine-learning models; “Inferentia,” for accelerating inference with trained models; and “Graviton,” designed to accelerate general-purpose workloads.
The third generation of the chip, “Graviton3,” became generally available in late May, and some AWS customers have already moved workloads to it, reporting cost savings and performance improvements.
One example of Graviton3 adoption is Honeycomb.io, which builds observability software. Liz Fong-Jones, principal developer advocate at Honeycomb, said the company cut its costs by 60% compared with its prior AWS usage by running workloads on the new seventh-generation “C7g” instances, moving off the Graviton2-based instances it had previously used.
“As many people as possible should be adopting it [Graviton3],” Fong-Jones told ZDNet in an interview following the keynote, “because it’s much more energy-efficient, much more carbon-efficient.”
Graviton3 has led to significant performance advantages, said Fong-Jones. Honeycomb competes with numerous companies in observability and application performance monitoring, including fellow startup Lightstep.
“Honeycomb is uniquely differentiated by our speed and scale, and AWS enables us to achieve that speed and scale,” said Fong-Jones.
“As far as how we do it, it’s Graviton, it’s Lambda, it’s Spot compute — those have been foundational technologies for us,” said Fong-Jones, referring to AWS’s serverless technologies under the AWS Lambda umbrella, and to “Spot” instances, spare EC2 capacity that Amazon offers at discounts of up to 90% compared with on-demand EC2 pricing.
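For illustration, a Spot request can be made with the same EC2 launch call used for on-demand capacity. The sketch below, assuming boto3 and a placeholder AMI ID, shows the market options that ask AWS to fill the request from spare capacity at the Spot price.

```python
# A minimal sketch, assuming boto3, of requesting Spot capacity with the
# standard run_instances call; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c6g.large",          # a Graviton2-based instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # "one-time" requests are not restarted if capacity is reclaimed
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```

The trade-off is that Spot capacity can be reclaimed with short notice, which is why it suits interruption-tolerant workloads.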
Describing that speed and scale, Fong-Jones told ZDNet, “Ten seconds [query time] is the limit of what we consider acceptable; the median query time is under 500 milliseconds, whereas our competitors can take 30 seconds, a minute, two minutes.”
While competitors “may say we return results immediately, those competitors are pre-aggregating the data, which limits the dimensions you can query on,” said Fong-Jones. In contrast, “Honeycomb is using a columnar index format, and we can give you any combination of fields on the fly.” That nimble search is made possible by AWS Lambda, and it matters because observability really means asking “open-ended” questions, said Fong-Jones.
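Honeycomb has not published its query engine in this form, but a toy sketch can illustrate the distinction Fong-Jones is drawing: with data stored column by column, any combination of fields can be filtered after the fact, whereas a pre-computed rollup only answers the question it was aggregated for.

```python
# A toy illustration (not Honeycomb's implementation) of why a columnar layout
# supports ad-hoc queries on arbitrary field combinations.
from collections import Counter

# Events stored column-by-column: every field remains individually queryable.
columns = {
    "service":     ["api", "api", "worker", "api"],
    "region":      ["us-east-1", "eu-west-1", "us-east-1", "us-east-1"],
    "status":      [200, 500, 200, 500],
    "duration_ms": [12, 840, 55, 903],
}

def query(filters):
    """Return indices of events matching an arbitrary set of field filters."""
    n = len(next(iter(columns.values())))
    return [
        i for i in range(n)
        if all(columns[field][i] == value for field, value in filters.items())
    ]

# Any combination of fields can be filtered on the fly...
errors_on_api = query({"service": "api", "status": 500})
print(Counter(columns["region"][i] for i in errors_on_api))

# ...whereas a pre-aggregated rollup such as {"status=500": 2} cannot be
# broken back down by region or service after the fact.
```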
During the keynote, Beeby made a point of urging developers to use AWS Lambda, both to save money and because serverless is a more energy-efficient infrastructure.
“You should really start considering serverless; it can genuinely save you money and help us save the planet,” said Beeby.
Serverless technology is a stateless, microservices-style form of compute: it handles individual requests for resources, such as individual database queries, without the overhead of an entire server instance. For certain kinds of work, that can save on computing power and cost.
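Concretely, the unit of deployment in AWS Lambda is a handler function that is invoked once per event and keeps no state between invocations. The Python sketch below uses an illustrative API Gateway-style event shape; the names are placeholders.

```python
# A minimal AWS Lambda handler: invoked once per request, holding no state
# between invocations, so AWS can scale it down to zero when idle.
# The event shape shown here is illustrative, not tied to a specific trigger.
import json

def lambda_handler(event, context):
    # Pull a single item identifier from the request, e.g. from an API
    # Gateway proxy event's query string; the default is illustrative.
    params = event.get("queryStringParameters") or {}
    item_id = params.get("id", "unknown")

    # In a real function this is where one short-lived piece of work happens,
    # such as a single database query; here we just echo the input back.
    body = {"item": item_id, "status": "ok"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```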
Beeby discussed a serverless version of the Amazon Redshift data warehouse. It is one of three analytics services announced Tuesday in serverless form, the other two being EMR, AWS’s Apache Spark service, and a serverless offering of Amazon MSK, the managed Apache Kafka event-streaming service.
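With the serverless flavor of Redshift there is no cluster to size or manage; a query is addressed to a “workgroup” instead. A minimal sketch using the Redshift Data API via boto3, with placeholder workgroup, database, and SQL:

```python
# A minimal sketch of querying Amazon Redshift Serverless through the
# Redshift Data API with boto3; workgroup, database, and SQL are placeholders.
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

result = client.execute_statement(
    WorkgroupName="example-workgroup",   # a serverless workgroup, not a cluster ID
    Database="dev",
    Sql="SELECT event_type, COUNT(*) FROM events GROUP BY event_type;",
)

# The call is asynchronous: poll describe_statement, then fetch rows with
# get_statement_result once the statement has finished.
print(result["Id"])
```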
Serverless computing is not yet available on Graviton3; companies such as Honeycomb run their AWS serverless workloads on Graviton2 or other processors. Over time, AWS has generally brought new workload types such as serverless to its chip lineup, which suggests that Graviton3 will support serverless at some point.
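Running serverless code on Graviton2 today is a matter of choosing the arm64 architecture when creating a Lambda function. A minimal sketch, assuming boto3, an existing execution role, and a zipped handler (all placeholders):

```python
# A minimal sketch of deploying a Lambda function on arm64 (Graviton2-based
# capacity). Function name, role ARN, and zip path are placeholders.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

with open("function.zip", "rb") as f:
    code_bytes = f.read()

lam.create_function(
    FunctionName="example-arm64-function",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/example-lambda-role",
    Handler="app.lambda_handler",
    Code={"ZipFile": code_bytes},
    Architectures=["arm64"],   # run on Graviton2 instead of the x86_64 default
)
```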
Beeby’s talk was interrupted several times by protesters in the audience shouting denunciations of AWS, including “Stop hurting immigrants, stop contracting with ICE,” a reference to U.S. Immigration and Customs Enforcement, a division of the Department of Homeland Security. The protesters were escorted out of the hall by Javits Convention Center security.
After repeated interruptions, the audience began to boo subsequent protesters and applauded Beeby.