Farewell EC2-Classic, it’s been swell


EC2-Classic in a museum gallery

Discontinuing services is not something we do at AWS. It’s quite rare. Companies rely on our offerings – their businesses literally live on these services – and we take that seriously. For example, SimpleDB still exists, even though DynamoDB has become the “NoSQL” database of choice for our customers.

So, two years ago, when Jeff Barr announced that we would be shutting down EC2-Classic, I’m sure there were at least a few of you who didn’t believe we would actually flip the switch – that we’d just let it run forever. Well, that day has come. On August 15, 2023, we shut down the last instance of Classic. And with all the history here, I think it’s worth celebrating the original version of one of the services that started what we now know as cloud computing.

EC2 has been around for a while, almost 17 years. Only SQS and S3 are older. So I wouldn’t blame you if you asked what makes an EC2 instance “classic”. Simply put, it is the network architecture. When we launched EC2 in 2006, it was one massive 10.0.0.0/8 network. All instances ran on a single, flat network that was shared with other customers. It exposed a handful of features, such as security groups and public IP addresses that were assigned when an instance booted up. Classic made the process of procuring computing power incredibly easy, even if the stack behind the scenes was incredibly complex. “Invent and Simplify” is one of the Amazon Leadership Principles, after all…
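
You can still see that architectural difference directly in the EC2 API: an instance launched into the old flat network has no VpcId, and account-level Classic support shows up in the “supported-platforms” account attribute. Here is a minimal sketch of how you could check your own account – boto3, the region, and the exact output formatting are my assumptions for illustration, not anything prescribed here:

```python
# Minimal sketch (illustrative assumptions: boto3, us-east-1, no pagination).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Does this account still support the Classic platform at all?
attrs = ec2.describe_account_attributes(AttributeNames=["supported-platforms"])
platforms = [v["AttributeValue"] for v in attrs["AccountAttributes"][0]["AttributeValues"]]
print("Supported platforms:", platforms)  # e.g. ['VPC'] or ['EC2', 'VPC']

# Instances launched into the flat Classic network carry no VpcId.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if "VpcId" not in instance:
            print("Classic instance:", instance["InstanceId"])
```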

If you had launched an instance in 2006, an m1.small, you would have gotten a virtual CPU equivalent to a 1.7 GHz Xeon processor with 1.75 GB of RAM, 160 GB of local disk, and 250 Mbps of network bandwidth. And it would have cost only $0.10 per clock hour. It’s pretty incredible how far cloud computing has evolved since then: a P3dn.24xlarge offers 100 Gbps of network throughput, 96 vCPUs, 8 NVIDIA V100 Tensor Core GPUs with 32 GiB of memory each, 768 GiB of total system memory, and 1.8 TB of local SSD storage, not to mention an EFA to accelerate ML workloads.
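
The modern numbers are the kind of thing you can look up for yourself from the DescribeInstanceTypes API. A small sketch, again assuming boto3 purely for illustration:

```python
# Minimal sketch (assumption: boto3; field names follow the public
# DescribeInstanceTypes response shape).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
info = ec2.describe_instance_types(InstanceTypes=["p3dn.24xlarge"])["InstanceTypes"][0]

print("vCPUs:       ", info["VCpuInfo"]["DefaultVCpus"])           # 96
print("Memory (GiB):", info["MemoryInfo"]["SizeInMiB"] / 1024)     # 768
print("Network:     ", info["NetworkInfo"]["NetworkPerformance"])  # '100 Gigabit'
for gpu in info["GpuInfo"]["Gpus"]:
    print(f"GPUs: {gpu['Count']}x {gpu['Manufacturer']} {gpu['Name']}, "
          f"{gpu['MemoryInfo']['SizeInMiB'] // 1024} GiB each")
```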

But 2006 was a different time, and this flat network and small collection of instances, like m1.small, were “classic”. And back then it was really revolutionary. Hardware had become a programmable resource that you could scale up or down at any time. Every developer, every entrepreneur, every startup, and every company now had access to as much computing power as they wanted, whenever they wanted it. The complexities of managing infrastructure, purchasing new hardware, updating software, and replacing failed hard drives had been abstracted. And it changed the way we all designed and built applications.

Of course, the first thing I did when we launched EC2 was move this blog to an m1.small. It ran Movable Type, and that instance was good enough to run both the web server and a local database (there was no RDS yet). I eventually turned it into a highly available setup with RDS failover and so on, and it ran there for over five years, until the Amazon S3 website feature launched in 2011. The blog has been “serverless” for 12 years now.
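
For anyone who hasn’t used the S3 website feature, the basic idea fits in a few API calls. A minimal sketch – the bucket name and documents are hypothetical placeholders, and boto3 is assumed for illustration:

```python
# Minimal sketch (hypothetical bucket name; a public-read bucket policy
# would also be needed for anonymous access, omitted here).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-static-blog-example"  # placeholder

s3.create_bucket(Bucket=bucket)
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# The site is then served from the bucket's website endpoint, e.g.
# http://my-static-blog-example.s3-website-us-east-1.amazonaws.com
```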

As with all of our services, we’ve been paying attention to what our customers need next. This led us to add features like Elastic IP addresses, Auto Scaling, Load Balancing, CloudWatch, and various new instance types that are better suited to different workloads. By 2013, we had enabled VPC, which allowed each AWS customer to manage their own part of the cloud: secure, isolated, and defined for their business. And it became the new standard. It simply gave customers a new level of control, allowing them to build even more comprehensive systems in the cloud.
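
That “own part of the cloud” is visible in the API too: a VPC with its own private address range, subnets carved out of it, and security groups scoped to it. A minimal sketch, with boto3 and the CIDR ranges as illustrative assumptions:

```python
# Minimal sketch (assumptions: boto3, us-east-1, illustrative CIDR ranges).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]            # isolated address space
subnet = ec2.create_subnet(VpcId=vpc["VpcId"],
                           CidrBlock="10.0.1.0/24")["Subnet"]   # a slice of it
sg = ec2.create_security_group(GroupName="web",                 # scoped to this VPC
                               Description="example web security group",
                               VpcId=vpc["VpcId"])

print("VPC:", vpc["VpcId"], "Subnet:", subnet["SubnetId"], "SG:", sg["GroupId"])
```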

We continued to support Classic over the next decade, even as EC2 evolved and we implemented an entirely new virtualization platform, Nitro – because our customers were still using Classic.

Ten years ago, during my 2013 keynote at re:Invent, I told you that we wanted to “support today’s workloads as well as tomorrow’s,” and our commitment to Classic is the best proof of that. It’s not lost on me how much work goes into such an effort – but it’s exactly the kind of work that builds trust, and I’m proud of the way it was handled. To me, this epitomizes what it means to be customer obsessed. The EC2 team kept Classic running (and running well) until each instance was shut down or migrated, providing documentation, tools, and support from technical and account management teams throughout the process.

It’s bittersweet to say goodbye to one of our original offerings. But we’ve come a long way since 2006, and we’re not done innovating for our customers. It’s a reminder that building evolvable systems is a strategy, and that revisiting your architectures with an open mind is a must. So, farewell Classic, it’s been swell. Long live EC2.

certificate of achievement

Now, go build!
