If you follow AWS news, it’s impossible to escape re:Invent. As usual, there were a LOT of announcements. If you add the flood of new features released in the days preceding re:Invent, the overall number is actually overwhelming. I’m really trying to get my head around all the new stuff, and I’ll be trying it out soon!
But, how can you benefit from these features?
I’ve been thinking about how to use them to optimize applications that run on AWS. From my perspective, an optimal application is one that has the right balance of Performance, Price and Availability for your business.
So, here are some thoughts…
Amazon Athena
Analyze data stored in S3, using SQL syntax.
What can you do with it: get faster issue resolution during operational emergencies
This is a real time saver, especially when analyzing logs in emergency situations. Imagine you have your application logs exported to CloudWatch Logs and S3. You have an issue and need to troubleshoot it. This feature should make it much easier to find root causes. You can keep saved queries in your runbooks repository and execute them when needed, for example. Even better, imagine an alarm includes a link to a relevant saved query. It’s 3:03 AM, you get paged, and all you have to do is click; you’ll see relevant, live data in seconds.
You can do the same for CloudTrail logs, VPC Flow Logs and billing reports.
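To make this concrete, here’s a rough sketch (boto3, Python) of what a “saved query” run from a runbook might look like: it kicks off an Athena query over application logs in S3 and prints the results. The database, table, column names and results bucket are all hypothetical placeholders you’d replace with your own.

```python
import time
import boto3

athena = boto3.client("athena")

# Hypothetical query: count 5xx responses from the last hour in an "app_logs"
# table that points at your log files in S3.
QUERY = """
SELECT status, count(*) AS hits
FROM app_logs
WHERE status >= 500
  AND request_time > now() - interval '1' hour
GROUP BY status
ORDER BY hits DESC
"""

def run_query():
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "ops_logs"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/emergency/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until Athena finishes, then fetch the result rows.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    return athena.get_query_results(QueryExecutionId=query_id)

if __name__ == "__main__":
    for row in run_query()["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```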
New EC2 instance types and families
New families: R4 (memory optimized), I3 (I/O intensive), C5 (compute intensive), F1 (FPGA acceleration). New instance types: t2.xlarge, t2.2xlarge
What can you do with it: further fine-tune your EC2 workloads
If you’ve run into limitations with the existing EC2 instance types, this is great news. Before you decide to use any of these new instance types, I recommend you follow these steps (there’s a small cost-comparison sketch after the list):
- Analyze your application’s resource consumption patterns (is it CPU-intensive, memory-intensive or I/O-intensive?).
- Load test your application on different instance types and calculate the cost of each test run.
- Analyze test results based on performance and cost.
- Choose the right instance type based on the right balance of performance and price for your application.
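Here’s the kind of back-of-the-envelope comparison I have in mind for the last two steps. The throughput numbers and prices below are made-up placeholders; plug in your own load-test results and current EC2 pricing.

```python
# Hypothetical load-test results: sustained requests/second and on-demand hourly price.
results = {
    "m4.xlarge": {"rps": 850, "usd_per_hour": 0.20},
    "c5.xlarge": {"rps": 1200, "usd_per_hour": 0.17},
    "r4.xlarge": {"rps": 900, "usd_per_hour": 0.27},
}

# Rank instance types by cost per million requests served.
for instance_type, r in sorted(results.items(),
                               key=lambda kv: kv[1]["usd_per_hour"] / kv[1]["rps"]):
    cost_per_million = r["usd_per_hour"] / (r["rps"] * 3600) * 1000000
    print(f"{instance_type}: ${cost_per_million:.4f} per million requests")
```

Cost alone isn’t the whole story, of course; you’d weigh this ranking against your latency targets and headroom before picking a winner.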
Elastic GPUs
Attach GPUs to EC2 instances, much like you attach EBS volumes
What can you do with it: save money when you run applications that need GPU processing
Before this feature, the only way to get GPU access was to launch P2 or G2 instances, which start at $0.65/hour ($468/month) and can cost as much as $14.40/hour ($10,368/month). Even though pricing for Elastic GPUs hasn’t been published yet, it’s reasonable to assume it will be much lower than launching a full GPU instance.
AWS Greengrass
IoT devices that work even when there’s no internet connectivity
What can you do with it: run fault-tolerant and better performing IoT applications that consume less bandwidth.
This will increase the availability of IoT applications by eliminating internet connectivity interruptions from the equation. It will also allow for pre-processing and local network communications between devices, which can be used to increase performance. Transferring less data over the internet will likely reduce cost.
AWS X-Ray
Tracing for distributed cloud systems
What can you do with it: easier application and infrastructure fine-tuning and troubleshooting
This is probably my favourite announcement of all. I’ve yet to see it in action, but having enhanced visibility into performance and bottlenecks across multiple AWS services is going to be a game changer for cloud optimization. Since it also supports Lambda functions, it brings serverless development and operations to a new level of maturity.
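If you’re curious what instrumentation looks like, X-Ray comes with a Python SDK (aws-xray-sdk) that can patch boto3 so every AWS call shows up as a traced subsegment. The sketch below is how I understand it works outside of Lambda (where you open segments yourself and run the X-Ray daemon); the segment and bucket names are hypothetical.

```python
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch boto3 (and other supported libraries) so every AWS call is traced.
patch_all()

s3 = boto3.client("s3")

# Outside Lambda you open segments yourself and run the X-Ray daemon locally;
# inside Lambda the segment is created for you and you only add subsegments.
xray_recorder.begin_segment("nightly-report")
with xray_recorder.in_subsegment("list-input-files"):
    objects = s3.list_objects_v2(Bucket="my-report-input")  # hypothetical bucket
xray_recorder.end_segment()
```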
AWS Personal Health Dashboard
What can you do with it: see if an AWS operational issue is affecting YOU, have a faster response to operational issues.
Up until now, you only had the AWS Service Health Dashboard to see whether there were operational issues happening in AWS, and that dashboard lists issues that might or might not affect you. With this new feature, you can create CloudWatch Events rules that notify you, or take action automatically, in response to operational issues that are affecting your own resources.
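For example, here’s a small boto3 sketch of a CloudWatch Events rule that matches AWS Health events and forwards them to an SNS topic; the rule name and topic ARN are hypothetical.

```python
import json
import boto3

events = boto3.client("events")

# Match AWS Health events for this account and send them to an ops SNS topic.
events.put_rule(
    Name="notify-on-aws-health-events",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)

events.put_targets(
    Rule="notify-on-aws-health-events",
    Targets=[{
        "Id": "ops-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts",  # hypothetical topic
    }],
)
```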
Lambda@Edge
Execute Lambda functions at CloudFront edge locations.
What can you do with it: increase performance when processing content served through CloudFront
This will allow you to execute content transformations at the edge, closer to your users, instead of in the AWS backend. Your “static” content served through CloudFront won’t be so static anymore! This will improve the performance and simplicity of many applications: operations that previously could only happen inside an AWS region can now run at the CloudFront edge, and some use cases that couldn’t be implemented before are now possible.
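To give a feel for it, here’s a rough sketch of an edge function that adds a security header to every response served through CloudFront. I’m writing it in Python to stay consistent with the other examples; check the docs for the runtimes and event format actually supported, since the structure below is my assumption of the CloudFront event shape.

```python
# Hedged sketch: add an HSTS header to responses at the CloudFront edge.
# The event structure is assumed, not taken from official Lambda@Edge docs.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["strict-transport-security"] = [
        {"key": "Strict-Transport-Security", "value": "max-age=31536000"}
    ]
    return response
```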
Dead Letter Queue for Lambda functions
What can you do with it: improve error handling and remediation of Lambda operational issues
Now you can send failed requests to a Dead Letter Queue, where they can be retried or analyzed. This will increase availability of your Lambda-based applications significantly. This is another feature that brings Lambda to a high level of operational maturity.
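Configuring this is a one-liner per function. Here’s a minimal boto3 sketch that points a function’s dead letter queue at an SQS queue (an SNS topic ARN also works); the function name and queue ARN are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Send events that exhaust their async retries to an SQS dead letter queue.
lambda_client.update_function_configuration(
    FunctionName="order-processor",  # hypothetical function
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:order-processor-dlq"
    },
)
```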
PostgreSQL compatibility for Amazon Aurora
What can you do with it: improve performance of PostgreSQL applications
Aurora is a cloud-native database that is making a lot of waves due to its low price and high performance. Up until now, Aurora only offered MySQL compatibility, so if your application’s datastore is powered by PostgreSQL, you can now consider Aurora too.
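Because it speaks the PostgreSQL wire protocol, existing client code should only need a new endpoint. A minimal sketch with psycopg2, assuming a hypothetical cluster endpoint and credentials:

```python
import psycopg2

# Point your existing PostgreSQL client code at the Aurora cluster endpoint.
conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    port=5432,
    dbname="appdb",
    user="app_user",
    password="example-password",  # in practice, load this from a secrets store
)

with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
conn.close()
```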
AWS CodeBuild
Cloud-based build servers
What can you do with it: automate and make your software delivery cycle faster and less error-prone, including for Lambda functions
If you want to run reliable applications, you have to implement automated software lifecycle pipelines. With CodeBuild, it will be easier and cheaper to compile, package and test your code. This is a proven way to make your applications less error-prone. I also like that CodeBuild bills by the minute, making it more cost-effective compared to running build servers on EC2.
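Builds are defined per project (source, environment, build commands) and then triggered on demand. Here’s a small boto3 sketch of kicking off a build from a deployment script and waiting for the result; the project name is hypothetical.

```python
import time
import boto3

codebuild = boto3.client("codebuild")

# Start a build of a pre-configured CodeBuild project and wait for it to finish.
build_id = codebuild.start_build(projectName="my-service-build")["build"]["id"]

while True:
    build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
    if build["buildStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

print("Build finished with status:", build["buildStatus"])
```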
AWS Shield
DDoS protection for your cloud applications
What can you do with it: get more peace of mind
As much as I work to find the best Performance, Price and Availability in cloud deployments, the truth is that without security, none of that stuff matters. That’s why it’s really great news to hear that AWS has your applications covered against DDoS attacks. Even better, the standard tier of protection is enabled by default, so you don’t have to do anything.
AWS Batch
Run batch jobs in the cloud
What can you do with it: run batch jobs that don’t get stuck
Even if your application starts simple, sooner or later you will have to run asynchronous, long-running tasks of some sort. The thing is that running batch jobs is a pain. I worked in the telecommunications industry for over 10 years and I’m very familiar with long running tasks that process millions of bills, usage records, etc. You have to worry about retries, state management, resource allocation, inputs, outputs and a whole bunch of scenarios that can break very easily. It’s great to hear that AWS released a managed service to run batch jobs - this will certainly improve reliability of long-running asynchronous tasks.
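Once the compute environment, job queue and job definition are in place, submitting work is a single API call. Here’s a hedged boto3 sketch using a hypothetical billing-style job; the queue and job definition names are placeholders.

```python
import boto3

batch = boto3.client("batch")

# Submit a long-running rating job; AWS Batch handles queueing, placement and
# retries according to the job definition.
job = batch.submit_job(
    jobName="rate-usage-records-2016-12",
    jobQueue="billing-jobs",            # hypothetical queue
    jobDefinition="usage-rating:1",     # hypothetical job definition
    containerOverrides={
        "environment": [{"name": "BILLING_CYCLE", "value": "2016-12"}]
    },
)

print("Submitted job:", job["jobId"])
```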
AWS Step Functions
Build Lambda-powered workflows using a GUI
What can you do with it: build fault-tolerant, scalable Lambda-based workflows
As with batch jobs, running multi-step tasks is a pain and can be error-prone. It’s great to have a managed orchestration engine that runs Lambda functions, so you don’t have to worry about state management and error handling yourself. This will greatly improve the reliability of Lambda-based applications.
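Workflows are described in the Amazon States Language (JSON), where each state can declare its own retry and error-handling behaviour. A minimal two-step sketch created via boto3; the function ARNs, role ARN and retry settings are hypothetical.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Two Lambda tasks in sequence; the first one retries automatically on failure.
definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "Next": "Fulfill",
        },
        "Fulfill": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill-order",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```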
The clear winner this year is AWS Lambda
AWS Lambda had great announcements this year. Lambda was already ready for prime time before this re:Invent, but these new features bring it to a whole new level of maturity. The number and impact of these new features will enhance development, testing and operations of Lambda-based applications:
- AWS X-Ray. This service will add tracing to Lambda applications. Great for tuning and troubleshooting.
- Step Functions. Build complex use cases using Lambda.
- Lambda @Edge. Run content transformations and logic at the CloudFront edge.
- CodeBuild. Automate packaging and testing of Lambda functions.
- Firehose in-line Lambda. Execute transformations before storing data in S3.
- Greengrass. Lambda functions running locally on IoT devices.
- Lex. Built-in Lambda integration with Amazon Alexa’s “brain”.
- Support for C#. More options for developers.
- Environment Variables. Separate code and configuration in Lambda functions (see the small sketch after this list). Announced a few days before re:Invent.
- Serverless Application Model. Model whole serverless environments (event sources, functions, permissions, datastores) using CloudFormation templates. Announced a few days before re:Invent.
- Lambda-based stored procedures for Aurora. This was announced a few weeks before re:Invent, but it’s a great addition too.
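As a quick illustration of the Environment Variables item above, here’s a tiny Lambda handler sketch that reads its DynamoDB table name from configuration instead of hard-coding it; the variable and table names are hypothetical.

```python
import os
import boto3

# The table name comes from a Lambda environment variable, so the same code can
# be deployed to dev/staging/prod with different configuration.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["ORDERS_TABLE"])  # hypothetical variable name

def handler(event, context):
    table.put_item(Item={"order_id": event["order_id"], "status": "received"})
    return {"status": "ok"}
```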
Other announcements: Lightsail (a really simple way to launch Virtual Private Servers), Rekognition (image recognition as a service, using AI), Lex (Alexa as a service), Polly (text-to-speech as a service), Snowball Edge, Snowmobile (a big truck that can transport insane amounts of data to AWS data centers), Amazon Pinpoint (targeted push notifications for mobile apps), OpsWorks for Chef, Glue (data catalog and ETL service), Blox (open source container management framework), and C# support for AWS Lambda.
Did I mention there were a LOT of announcements?