Amazon Elastic Inference Now Available In Amazon ECS Tasks

Posted on: Sep 17, 2019

Amazon ECS now supports attaching Amazon Elastic Inference accelerators to your containers, making it more cost-effective to run deep learning inference workloads. Amazon Elastic Inference lets you attach just the right amount of GPU-powered acceleration to any Amazon EC2 instance, Amazon SageMaker instance, or ECS task, reducing the cost of running deep learning inference by up to 75%.

With Amazon Elastic Inference support in ECS, you can now choose the task CPU and memory configuration best suited to your application, and separately configure the amount of inference acceleration you need, with no code changes. This lets you use resources efficiently and reduces the cost of running inference. The feature is supported for Linux containers in tasks that use the EC2 launch type. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon.
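As a minimal sketch of what this looks like in practice (not taken from this announcement), the accelerator is declared in the task definition and referenced from a container's resource requirements. The example below uses the boto3 SDK; the family name, container name, image URI, and region are placeholder assumptions, while `inferenceAccelerators` and the `InferenceAccelerator` resource requirement type are the relevant ECS task definition fields:

```python
# Minimal sketch, assuming boto3 is installed and configured with ECS permissions.
# Family name, container name, and image URI below are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="ei-inference-demo",            # hypothetical task family name
    requiresCompatibilities=["EC2"],       # Elastic Inference requires the EC2 launch type
    cpu="1024",                            # task CPU, chosen independently of acceleration
    memory="2048",                         # task memory, chosen independently of acceleration
    # Declare the Elastic Inference accelerator attached to this task.
    inferenceAccelerators=[
        {"deviceName": "device_1", "deviceType": "eia1.medium"}
    ],
    containerDefinitions=[
        {
            "name": "tf-serving",          # hypothetical container name
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tf-inference:latest",
            "essential": True,
            # Bind this container to the accelerator declared above.
            "resourceRequirements": [
                {"type": "InferenceAccelerator", "value": "device_1"}
            ],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```

Because the accelerator is configured at the task level rather than in application code, the containerized framework (for example, an Elastic Inference-enabled TensorFlow Serving build) can use it without changes to your inference code.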

Amazon Elastic Inference support in ECS is available in all AWS Regions where both ECS and Elastic Inference are available. To get started, view our documentation.