How do I deploy updated Docker images to Amazon ECS tasks?

What is the right approach to make my Amazon ECS tasks update their Docker images, once said images have been updated in the corresponding registry?


Every time you start a task (whether through the StartTask or RunTask API calls, or automatically as part of a service), the ECS agent performs a docker pull of the image you specify in your task definition. If you use the same image name (including tag) each time you push to your registry, you should be able to run the new image simply by running a new task. Note that if Docker cannot reach the registry for any reason (e.g., network or authentication issues), the ECS agent will fall back to a cached image. If you want to prevent cached images from being used when you update your image, push a different tag to your registry each time and update your task definition accordingly before running the new task.
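If you go the unique-tag route, a common convention is to tag each build with the git commit SHA (a sketch; <repo> is a placeholder for your registry/repository):

```shell
# Tag each build with the short git SHA so every push is a distinct image
TAG=$(git rev-parse --short HEAD)
docker build -t <repo>:$TAG .
docker push <repo>:$TAG
# ...then reference <repo>:$TAG in the new task definition revision
```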

Update: This behavior can now be tuned through the ECS_IMAGE_PULL_BEHAVIOR environment variable set on the ECS agent. See the documentation for details. As of the time of writing, the following settings are supported:

The behavior used to customize the pull image process for your container instances. The following describes the optional behaviors:

  • If default is specified, the image is pulled remotely. If the image pull fails, then the container uses the cached image on the instance.

  • If always is specified, the image is always pulled remotely. If the image pull fails, then the task fails. This option ensures that the latest version of the image is always pulled. Any cached images are ignored and are subject to the automated image cleanup process.

  • If once is specified, the image is pulled remotely only if it has not been pulled by a previous task on the same container instance or if the cached image was removed by the automated image cleanup process. Otherwise, the cached image on the instance is used. This ensures that no unnecessary image pulls are attempted.

  • If prefer-cached is specified, the image is pulled remotely if there is no cached image. Otherwise, the cached image on the instance is used. Automated image cleanup is disabled for the container to ensure that the cached image is not removed.
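On an EC2 container instance, for example, this can be set in the ECS agent's config file (a sketch; assumes an ECS-optimized AMI where the agent reads /etc/ecs/ecs.config and runs under systemd):

```shell
# Always pull fresh images, failing the task if the registry is unreachable
echo "ECS_IMAGE_PULL_BEHAVIOR=always" | sudo tee -a /etc/ecs/ecs.config
# Restart the agent so the setting takes effect
sudo systemctl restart ecs
```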

If your task is running under a service, you can force a new deployment. This forces the task definition to be re-evaluated and the new container image to be pulled.

aws ecs update-service --cluster <cluster name> --service <service name> --force-new-deployment

Registering a new task definition and updating the service to use the new task definition is the approach recommended by AWS. The easiest way to do this is to:

  1. Navigate to Task Definitions
  2. Select the correct task
  3. Choose create new revision
  4. If you're already pulling the latest version of the container image with something like the :latest tag, then just click Create. Otherwise, update the version number of the container image and then click Create.
  5. Expand Actions
  6. Choose Update Service (twice)
  7. Then wait for the service to be restarted
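The same flow can be scripted with the AWS CLI (a sketch; <family>, <cluster> and <service> are placeholders, and jq is used to strip the read-only fields that register-task-definition rejects):

```shell
# Fetch the active revision of the task definition
aws ecs describe-task-definition --task-definition <family> \
    --query 'taskDefinition' > taskdef.json
# Remove read-only fields before re-registering
jq 'del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
        .compatibilities, .registeredAt, .registeredBy)' \
    taskdef.json > new-taskdef.json
# (update the "image" fields here if you are not using a :latest-style tag)
aws ecs register-task-definition --cli-input-json file://new-taskdef.json
# Passing just the family makes the service pick up the newest ACTIVE revision
aws ecs update-service --cluster <cluster> --service <service> \
    --task-definition <family>
```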

This tutorial has more detail and describes how the above steps fit into an end-to-end product development process.

Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.

I created a script for deploying updated Docker images to a staging service on ECS, so that the corresponding task definition refers to the current versions of the Docker images. I don't know for sure if I'm following best practices, so feedback would be welcome.

For the script to work, you need either a spare ECS instance or a deploymentConfiguration.minimumHealthyPercent value so that ECS can steal an instance to deploy the updated task definition to.

My algorithm is like this:

  1. Tag Docker images corresponding to containers in the task definition with the Git revision.
  2. Push the Docker image tags to the corresponding registries.
  3. Deregister old task definitions in the task definition family.
  4. Register new task definition, now referring to Docker images tagged with current Git revisions.
  5. Update service to use new task definition.
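The heart of step 4 is rewriting each container's image reference to point at the new tag. Distilled to a pure function (the name retag_containers is mine, and it assumes image references end in a :tag, so registry hostnames with ports would need more care):

```python
import copy

def retag_containers(task_definition, tag):
    """Return a copy of an ECS task definition dict with every
    container's image re-pointed at the given tag (e.g. a git SHA)."""
    new_def = copy.deepcopy(task_definition)
    for container in new_def['containerDefinitions']:
        # Drop any existing :tag suffix, then append the new one
        repo = container['image'].rsplit(':', 1)[0]
        container['image'] = '{}:{}'.format(repo, tag)
    return new_def

task_def = {'family': 'web', 'containerDefinitions': [
    {'name': 'app', 'image': 'example/app:latest'}]}
print(retag_containers(task_def, 'a1b2c3d')['containerDefinitions'][0]['image'])
# → example/app:a1b2c3d
```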

My code is pasted below:

#!/usr/bin/env python3
import subprocess
import sys
import os.path
import json
import re
import argparse
import tempfile

_root_dir = os.path.abspath(os.path.normpath(os.path.dirname(__file__)))
sys.path.insert(0, _root_dir)
from _common import *

def _run_ecs_command(args):
    run_command(['aws', 'ecs'] + args)

def _get_ecs_output(args):
    return json.loads(run_command(['aws', 'ecs'] + args, return_stdout=True))

def _tag_image(tag, qualified_image_name, purge):
    log_info('Tagging image \'{}\' as \'{}\'...'.format(
        qualified_image_name, tag))
    log_info('Pulling image from registry in order to tag...')
    run_command(
        ['docker', 'pull', qualified_image_name], capture_stdout=False)
    run_command(['docker', 'tag', qualified_image_name,
                 '{}:{}'.format(qualified_image_name, tag)])
    log_info('Pushing image tag to registry...')
    run_command(['docker', 'push', '{}:{}'.format(
        qualified_image_name, tag)], capture_stdout=False)
    if purge:
        log_info('Deleting pulled image...')
        run_command(['docker', 'rmi', '{}:latest'.format(qualified_image_name)])
        run_command(['docker', 'rmi', '{}:{}'.format(qualified_image_name, tag)])

def _register_task_definition(task_definition_fpath, purge):
    with open(task_definition_fpath, 'rt') as f:
        task_definition = json.load(f)

    task_family = task_definition['family']

    # Tag the images with the current (short) git revision
    tag = run_command([
        'git', 'rev-parse', '--short', 'HEAD'], return_stdout=True).strip()
    for container_def in task_definition['containerDefinitions']:
        image_name = container_def['image']
        _tag_image(tag, image_name, purge)
        container_def['image'] = '{}:{}'.format(image_name, tag)

    log_info('Finding existing task definitions of family \'{}\'...'.format(
        task_family))
    existing_task_definitions = _get_ecs_output(
        ['list-task-definitions'])['taskDefinitionArns']
    for existing_task_definition in [
            td for td in existing_task_definitions if re.match(
                r'arn:aws:ecs:.+:task-definition/{}:\d+'.format(task_family),
                td)]:
        log_info('Deregistering task definition \'{}\'...'.format(
            existing_task_definition))
        _run_ecs_command([
            'deregister-task-definition', '--task-definition',
            existing_task_definition])

    with tempfile.NamedTemporaryFile(mode='wt', suffix='.json') as f:
        f.write(json.dumps(task_definition))
        f.flush()
        log_info('Registering task definition...')
        result = _get_ecs_output([
            'register-task-definition',
            '--cli-input-json', 'file://{}'.format(f.name)])

    return '{}:{}'.format(task_family, result['taskDefinition']['revision'])

def _update_service(service_fpath, task_def_name):
    with open(service_fpath, 'rt') as f:
        service_config = json.load(f)
    service_name = service_config['serviceName']
    services = _get_ecs_output(['list-services'])['serviceArns']
    for service in [s for s in services if re.match(
            r'arn:aws:ecs:.+:service/(.+/)?{}$'.format(service_name), s)]:
        log_info('Updating service with new task definition...')
        _run_ecs_command([
            'update-service', '--service', service,
            '--task-definition', task_def_name])

parser = argparse.ArgumentParser(
    description="""Deploy latest Docker image to staging server.
The task definition file is used as the task definition, whereas
the service file is used to configure the service.""")
parser.add_argument(
    'task_definition_file', help='Your task definition JSON file')
parser.add_argument('service_file', help='Your service JSON file')
parser.add_argument(
    '--purge_image', action='store_true', default=False,
    help='Purge Docker image after tagging?')
args = parser.parse_args()

task_definition_file = os.path.abspath(args.task_definition_file)
service_file = os.path.abspath(args.service_file)

task_def_name = _register_task_definition(
    task_definition_file, args.purge_image)
_update_service(service_file, task_def_name)
The shared helper module (_common.py):

import sys
import subprocess

__all__ = ['log_info', 'handle_error', 'run_command']

def log_info(msg):
    sys.stdout.write('* {}\n'.format(msg))

def handle_error(msg):
    sys.stderr.write('* {}\n'.format(msg))

def run_command(
        command, ignore_error=False, return_stdout=False, capture_stdout=True):
    if not isinstance(command, (list, tuple)):
        command = [command]
    command_str = ' '.join(command)
    log_info('Running command {}'.format(command_str))
    stdout = None
    try:
        if capture_stdout:
            stdout = subprocess.check_output(command)
        else:
            subprocess.check_call(command)
    except subprocess.CalledProcessError as err:
        if not ignore_error:
            handle_error('Command failed: {}'.format(err))
            sys.exit(1)
    return stdout.decode() if (return_stdout and stdout is not None) else None

Another option is AWS CodePipeline.

You can set ECR as a source, and ECS as a target to deploy to.

Using the AWS CLI, I tried aws ecs update-service as suggested above, but it did not pick up the latest Docker image from ECR. In the end, I reran my Ansible playbook that created the ECS cluster. The task definition version is bumped when ecs_taskdefinition runs, and then all is good: the new Docker image is picked up.

Truthfully, I'm not sure whether the task definition version change forces the redeploy, or whether the playbook's use of ecs_service causes the task to reload.

If anyone is interested, I'll get permission to publish a sanitized version of my playbook.

I am also trying to find an automated way of doing this, i.e. push the changes to ECR and have the latest tag picked up by the service. For now, you can do it manually by stopping the task for your service in your cluster; the new tasks will pull the updated ECR images.

The following commands worked for me

docker build -t <repo> . 
docker push <repo>
ecs-cli compose stop
ecs-cli compose start
