How to mount local volumes in docker machine

I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:

  build: .
  command: ./
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis

When running docker-compose up -d, everything goes well until it tries to execute the command, and this error is produced:

Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./": stat ./ no such file or directory

Local volumes are not mounted on the remote machine. What's the recommended strategy for mounting the local volumes that contain the web app's code?


I also ran into this issue, and it looks like local volumes are not mounted when using docker-machine. A hack solution is to:

  1. get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd

  2. use a command line tool like rsync to copy the folder to the remote system

    rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>

The default pwd is /root, so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root

NB: you will need to supply the password for the remote system. You can quickly create one by sshing into the remote system and setting a password.

  3. change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app

  4. run docker-compose up -d

NB: when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
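Under these assumptions (machine name, folder, and user below are placeholders), the push step can be sketched as a small shell function:

```shell
# Sketch of steps 1-2 above as a reusable function (all names illustrative).
# rsync will prompt for the remote system's password.
push_to_machine() {  # usage: push_to_machine <machine> <local_folder> <remote_user>
  remote_ip=$(docker-machine ip "$1")
  remote_pwd=$(docker-machine ssh "$1" pwd)  # usually /root
  rsync -avzhe ssh --progress "$2" "$3@$remote_ip:$remote_pwd"
}

# Example (rerun after local changes, as noted above):
# push_to_machine default app root
```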

It's not perfect, but it works. An issue tracking this is ongoing.

Other projects that attempt to solve this include docker-rsync.

Docker-machine automounts the user's directory... but sometimes that just isn't enough.

I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine.

Add Virtual Machine Mount Point (part 1)

CLI: (only works when the machine is stopped)

VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount

So an example in windows would be

/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount

GUI: (does NOT require the machine be stopped)

  1. Start "Oracle VM VirtualBox Manager"
  2. Right-Click <machine name> (default)
  3. Settings...
  4. Shared Folders
  5. The Folder+ Icon on the Right (Add Share)
  6. Folder Path: <host dir> (e:)
  7. Folder Name: <mount name> (e)
  8. Check on "Auto-mount" and "Make Permanent" (Read only if you want...) (The auto-mount is sort of pointless currently...)

Mounting in boot2docker (part 2)

Manually mount in boot2docker:

  1. There are various ways to log in: use "Show" in "Oracle VM VirtualBox Manager", or ssh/putty into the machine at the IP address given by docker-machine ip default, etc...
  2. sudo mkdir -p <local_dir>
  3. sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>

But this is only good until you restart the machine, and then the mount is lost...

Adding an automount to boot2docker:

While logged into the machine

  1. Edit/create (as root) /mnt/sda1/var/lib/boot2docker/ (sda1 may be different for you...)
  2. Add

    mkdir -p <local_dir>
    mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>

With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.

Old method: Less recommended, but left as an alternative

  • Edit (as root) /mnt/sda1/var/lib/boot2docker/profile, sda1 may be different for you...
  • Add

    add_mount() {
      if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
        echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
      fi
    }

    add_mount <local_dir> <mount_name>

As a last resort, you can take the slightly more tedious alternative and just modify the boot image.

  • git -c core.autocrlf=false clone
  • cd boot2docker
  • git -c core.autocrlf=false checkout v1.8.1 #or your appropriate version
  • Edit rootfs/etc/rc.d/automount-shares
  • Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example

    try_mount_share /e e

    Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc...

  • docker build -t boot2docker . #This will take about an hour the first time :(
  • docker run --rm boot2docker > boot2docker.iso
  • Backup the old boot2docker.iso and copy your new one in its place, in ~/.docker/machine/machines/

This does work; it's just long and complicated.

docker version 1.8.1, docker-machine version 0.4.0

At the moment I can't really see any way to mount volumes on machines, so for now the approach would be to somehow copy or sync the files you need into the machine.

There are conversations on how to solve this issue on the docker-machine's github repo. Someone made a pull request implementing scp on docker-machine and it's already merged on master, so it's very likely that the next release will include it.

Since it's not yet released, for now I would recommend that, if your code is hosted on GitHub, you just clone your repo before you run the app:

  build: .
  command: git clone; ./repo/
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis

Update: Looking further, I found that the feature is already available in the latest binaries; when you get them, you'll be able to copy your local project by running a command like this:

docker-machine scp -r . dev:/home/docker/project

This being the general form:

docker-machine scp [machine:][path] [machine:][path]

So you can copy files from, to and between machines.


If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename> command like this:

rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_to> :<host_directory_to_sync_to>

It uses rsync's remote-shell command format with HOST left blank, which is why the remote path is written with a bare leading colon.


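A minimal wrapper around this invocation (the function name and paths are my own):

```shell
# Wrap the rsync-over-docker-machine-ssh call shown above.
# HOST is left blank, so the remote path is written as ":<path>".
dm_rsync() {  # usage: dm_rsync <machinename> <local_dir> <remote_dir>
  rsync -rvz --rsh="docker-machine ssh $1" --progress "$2" ":$3"
}

# Example:
# dm_rsync default ./app /home/docker/app
```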

Since October 2017 there is a new docker-machine command that does the trick, but make sure the host directory is empty before executing it, otherwise its contents might get lost:

docker-machine mount <machine-name>:<guest-path> <host-path>
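For example (the machine name and paths here are hypothetical; docker-machine mount relies on sshfs being installed on the host):

```shell
# Mount a directory from the VM into an empty local directory.
mount_project() {  # usage: mount_project <machine> <guest_path> <host_path>
  mkdir -p "$3" &&
  docker-machine mount "$1:$2" "$3"
}

# Example:
# mount_project dev /home/docker/project ./project
# ... work on the files ...
# docker-machine mount -u dev:/home/docker/project ./project   # unmount
```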

Check the docs and the PR that introduced the change for more information.

I assume the file is in the same directory as your docker-compose.yml file, so the command should be command: /app/

Unless the Dockerfile (that you are not disclosing) takes care of putting the file into the Docker image.

To summarize the posts here, attached is an updated script that creates an additional host mount point and automounts it when VirtualBox restarts. The working environment, in brief:

  • Windows 7
  • docker-machine.exe version 0.7.0
  • VirtualBox 5.0.22

    #!/usr/bin/env bash

    : ${NAME:=default}
    : ${SHARE:=c/Proj}
    : ${MOUNT:=/c/Proj}
    : ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
    : ${SCRIPT:=/mnt/sda1/var/lib/boot2docker/}  # boot-time script inside the VM

    ## set -x
    docker-machine stop $NAME
    "$VBOXMGR" sharedfolder add $NAME --name "$SHARE" --hostpath 'c:\' --automount 2>/dev/null || :
    docker-machine start $NAME
    docker-machine env $NAME

    # Double quotes so $MOUNT, $SHARE and $SCRIPT expand locally before ssh runs.
    docker-machine ssh $NAME "echo 'mkdir -p $MOUNT' | sudo tee $SCRIPT"
    docker-machine ssh $NAME "echo 'sudo mount -t vboxsf -o rw,user $SHARE $MOUNT' | sudo tee -a $SCRIPT"
    docker-machine ssh $NAME "sudo chmod +x $SCRIPT"
    docker-machine ssh $NAME "sudo $SCRIPT"
    #docker-machine ssh $NAME "ls $MOUNT"

I finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working, by adding a shared folder in Oracle VM VirtualBox Manager and disabling path conversion. If you have Windows 10+, you're better off using the newer Docker for Windows.

First, the upgrade pain:

  1. Uninstall VirtualBox first.
    • Yep that may break stuff in other tools like Android Studio. Thanks Docker :(
  2. Install new version of Docker Toolbox.

Redis database example:

    redis:
      image: redis:alpine
      container_name: redis
      ports:
        - "6379"
      volumes:
        - "/var/db/redis:/data:rw"

In Docker Quickstart Terminal ....

  1. run docker-machine stop default - ensure the VM is halted

In Oracle VM VirtualBox Manager ...

  1. Add a shared folder to the default VM via the GUI or the command line:
    • D:\Projects\MyProject\db => /var/db

In docker-compose.yml...

  1. Map the redis volume as: "/var/db/redis:/data:rw"

In Docker Quickstart Terminal ....

  1. Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
  2. run docker-machine start default to restart the VM.
  3. cd D:\Projects\MyProject\
  4. docker-compose up should work now.

The redis database is now created in D:\Projects\MyProject\db\redis\dump.rdb.

Why avoid relative host paths?

I avoided relative host paths for Windows Toolbox as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml but at least my fellow developers can easily do it even if their project folder is elsewhere without having to hack the docker-compose.yml file (bad for SCM).

Original Issue

FYI ... Here is the original error I got when I used nice clean relative paths that used to work just fine for older versions. My volume mapping used to be just "./db/redis:/data:rw"

ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data

This breaks for two reasons ..

  1. It can't access D: drive
  2. Volume paths can't include \ characters
    • docker-compose adds them and then blames you for it!!
    • Use COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.

I recommend documenting your additional VM shared folder mapping in your docker-compose.yml file, as you may need to uninstall VirtualBox again and reset the shared folder; and anyway, your fellow devs will love you for it.
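For instance, a short comment block in docker-compose.yml (using this answer's example paths) records the VM share that the volume depends on:

```yaml
# Requires a VirtualBox shared folder on the 'default' VM:
#   D:\Projects\MyProject\db  =>  /var/db
# and COMPOSE_CONVERT_WINDOWS_PATHS=0 in the environment.
redis:
  image: redis:alpine
  volumes:
    - "/var/db/redis:/data:rw"
```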

I am using docker-machine 0.12.2 with the virtualbox driver on my local machine. I found that there is a directory /hosthome/<username> from which you have access to local files.

Just thought I'd mention that I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, once you change your password Docker will start mounting the volumes within containers as blank.

It gives no indication that what is actually happening is that it is failing to access the share with the old cached credentials. The solution in this scenario is to reset the credentials, either through the UI (Settings -> Shared Drives) or by disabling and then re-enabling drive sharing and entering the new password.

It would be useful if docker-compose gave an error in these situations.
