Volumes and Local Development
Streamline local development workflows with bind mounts.
With the concepts we've explored so far, making code changes to a containerized service running in Docker would be incredibly inefficient. To test every code change, a developer would have to:
- Make the code change
- Stop the container (`docker rm -f ...`)
- Rebuild the image (`docker build ...`)
- Run the container (`docker run ...`)
In particular, rebuilding an image with `docker build` takes too long. We're going to explore how to make things much more efficient.
A Docker container has no built-in persistence. Beyond what is baked into an image during the build phase, all other data is discarded when a container is destroyed. Without volumes, running databases or other applications that require state would be impossible.
Volumes provide a persistent storage mechanism that can be managed through Docker's CLI. They can be shared between multiple containers and can be stored on remote hosts or cloud providers.
Consider the following example:
```shell
docker volume create my-vol
docker run -v my-vol:/data my-image
```
The above commands create a volume called `my-vol`, which is mounted to the `/data` directory when the container is created. As the application runs, the contents of `/data` are persisted in the volume, so when the container is destroyed, the data is not lost.
Bind mounts are similar to volumes but offer more limited functionality. Instead of creating a named volume, the contents of a directory on the host machine are mounted to a directory in a container.
For example, the following command will overwrite the contents of the container's `/data` directory with the contents of `/Users/connor/data`. While the container is running, the contents are mirrored between the two locations, regardless of whether a change originates from the container or the host machine.
```shell
docker run -v /Users/connor/data:/data my-image
```
The only catch is that the host machine directory must be an absolute path.
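A common way to satisfy the absolute-path requirement without hardcoding the host directory is to build the path from `$PWD`, which always expands to the shell's current working directory as an absolute path. A small sketch (`my-image` is a placeholder image name):

```shell
# $PWD is always absolute, so this path is valid for a bind mount
# no matter where the project happens to be checked out.
echo "$PWD/data"

# The equivalent mount flag would look like:
# docker run -v "$PWD/data":/data my-image
```

Quoting `"$PWD/data"` also keeps the command working when the path contains spaces.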
The most common use case for bind mounts is in local development environments. Instead of rebuilding an image on every code change, we can mount our application's `src/` directory to the `/app/src` directory of our container.
Open a terminal at the root of your application and run:

```shell
docker run --name my-container -p 8000:8000 -d -v $PWD/src:/app/src my-node-app:latest
```
Verify it's working by going to `localhost:8000` in your browser and noting the response. Then open `src/index.js`, change the response text on line 6, save the file, and refresh your browser. The page should now show the updated text.
What's happening is that the nodemon process is watching the `src/` directory in the container for changes. Because our host machine's `src/` directory is mounted to the container's `/app/src` directory (`-v $PWD/src:/app/src`), when we save the change to `index.js`, it is replicated in the container, causing nodemon to restart the server. We can see this in the container logs:
```
$ docker logs my-container

> email@example.com start
> nodemon src/index.js

[nodemon] 2.0.6
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node src/index.js`
Example app listening at http://localhost:8000
[nodemon] restarting due to changes...
[nodemon] starting `node src/index.js`
Example app listening at http://localhost:8000
```
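For reference, the `npm start` script shown in the log maps to nodemon through the project's package.json. A minimal sketch of that wiring (the package name and version numbers here are illustrative assumptions, not taken from the project above):

```json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "scripts": {
    "start": "nodemon src/index.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.6"
  }
}
```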
This approach means we only need to build the image once; after that, code changes reach the running container through the bind mount.