DockerCon 2016: Docker nails iterative open source development

Yesterday DockerCon 2016 kicked off in rainy Seattle with a brilliant keynote led by CEO Ben Golub and CTO Solomon Hykes. Hykes talked about how Docker's goal is to build tools of mass innovation and to remove as much friction as possible from the development workflow. One such example is Docker for Mac and Windows. Aanand Prasad came on stage as a "first day on the job" developer to demonstrate how Docker for Mac could get him up and running, debugging and committing code, just ten minutes into the new job.

As a beta user of Docker for Mac, I am really happy with how simple Docker has made this use case. By bringing native (or at least near-native) Docker to the Mac, I have been able to drop the Vagrant machine I had been using to run Docker, saving me a lot of overhead. Docker for Mac also lets me run apps and throw them away when I am done, which is great for my laptop. No more cluttering a single MySQL instance with different databases; I just run each database in its own container.
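As a rough sketch of that throwaway workflow (the container name, password and image tag below are my own placeholders, not anything from the keynote):

```
# Start a disposable MySQL container for one project.
docker run -d \
  --name project-a-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -p 3306:3306 \
  mysql:5.7

# When the work is done, remove the container and its data volume with it.
docker rm -fv project-a-db
```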

The big "wow" experience came with the introduction of the new built-in orchestration. It is now possible to add hosts to a cluster and deploy containers to this cluster - with as little as 3 CLI commands, with just the Docker-Engine installed. Switch out the familiar run command with service and you are all set. This cluster is secure, self-healing and load balanced out of the box. This is in many ways what Docker Swarm should have been from the start. The new orchestration tool is also a great example of the mantra to remove friction from the development process. Docker has become really good at fixing the really hard parts and hiding them behind a simple CLI.

What the keynote shows is how good Docker is at doing iterative open source development out in the open. When Docker was first released, it was a much simpler way to use namespaces and cgroups to isolate processes from each other - great for testing and Linux-based development, making it possible to set up reproducible applications. With the containers (or building blocks) came the need for orchestration: what use is a hundred containerized apps if they are a pain to orchestrate? Since Docker has been open source from day one, many of these problems have been addressed by others. The Fig project (now Docker Compose) solved orchestration at the development level. Docker acquired the team behind Fig and put them in charge of developing Docker Compose.
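To make that concrete, here is a minimal sketch of the Compose workflow; the service names, images and port are my own placeholders:

```
# A hypothetical two-service application described in one docker-compose.yml.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
EOF

docker-compose up -d   # start the whole application with one command
docker-compose down    # and tear it all down again
```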

Next up came the split of the Docker application into the Docker Engine, the Client and the Machine. The Engine is the part that runs Docker, and the client talks to it via the CLI. Docker Machine lets you install the Docker Engine on multiple hosts and control them with the client, making deployment on remote hosts much easier. The final product to come out of last year's DockerCon was Docker Swarm, which enables multiple Docker Machines to function together as a cluster.
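A rough sketch of that Machine workflow (the machine name and the virtualbox driver are placeholders for whatever infrastructure you use):

```
# Provision a new host with the Docker Engine installed on it.
docker-machine create --driver virtualbox node1

# Point the local client at the Engine running on that host.
eval $(docker-machine env node1)

# From here, ordinary docker commands run against the remote host.
docker run -d -p 80:80 nginx
```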

These products are the results of Docker's iteration process. By attacking different challenges one at a time, Docker gives us tools that are adapted to specific needs. Those tools can then be built upon in the next iteration, by another group of people, to tackle the next challenge. There would not have been a Docker Engine or Docker Compose without Docker itself. Docker Machine could not have been made without the Docker Engine. No Machine, no Swarm. Without Swarm? Certainly no Docker Cloud.