Soon after its initial 2013 release, Docker became a frequent topic in my client discussions. By 2016, even the largest enterprises were exploring Docker, no longer just younger companies like Yelp. Mature enterprises such as Verizon have publicized their work with Docker. What do enterprise executives see in Docker? Some believe Docker could usher in an era of harmony between application developers and IT operations. Others believe Docker will help lower costs, avoid vendor lock-in, and enable their hybrid cloud strategies. Can Docker really help an enterprise achieve all this today?
Adoption Begins with Three Key Developer Benefits
Docker’s momentum did not start with executive initiatives but rather with grass-roots developer adoption. Why? Docker makes it easier for developers to do their jobs. Before Docker, developers had to troubleshoot defects caused by subtle differences in the configuration of different environments, e.g. dev, QA, staging and production. Docker addresses the problem of inconsistent environments. Furthermore, Docker addresses a second major developer challenge: long provisioning times. A third benefit comes as a result of the Dockerfile. Docker simplifies the working relationship between developers and IT operations by packaging the application and its runtime environment together into one image, defined by a Dockerfile. By doing so, Docker clarifies responsibilities and improves communication between developers and IT operations.
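As a sketch of that third benefit, a Dockerfile for a hypothetical Python web service might look like the following (the file names and start command are illustrative, not from any real project):

```dockerfile
# Base image pins the runtime that previously varied across dev, QA, and production
FROM python:3-slim

# Install the application's dependencies into the image
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Package the application code itself
COPY . /app/
WORKDIR /app

# Document the port and start command for IT operations
EXPOSE 8000
CMD ["python", "app.py"]
```

Everything IT operations needs to run the service is declared in one place, and the image built from this file behaves identically in every environment.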
How and When Should Enterprises Adopt Containers?
Many enterprises will adopt Docker or similar container technology. Sure, there are objections to containers, especially about security. We can, however, expect the fast-growing ecosystem of new vendors and open source projects to solve today’s issues. The more important questions are how and when enterprises should adopt containers.
How will enterprises adopt Docker? Will they embrace Docker for container-optimized applications? Or will they create long-running containers for legacy applications? Perhaps we can learn from enterprise exploration of OpenStack. While working on OpenStack, I observed that most enterprises wanted to run legacy applications. They also asked for virtualization capabilities that VMware provides, such as fault tolerance and live migration. The original intent of OpenStack, however, was to support cloud native applications, not legacy ones.
Likewise, the original intent of Docker is to support cloud native applications. Two differences between container-optimized applications and their virtualized counterparts are the size and life span of the components. Many Docker experts will say that containers are best for short-lived, small microservices.
Thought leaders like Martin Fowler believe that servers should be immutable. Fowler uses a snowflake-versus-phoenix analogy to explain the difference between the servers we typically create today and immutable ones. We upgrade and reconfigure “snowflake servers” in place, causing them to become unique and fragile. “Phoenix servers”, as in the X-Men comics, are destroyed and reborn in their updated form. In the “phoenix” or immutable server model, we destroy and redeploy servers each time the software changes.
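In Docker terms, the phoenix model is a rebuild-and-replace cycle rather than an in-place upgrade. A minimal sketch, assuming an image and container both named `myapp` (illustrative names, and the commands require a running Docker host):

```shell
# Build a fresh image from the updated source; never patch a running container
docker build -t myapp:v2 .

# Destroy the old "phoenix"...
docker stop myapp && docker rm myapp

# ...and let it be reborn from the new, immutable image
docker run -d --name myapp myapp:v2
```

Because the container is always recreated from the image, no snowflake-style drift can accumulate between deployments.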
What do you think? Will enterprises use containers for short-lived, immutable services? Or will enterprises use containers for long-running legacy applications? Here are some opinions.
Docker in Production: Lessons From the Trenches – Heroku-style 12 factor apps are the easiest to Dockerize since they do not maintain state. In an ideal microservices environment, containers can start and stop within milliseconds without impacting the health of the cluster or state of the application.
Contain yourself: The layman’s guide to Docker – But Docker isn’t for every application — some have too many dependencies or too many complexities to be neatly packaged up — especially legacy apps. Newer apps, designed in the first place to be run at web scale, tend to fare better: WordPress, MySQL, Redis, and Nginx are among the most popular images at the central repository hub.
12 Fractured Apps – What most people end up doing is treating Docker containers like VMs, resulting in 2 GB container images built on top of full-blown Linux distros.
Enterprises also want a way to easily move applications between multiple cloud services. Can they do that with Docker? For simple one-host deployments of stateless applications, portability seems to work well. There are, however, challenges to be addressed such as networking and storage. See these comments about vendors who are working on improving portability.
Cliqr: Portable and Secure Containerized Applications… – Container networking, security, and portability are simple enough when all services are within one container or when all containers are on the same host. Docker containers are designed to be portable but have some limiting constraints that require a rare type of application where all services live within just one container and/or all of the containers reside on a single host. However, spanning application communication across containers on different hosts is where networking challenges start to arise.
Flocker, A Nascent Docker Service for Making Containers Portable, Data and All – The developers of Flocker have started building a service that tackles the complex problem of making containers truly portable, data and all. Moving containers is one thing. But moving the containers with the data is a different matter. Flocker is ClusterHQ’s attempt to make Docker a production ready service for an entirely different class of apps that contain multiple types of images.
A guide to Docker container networking – Containers hosted on the same physical server can interact with one another and share data. But Docker developers didn’t initially build in the ability to migrate a container from one host to another, or connect one container to another on a different host. The networking issues led Docker in March 2015 to buy startup SocketPlane…
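After the SocketPlane acquisition, Docker 1.9 shipped multi-host networking built on an overlay driver. A sketch of connecting containers across hosts (this assumes a Docker 1.9+ cluster already configured with a key-value store, so the commands will not run standalone):

```shell
# Create an overlay network that spans every host in the cluster
docker network create --driver overlay mynet

# Containers started on different hosts can now reach each other by name
docker run -d --name web --net mynet nginx
docker run -d --name db  --net mynet redis
```

This addresses the cross-host connectivity gap the excerpts above describe, though storage portability remains a separate problem.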
One way to assess the maturity of Docker technology is to look at how vendors and open-source projects are working to improve Docker. Companies like CoreOS, Mesosphere and ClusterHQ are adding capabilities like container orchestration and support for stateful applications. Docker, Inc. is also making rapid and significant changes. They announced major networking changes with version 1.9 in November 2015. Suffice it to say, there is a lot of change in this space. Here are some highlights.
Scheduling, Cluster Management and Orchestration
- CoreOS Fleet: scheduler and cluster management tool
- Mesosphere Marathon: scheduler and service management tool
- Docker Swarm: scheduler and service management tool
- Apache Mesos: host abstraction service that consolidates host resources for the scheduler
- Kubernetes: advanced scheduler capable of managing container groups
- Docker Compose: container orchestration tool for creating container groups
- HashiCorp Nomad: general-purpose scheduler and service management tool
- Flynn: scheduler and service management tool
Storage
- ClusterHQ Flocker: data volume manager for managing stateful services
Networking
- CoreOS Flannel: overlay network providing each host with a separate subnet
- Weave: overlay network presenting all containers on a single network
- Pipework: networking toolkit for arbitrarily advanced configurations
Service Discovery
- CoreOS etcd: service discovery / globally distributed key-value store
- Consul: service discovery / globally distributed key-value store
- Apache ZooKeeper: service discovery / globally distributed key-value store
- Crypt: project to encrypt etcd entries
- confd: watches a key-value store for changes and triggers reconfiguration of services with the new values
- SmartStack: service discovery and registration
Monitoring
- Sensu: monitoring framework
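As an example of the container-group tooling listed above, a hypothetical docker-compose.yml describing a small web-plus-cache group might look like this (the service names and images are illustrative):

```yaml
version: '2'
services:
  web:
    image: nginx        # stateless front end, cheap to destroy and recreate
    ports:
      - "80:80"
  cache:
    image: redis        # backing service; durable state may need a volume manager like Flocker
```

A single `docker-compose up` then starts the whole group, which is what makes Compose an orchestration tool rather than just a launcher for individual containers.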
Conclusion: Comparing the Adoption of Docker and OpenStack
In late 2013, a small company that I had met with became intrigued by another hot open-source project, OpenStack. The company’s developers, however, did not find use cases for OpenStack. By early 2014, they had gravitated to Docker and started using it. In 2016, many enterprises have worked on OpenStack projects, but few have fully adopted it. Will they now shift their focus to Docker?
Both Docker and OpenStack may succeed, and they can certainly be integrated into a single cloud service. Both projects help developers: OpenStack allows IT operations to provide better service to developers, while Docker allows developers to improve their own experience. Docker will likely be adopted more rapidly because its value is more direct. Docker, however, still has a lot of room to grow. It will be interesting to see whether enterprises push the Docker ecosystem to add capabilities for legacy applications, or whether they will use Docker only for new services and re-factored legacy applications.