Should you use Docker for local development?
I've worked for many companies over the years as a consultant/contractor. That has let me see how teams do things differently, and the pros and cons of each approach. In this blog post, I want to share some of my observations on using Docker for local development.
Firstly, I just want to put it out there that I think Docker is great: it makes deployments much more reliable by providing a consistent environment to build and run apps. The ability to use the same environment in CI and Production is amazing, and it helps to catch issues that would otherwise be missed.
So do I think teams should use Docker for local development?
Unfortunately, no. I don't believe most teams should be using Docker for development, and those that do should move away from it as soon as possible.
1. Slow performance
When I used Docker to run an application, I never thought "oh yeah, this makes the app run much faster". Nope, it is always the opposite - a lot slower! Running a decent-sized application inside Docker is noticeably slower: on macOS and Windows, containers run inside a virtual machine with limited resources, and bind-mounted source directories in particular add file-system overhead on every read and write. I don't doubt it is nice to have someone else write all the config and scripts to spin everything up for you, while you just kick back and relax as one command brings everything online.
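For context, the "one command" convenience usually looks something like the compose file below - a minimal sketch; the service names, images and ports are my own illustrative assumptions, not something from any particular team:

```yaml
# docker-compose.yml - hypothetical example of a "one command" local setup
services:
  app:
    build: .                 # build the app from the local Dockerfile
    ports:
      - "3000:3000"          # expose the app on localhost:3000
    volumes:
      - .:/app               # bind mount - convenient, but slow on macOS/Windows
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: dev
```

Running docker compose up brings both services online in one go, which is exactly the convenience - and the hidden cost - described above.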
As developers, I think we all understand the benefits and drawbacks of too much abstraction. What makes your life easier also hides away customisation options which are useful tools in our toolkit. As a software engineer, I believe the more important thing here is to be intentional. Using something for the sake of using it isn't a good reason, and neither is using Docker when it isn't required. Why trade away performance and accept longer load times for what Docker offers?
2. Additional friction and another point of failure
I find any codebase that uses Docker a lot less intuitive; it is harder to see what's going on. It is the same as calling a custom script as part of npm start or yarn start: it is another step in the process, and that step is abstracted away, for better or for worse. By all means do it if your codebase is unique in a way that requires tens of lines of Node scripts to set up and run. But most of the time that just means something isn't right about the codebase, and I'd look to solve the root problem instead of bolting on something that only fixes the symptoms. I think the same about using Docker for local development.
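To make the npm comparison concrete, here is a sketch of the kind of hidden pre-start hook I mean - the script and file names are hypothetical:

```json
{
  "scripts": {
    "prestart": "node scripts/setup-env.js",
    "start": "node server.js"
  }
}
```

npm runs prestart automatically before start, so a newcomer typing npm start never sees the extra step unless they go and read package.json.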
Docker normally stays out of the way, but like everything else, it requires maintenance and sometimes causes problems too. I can recall times when I just wanted to run an app quickly to check something but instead had to spend time debugging Docker issues. Maybe someone changed something and you just had to pull their changes and rebuild the docker image. Or maybe it's something more complex, and now the question becomes: how far down the rabbit hole are you willing to go?
The fact that no two Dockerfiles used to spin up a basic Node app are ever the same is almost comical: somehow we all seem to want different base images, different library installations, different mount directories, and so on. I am just as guilty as any developer when it comes to this. At the same time, I'd argue we are never set up for success. How often do you find Dockerfile guideline documentation in a company? I don't believe there is even a consistent way agreed by the wider community.
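As an illustration of that inconsistency, here are two hypothetical Dockerfiles for essentially the same basic Node app - both work, and they agree on almost nothing:

```dockerfile
# Team A: slim base image, lockfile-based install
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
```

```dockerfile
# Team B: general-purpose base image, different directory, different install
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nodejs npm
COPY . /srv/app
WORKDIR /srv/app
RUN npm install
CMD ["npm", "start"]
```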
Then there is the release process for Docker images: some teams have dedicated CI, whilst others just let you push to Dockerhub (assuming you don't need to chase people for permission). I've yet to see this managed in a way that screamed consistency or good practice. All of this is often decided by one person, often new to the company, who wants to set a "standard way of working" and ends up introducing more disagreements. Normally DevOps teams create the initial Dockerfiles, then they don't have time to maintain everything, and teams ignore the Dockerfiles until something breaks.
I'm a big fan of "simple is better", or KISS for those who like acronyms. A few people having local environment issues isn't a reason to introduce a solution that makes everyone's life a bit harder. Many of these issues can be resolved by being better at pairing and writing better docs.
3. Difficult to debug
Dockerfiles are often written for the Production environment, so debugging is turned off by default. Maybe we can change the Dockerfile, build it locally, then use that image to run the app with debugging turned on? Doesn't that sound massively exhausting? But yeah, I admit that's a solution. How about situations where the repo doesn't include the Dockerfile it needs, and instead pulls Production docker images to run the app locally? It turns out that if you want to modify the docker image, you need to go on a treasure hunt and find it in another repo. I've even seen developers host the code for company docker images on their own GitHub account. Maybe these are extreme examples, but they all happened to me.
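For what it's worth, you can sometimes sidestep the rebuild by overriding the entrypoint of an existing image - a sketch, assuming a Node app; the image name (my-app) and entry file (server.js) are hypothetical:

```sh
# Run the existing image with the Node inspector enabled,
# without modifying or rebuilding the Dockerfile
docker run -it -p 9229:9229 \
  --entrypoint node \
  my-app:latest \
  --inspect=0.0.0.0:9229 server.js
```

Even this workaround means knowing the image's entry file and port layout, which rather proves the point.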
My point is that sometimes we need to run the app slightly differently to normal when debugging issues or trying to narrow down a problem. This isn't the time to be modifying Dockerfiles, rebuilding images and running new docker containers.
But how can I easily spin up the whole app without Docker?
The right answer here is: we don't. Let's skip the argument over why a monolith is bad; I assume most people agree that it should be avoided at all costs. If you are working on a tiny slice of the codebase, then the ideal scenario is to spin up only that slice (in a micro-frontend setup, for example), run it locally, and proxy everything else to your dev or staging environment, since we don't care about the rest of the application in this scenario. If there is a concern that changing this slice might break another app, then perhaps there should be more integration testing in CI, more decoupled apps, or more contract testing between apps.
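As a sketch of the "run one slice, proxy the rest" setup - using Vite's dev-server proxy purely as an example; the path and staging URL are assumptions:

```javascript
// vite.config.js - run your slice locally, proxy everything else
export default {
  server: {
    proxy: {
      // API calls from your local slice go to the shared staging environment
      '/api': {
        target: 'https://staging.example.com', // hypothetical URL
        changeOrigin: true,
      },
    },
  },
};
```

The same idea works with webpack-dev-server or any reverse proxy: your slice runs natively with fast reloads, and the rest of the system stays where it already works.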
If the feature requires working on multiple apps to deliver, I'd argue each app should be worked on one at a time, with changes deployed to Production under a feature flag or released in a way that doesn't introduce a breaking change. Why? Assuming each app has a separate CI/CD pipeline (if it doesn't then it should; it's 2021, and there are no good reasons to group apps and do big bang releases), if there are inter-dependencies, how can you guarantee the release to Production doesn't cause a problem when one app is released before another? Implementing incremental changes and then deprecating old ones is a much better approach for situations like this. Unfortunately, many teams either aren't working in codebases that allow them to do this, or the company culture is to edit multiple apps and release in a big bang fashion.
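A feature flag here can be as simple as a conditional around the new code path - a minimal sketch; the flag name, environment variable and checkout functions are all hypothetical:

```javascript
// featureFlags.js - hypothetical flag-gated release of a new code path
const flags = {
  // in real teams this usually comes from a flag service or deploy config
  'new-checkout-flow': process.env.NEW_CHECKOUT === 'true',
};

function isEnabled(flag) {
  return Boolean(flags[flag]);
}

function legacyCheckout(cart) {
  // existing behaviour stays the default
  return { total: cart.reduce((sum, item) => sum + item.price, 0) };
}

function newCheckout(cart) {
  // new behaviour ships to Production dark, enabled per environment
  return { total: cart.reduce((sum, item) => sum + item.price, 0), version: 2 };
}

function checkout(cart) {
  return isEnabled('new-checkout-flow') ? newCheckout(cart) : legacyCheckout(cart);
}
```

Each app can then be deployed independently, and flipping the flag is decoupled from the release itself - no big bang required.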
I think the goal of local development is never to code in the Production runtime environment. We already have CI and CD pipelines to detect and spot errors caused by differences in setup or environment configuration, not to mention Staging or QA environments dedicated to catching issues that are otherwise missed by automated tests. Although it sounds useful, I'm not convinced we need to require everyone to run applications through Docker.
As developers, we need and are entitled to fast feedback cycles, whether for TDD or for writing code iteratively. It is incredibly stressful to be constantly taken out of the zone because the computer has become the bottleneck and a liability. All the companies I work for say they want their developers to be happy. They understand happy developers are more productive. In my opinion, using Docker for local development isn't the path to developer happiness.
Just a short disclaimer: I'm not saying never use Docker for local development. If there are serious benefits, and everything has been weighed and Docker comes out on top, then sure, go for it! I just believe it shouldn't be used unless there are really good reasons, and it definitely shouldn't be the default option for running applications locally.