Every developer I know has been through the same cycle. You start a new project, and you need a database. So you download PostgreSQL or MySQL, go through configuration, and forget the password you set five minutes later. A few months down the line, you start another project that needs a different version of that database, or perhaps it needs Redis, and suddenly your host machine is a graveyard of services you installed once and forgot about.
There are many ways to solve this problem (way too many to be honest), but for a personal local dev environment, the simplest and most maintainable solution is to put everything in a single Docker Compose file.
The services most development environments need
When I sit down to work on a web application, there are a few services I almost always need. Most developers rely on a similar set of tools, even if the exact requirements vary from project to project. I need a database to store data, usually PostgreSQL. I often need a key-value store like Redis for caching or session management. If I am working on anything involving background jobs or event-driven architecture, I need a message broker like RabbitMQ. And for testing email functionality during development, I want something like MailHog, which catches outgoing emails and shows them to me in a web interface instead of actually sending them.
We are going to build a Docker Compose file that runs all of these services together. This file will become your portable development toolbox. You can copy it from project to project, tweak the versions, and have a consistent environment every time.
First, create a new directory for your project and inside that directory, create a file named docker-compose.yml. This is the standard name, and that is what the docker compose command looks for by default.
1. Starting with the database
Let us begin with PostgreSQL. Here is a basic service definition:
services:
  postgres:
    image: postgres:18.2
    container_name: local_postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: myapp_dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
There is a lot going on in this small block, so let me explain each line. The services block is where you list all the containers you want to run. Each service has a name, in this case, postgres.
The image line tells Docker which image to pull from Docker Hub. I have pinned it to postgres:18.2. If you just use postgres without a tag, you get the latest tag, which means your environment could change unexpectedly when a new major version of PostgreSQL is released (it often breaks things).
The container_name is optional but helpful. It sets a fixed name for the container. Without it, Docker Compose generates a name from the project directory, the service name, and an index, something like myproject-postgres-1. Having a predictable name makes it easier to run docker commands against that specific container if you need to.
The ports line maps a port on your host machine to a port inside the container. The format is “host:container”. PostgreSQL listens on port 5432 inside the container by default. By mapping it to the same port on your host, you can connect to the database using localhost:5432 from any tool running on your laptop, just as if you had installed PostgreSQL natively.
The environment section sets environment variables inside the container. The PostgreSQL image looks for these specific variables to configure the default user, password, and database. I have used simple values here just for the demo.
The volumes section is critical because containers are ephemeral by default. If you delete the container, your data also gets deleted (forever, no trash or recycle bin). The postgres_data:/var/lib/postgresql/data line creates a named volume called postgres_data and mounts it at the path inside the container where PostgreSQL stores its data files. This means your database survives container restarts and even container deletion. We will define the volume at the bottom of the file.
Finally, restart: unless-stopped tells Docker to automatically restart the container if it crashes, but not to restart it if you manually stop it.
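With the container running, you can verify the connection right away. If you do not have a psql client installed on your host, you can run the one that ships inside the container; here is a quick sketch (myuser and myapp_dev are the demo values from the compose file above):

```shell
# Run psql inside the container and confirm the server answers.
docker compose exec postgres psql -U myuser -d myapp_dev -c "SELECT version();"

# Or, with a native psql client on the host, connect through the mapped port:
PGPASSWORD=mypassword psql -h localhost -p 5432 -U myuser -d myapp_dev -c "SELECT 1;"
```

Any GUI tool (pgAdmin, DBeaver, TablePlus) works the same way: point it at localhost:5432 with the credentials from the environment section.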
2. Adding Redis
Next, let us add Redis. Redis is simple compared to PostgreSQL: it has no complex user system and needs no environment variables for setup. The one extra here is the command line, which overrides the image's default startup command to pass redis-server the --appendonly yes flag, enabling append-only-file persistence so your data survives restarts.
  redis:
    image: redis:8.6.1
    container_name: local_redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    restart: unless-stopped
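A quick sanity check, assuming the stack is up: redis-cli ships inside the image, so you can talk to the server without installing anything on your host.

```shell
# Ping the Redis server from inside the container; a healthy instance replies PONG.
docker compose exec redis redis-cli ping

# Confirm that append-only persistence is on (set via the command line above).
docker compose exec redis redis-cli config get appendonly
```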
3. Integrating RabbitMQ
RabbitMQ is a bit more involved because it has a management UI. We want access to that UI in our browser.
  rabbitmq:
    image: rabbitmq:4.2-management-alpine
    container_name: local_rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    restart: unless-stopped
We map two ports here. Port 5672 is the main AMQP port that your application will use to connect to RabbitMQ for sending and receiving messages. Port 15672 is the HTTP port for the web management interface. Open your browser to http://localhost:15672 and log in with the default credentials guest/guest to see queues, exchanges, and messages.
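Besides the browser UI, the management plugin exposes an HTTP API on that same port. Assuming curl is available and the stack is running, this is a quick way to confirm the broker is healthy from a terminal:

```shell
# Query the management API with the default credentials;
# returns a JSON overview of the broker if it is up.
curl -s -u guest:guest http://localhost:15672/api/overview

# List the queues (an empty array until your application declares some).
curl -s -u guest:guest http://localhost:15672/api/queues
```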
4. Catching emails with MailHog
MailHog is a tool that runs a fake SMTP server and captures any messages sent to it. It provides a web interface to view those captured emails. It is invaluable for testing registration flows, password resets, and notification emails without accidentally spamming real users.
  mailhog:
    image: minidocks/mailhog
    container_name: local_mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    restart: unless-stopped
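To see MailHog catch a message, you can hand it one over SMTP. curl speaks SMTP natively, so no mail library is needed; the addresses below are made up purely for the test:

```shell
# Write a minimal RFC 822 message and send it to the fake SMTP server on port 1025.
printf 'Subject: Hello from the toolbox\n\nIt works.\n' > /tmp/test-mail.txt
curl --url smtp://localhost:1025 \
  --mail-from dev@example.test \
  --mail-rcpt inbox@example.test \
  --upload-file /tmp/test-mail.txt

# The message now shows up in the web UI at http://localhost:8025,
# or via MailHog's HTTP API:
curl -s http://localhost:8025/api/v2/messages
```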
At the bottom of your docker-compose.yml file, you need to declare the named volumes you used:
volumes:
  postgres_data:
  redis_data:
  rabbitmq_data:
This tells Docker to create and manage these volumes. Docker stores the actual data somewhere on your host file system, but you do not need to know where.
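If you are curious where that data actually lives, Docker will tell you. These commands are read-only and safe to run; note that Compose prefixes volume names with the project (directory) name, so substitute your own prefix:

```shell
# List all volumes; Compose-managed ones look like myproject_postgres_data.
docker volume ls

# Show the host-side mountpoint and other details for one volume.
docker volume inspect myproject_postgres_data
```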
5. Putting it all together
# Local Dev Toolbox
services:
  postgres:
    image: postgres:18.2
    container_name: local_postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: myapp_dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:8.6.1
    container_name: local_redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    restart: unless-stopped

  rabbitmq:
    image: rabbitmq:4.2-management-alpine
    container_name: local_rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    restart: unless-stopped

  mailhog:
    image: minidocks/mailhog
    container_name: local_mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
  rabbitmq_data:
Save that file in your project directory.
Running your toolbox
1. Open a terminal in the directory containing your docker-compose.yml file. To start everything, run:
docker compose up -d
The -d flag runs the containers in detached mode, meaning they run in the background, and you get your terminal prompt back.
2. To see what is running, use:
docker compose ps
This shows you the status of each container, the ports they are mapped to, and how long they have been running.
3. When you are done working for the day, you can stop the services without destroying them:
docker compose stop
This stops the containers but leaves them and their data volumes intact. You can start them again later with docker compose start or docker compose up.
4. If you want to completely remove the containers but keep your data volumes, use:
docker compose down
5. To remove everything, including the volumes and all the data inside them, add the -v flag:
docker compose down -v
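One more command worth knowing: when a service misbehaves, its logs usually say why, and Compose can show them per service or for the whole stack.

```shell
# Follow the logs of a single service (Ctrl+C to stop following).
docker compose logs -f postgres

# Or view the last 50 lines from every service at once.
docker compose logs --tail=50
```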
Alternatives to Docker Compose
Installing services directly on your system
Is a Compose file the only way to build a local dev toolbox? Of course not. Just as there are countless Linux distributions, there are many ways to build a dev stack.
The most traditional approach is to install everything directly on your operating system. You install PostgreSQL from your package manager, enable the service, and move on. In a broader sense, even installing something like build-essential on a Debian-based system is already the beginning of a development stack. You are pulling compilers, headers, and core tooling into your base OS so that software can be compiled and run locally.
This works well for simple cases, but things get messed up when projects start to diverge. If one application requires PostgreSQL 18 and another depends on PostgreSQL 16, you are forced to juggle multiple instances on different ports or reinstall different versions. Managing them as native system services becomes cumbersome. Removing them later is rarely as clean as expected, and configuration files tend to remain scattered across the system.
Version managers such as asdf
Tools like asdf are great for managing language runtimes and make switching between different versions of Python, Node.js, or Ruby very smooth. For application code, they solve a real problem.
However, databases, message brokers, and mail servers do not fit neatly into that model. They are not lightweight runtime binaries you swap with a shim. They require persistent data directories, background processes, and system resources. You could compile them manually, but that defeats the goal of having a quick and reproducible setup.
Automation tools like Ansible and Taskfiles
Another option is automation frameworks and task runners. Tools such as Ansible are powerful for provisioning servers, and task runners like Task let you orchestrate complex command sequences.
For a local development machine, though, writing playbooks that ensure PostgreSQL, Redis, and RabbitMQ are installed and properly configured can feel disproportionate to the problem.
Using Docker run directly
The closest technical alternative to Compose is using the raw docker run command. For a single service, this can be clean and efficient. One command starts a container, maps a port, and you are finished.
Complexity increases quickly as more services are added. Port mappings, named volumes for persistent data, environment variables for credentials, restart policies, and custom networks must all be specified manually. Once services need to communicate with one another, remembering every flag becomes error-prone. Let’s be honest, no one can remember those long commands!
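To make that concrete, here is roughly what just the PostgreSQL service from our Compose file looks like as a raw docker run invocation (the devnet network is an extra step Compose would otherwise handle for you):

```shell
# Create a user-defined network so other containers can reach this one by name.
docker network create devnet

# The docker run equivalent of the postgres service defined earlier.
docker run -d \
  --name local_postgres \
  --network devnet \
  -p 5432:5432 \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=myapp_dev \
  -v postgres_data:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:18.2
```

Multiply that by four services and the appeal of a single declarative file becomes obvious.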
After three or four containers, many developers end up copying long commands from a document or wrapping them in a shell script (been there, done that). At that stage, the script resembles a handwritten and less readable version of a docker-compose.yml file.
Why Compose remains practical
Docker Compose converts those long, imperative docker run invocations into a structured, declarative configuration. All services are defined in a single file and networking between containers is created automatically. Networking, even in Compose’s case, can become complex, but that’s a story for another day, because for the most part, you won’t deal with it.
The real advantage of Compose is not that it is the only tool for the job, but the clarity and reproducibility it brings. For most local development stacks, a Compose file strikes a practical balance between simplicity and control, without permanently shaping your base operating system around every project you experiment with.
Why containerized dev environments save so much time
Using a Docker Compose file to create a local dev box saves the time you would otherwise spend installing, configuring, and troubleshooting native services. When a project is finished, you can delete the project directory, bring the containers down, and there is no lingering trace left on the system. I am not suggesting that Compose is objectively the best option in all cases. However, in terms of ease of setup and time efficiency, it is hard to beat for developers who are just starting to organize their local environments. Preferences tend to evolve as experience grows and requirements become more specialized.
Over time, my own workflow has shifted. I have gradually moved toward tools like asdf and Podman Pods for certain projects, particularly where I prefer a daemonless and more tightly integrated experience with the host system. The core idea, however, remains the same: group related services, isolate them from the base operating system, and make the entire stack reproducible.
