
Containerize a nodejs app and nginx with docker on Ubuntu 22.04
What we need:
- docker: Docker is a tool for building, running, and managing containers. A container is a lightweight, isolated environment that packages an application and all its dependencies.
- docker-compose: Docker Compose is a tool for defining and running multi-container Docker applications using a single file called docker-compose.yml.
- node.js: Node.js is a JavaScript runtime that lets you run JavaScript outside the browser, typically on the server. It’s built on Chrome’s V8 engine, and it’s great for building fast, scalable network applications, like APIs or web servers.
- npm: npm stands for Node Package Manager. It’s a tool that comes with Node.js and is used to Install packages (libraries, tools, frameworks), manage project dependencies and share your own packages with others.
- curl: curl is a command-line tool used to send requests to URLs. It lets you interact with APIs or download content from the internet right from your terminal.
- gnupg: GnuPG (or GPG, short for GNU Privacy Guard) is a tool for encrypting and signing data and communications. It uses public-key cryptography to encrypt, decrypt, sign, and verify files or messages.
- ca-certificates: A collection of trusted root certificates used to validate HTTPS connections.
We must check whether these packages are already installed on our system. To do so, we run the following commands in the terminal. If a command prints a version, the package is installed; if the shell reports that the command is not found, it is not, and we continue with the installation as described below.
# check the docker components
docker --version
docker-compose --version
# check the node and npm components
node --version
npm --version
# check if required dependencies are already installed
curl --version
gpg --version
In case some of these packages are already installed on your system, skip the corresponding installation steps below. Here we assume that none of the commands produced a version and go through the installation process step by step.
Step 1: Install node.js and npm from the standard Ubuntu resources.
Step 2:
- Prepare the system for secure downloads from Docker resources and
- install ca-certificates, curl and gnupg from standard Ubuntu resources
Step 3: Install Docker from Docker resources.
Step 4: Install docker-compose standalone from Docker resources.
Install Node.js and npm from Ubuntu Resources
Before we start with the installation we update and upgrade all packages. Node.js is available in Ubuntu’s repositories, so you can install it with the following commands.
sudo apt update
sudo apt upgrade
sudo apt install -y nodejs npm
Verify the installation:
node -v
npm -v
Prepare the System for secure Downloads from Docker
To prepare our system we ensure that ca-certificates, curl and gnupg are available on our system.
To install the Docker packages from the external (non-Ubuntu) Docker repository, apt must know where these resources are; otherwise apt would install the packages from the standard Ubuntu repositories, which is not what we want. Therefore we must add the Docker repository to apt. The complete process can be followed on the Docker Manual Pages.
When we add the Docker repository, the packages from that repository are digitally signed to ensure that they really come from Docker. gnupg is a tool that allows our system to check these signatures against a trusted Docker GPG key. Therefore gnupg must be available on our system.
To make sure that the GPG key is available we must download the key from the Docker site. For the download we use the curl tool. Therefore curl must be available on our system.
We access the Docker site via HTTPS. Here ca-certificates comes into play. ca-certificates is a collection of trusted root certificates used to validate HTTPS connections. When downloading the Docker GPG key or accessing the Docker apt repository via HTTPS, Ubuntu checks the site’s SSL certificate against the collection of trusted root certificates. Therefore ca-certificates must be available on our system.
To check if ca-certificates is already installed we run the following command:
dpkg -l | grep ca-certificates
Note: The dpkg command (Debian Package) is the low-level tool used to manage .deb packages on Debian-based systems like Ubuntu. It works at a lower level than apt, which is a higher-level package management tool that uses dpkg under the hood.
If it’s installed, you’ll see output like this:
ii ca-certificates 20230311ubuntu0.22.04.1 all Common CA..
In this case you do not need to install ca-certificates as described below, but it is highly recommended to update the collection of trusted root certificates before you continue.
sudo update-ca-certificates
We assume that we must install ca-certificates, curl and gnupg. First, we update the system package list and upgrade the installed packages in one step. Then we install ca-certificates, curl and gnupg.
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg
Add Docker GPG key:
To install the GPG keys from Docker we run the following command.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
Let’s break down the full set of commands step by step:
sudo install -m 0755 -d /etc/apt/keyrings ….
- sudo runs the command with superuser (root) privileges.
- install is used for copying files and setting permissions.
note: This command is sudo install, not sudo apt install. apt install installs software packages from Ubuntu’s package repositories (example: sudo apt install docker-ce); it’s used to install applications. Plain install is a Unix command (part of coreutils) used to create directories, copy files, and set permissions in a single step (example: sudo install -m 0755 -d /etc/apt/keyrings); it’s used to prepare the system, not to install software. So this command creates the /etc/apt/keyrings folder with secure permissions, which is later used to store GPG keyring files (such as Docker’s signing key).
- -m 0755 sets file permissions:
- 0755 means:
- 1st (0) is the leading digit for special permission bits (setuid, setgid, sticky); 0 means none are set.
- 2nd (7) is for the Owner (root) having 1 x read (4) + 1 x write (2) + 1 x execute (1) = 7 permissions.
- 3rd (5) is for the Group (root) having 1 x read (4) + 0 x write (2) + 1 x execute (1) = 5 permissions (no write).
- 4th (5) is for the Others having 1 x read (4) + 0 x write (2) + 1 x execute (1) = 5 permissions (no write).
- -d tells install to create a directory
- /etc/apt/keyrings is the target directory where the Docker GPG key will be stored.
What it does:
- Ensures that the /etc/apt/keyrings directory exists.
- Sets the correct permissions (readable but not writable by non-root users).
- This is a security best practice to keep GPG keys safe from tampering.
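If you want to confirm the result, a quick check with stat (part of GNU coreutils) shows mode and ownership; the expected output is given as a comment.
# print permissions, owner:group and path of the keyrings directory
stat -c '%a %U:%G %n' /etc/apt/keyrings
# expected: 755 root:root /etc/apt/keyrings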
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null …
- curl a command-line tool to fetch files from a URL (we have installed it before)
- -fsSL flags to control curl behavior:
- -f (fail silently on server http errors like 404 – site or resource not found).
- -s (silent mode, no progress output).
- -S (shows error messages if -s is used).
- -L (follows redirects if the URL points elsewhere).
- https://download.docker.com/linux/ubuntu/gpg the URL for Docker’s GPG key file (the file name on the docker site is gpg).
- | (pipe) passes the downloaded data (the gpg file) to another command. In this case the data will be passed to the following sudo command.
- sudo tee /etc/apt/keyrings/docker.asc writes the key into the previously created directory as /etc/apt/keyrings/docker.asc:
- tee writes the output to a file (here it is docker.asc) while also displaying it in the terminal.
- sudo ensures that the file is written with root permissions.
- > /dev/null redirects standard output to /dev/null to suppress unnecessary output. Without it, tee would display the key in the terminal while also writing it to the file.
note: sudo tee… runs with root permissions, so the file can be written even to protected directories such as /etc/apt/keyrings/ (we have set the permissions to 0755, see above). You can also run the curl command with root permissions (sudo curl …) and write the output directly to a file with the -o option: sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc. This is suggested on the Docker Manual Page, but both ways work.
What it does:
- Downloads Docker’s official GPG key.
- Saves it securely in /etc/apt/keyrings/docker.asc.
- Ensures the key isn’t printed to the terminal.
sudo chmod a+r /etc/apt/keyrings/docker.asc
- sudo runs the command (in this case the chmod command) as root.
- chmod modifies file permissions.
- a+r grants read (r) permission to all users (a).
- /etc/apt/keyrings/docker.asc the file whose permissions are being modified.
What it does:
- Ensures that all users (including apt processes) can read the GPG key.
- This is necessary so that apt can verify Docker package signatures when installing updates.
Previously, GPG files were stored in /etc/apt/trusted.gpg. This has changed.
Why Is This Necessary?
- Security:
- Storing GPG keys in /etc/apt/keyrings/ instead of /etc/apt/trusted.gpg is a best practice.
- Prevents malicious modifications to package signatures.
- Package Verification:
- The GPG key allows Ubuntu’s package manager (apt) to verify that Docker packages are genuine and not tampered with.
- Future-proofing:
- Newer versions of Ubuntu prefer keys in /etc/apt/keyrings/ instead of the older /etc/apt/trusted.gpg.
Final Summary:
Command | Purpose |
---|---|
sudo install -m 0755 -d /etc/apt/keyrings | Creates a secure directory for storing package keys. |
curl -fsSL … | sudo tee /etc/apt/keyrings/docker.asc > /dev/null | Downloads and saves Docker’s GPG key. |
sudo chmod a+r /etc/apt/keyrings/docker.asc | Ensures the key can be read by apt. |
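To double-check that the key was saved correctly, you can inspect it without importing it (gpg --show-keys works directly on the armored key file; the key IDs in the output come from Docker and will vary over time):
# show the key(s) contained in the downloaded file
gpg --show-keys /etc/apt/keyrings/docker.asc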
Add the Docker repository:
To install the docker resources to the apt sources list we run the following command.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Let’s break down the command step by step:
echo "deb …"
The echo command outputs the text between the quotes. This is the APT repository entry for Docker. Let’s analyze its components:
- deb → Indicates that this is a Debian-based software repository.
- arch=$(dpkg --print-architecture):
- dpkg --print-architecture dynamically retrieves the system architecture from your system (e.g., amd64, arm64).
- This ensures that the correct package version for your system’s architecture is used.
- signed-by=/etc/apt/keyrings/docker.asc specifies the location of the docker GPG key docker.asc (we installed it before), which is used to verify the authenticity of packages downloaded from the repository.
- https://download.docker.com/linux/ubuntu the URL of Docker’s official repository.
- $(lsb_release -cs) dynamically fetches the codename of the Ubuntu version (e.g., jammy for Ubuntu 22.04).
- This ensures that the correct repository for the current Ubuntu version is used.
- stable specifies that we are using the stable release channel of Docker.
| sudo tee /etc/apt/sources.list.d/docker.list
- The | (pipe) takes the output of echo and passes it to the tee command.
- sudo tee /etc/apt/sources.list.d/docker.list does the following:
- tee writes the output to a file (/etc/apt/sources.list.d/docker.list).
- sudo is required because writing to /etc/apt/sources.list.d/ requires root privileges.
> /dev/null
- The > /dev/null part discards the standard output of the tee command.
- This prevents unnecessary output from being displayed in the terminal.
- Without this, tee would both write to the file and display the text on the screen.
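You can verify the result by printing the new sources file; on Ubuntu 22.04 on amd64 it should contain a single line roughly like the comment below (architecture and codename vary with your system):
cat /etc/apt/sources.list.d/docker.list
# deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable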
Install Docker from Docker Resources
Now, update the package list again and install Docker.
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin
This command installs the following Docker key components using the apt package manager on Ubuntu (the -y option automatically answers "yes" to all prompts, so the install runs without asking for confirmation).
docker-ce
- Docker Community Edition
- This is the core Docker Engine, the daemon that runs containers.
- Installs the Docker server that manages images, containers, volumes, and networks.
docker-ce-cli
- Docker Command-Line Interface
- This is the docker command you use in your terminal (e.g., docker run, docker ps, etc.).
- Separates the CLI from the engine so they can be updated independently.
containerd.io
- Container runtime
- A lightweight, powerful runtime for containers, used internally by Docker.
- Handles the actual low-level execution of containers.
docker-buildx-plugin
- BuildKit-powered Docker build plugin
- Adds docker buildx functionality for advanced builds, multi-arch images, and caching strategies.
- Useful when building complex container images.
Note: In some documentation the sudo apt install… command also includes the docker-compose-plugin. The docker-compose-plugin is not required here because we are using the docker-compose standalone package (see below). The docker-compose-plugin is integrated into the Docker CLI and can replace the docker-compose standalone binary. But we use the standalone version because of the lightweight minimal install, the backward compatibility, and the easy and independent manual version control.
It is highly recommended to omit the docker-compose-plugin from your apt install command if you plan to install the standalone version of Docker Compose binary manually as we will do later. If you have both versions installed this can cause confusion, especially if scripts assume one or the other. Also, Docker might prioritize the plugin version in newer setups which might cause conflicts in our preferred standalone Docker Compose setup. The following table illustrates the problem because the command styles differ only very little.
Type | Command Style | Notes |
---|---|---|
Plugin version | docker compose | Comes as docker-compose-plugin, tied to Docker CLI |
Standalone version | docker-compose | Installed separately, as an independent binary |
In case the docker-compose-plugin has been installed on your system you can remove it with the following command:
sudo apt remove docker-compose-plugin
This removes the plugin version that integrates into the docker compose command. Later, once we have installed the standalone version, we use the command with a dash (docker-compose) instead of a space (docker compose).
Verify that Docker is installed correctly:
sudo docker --version
Enable and start the Docker service:
sudo systemctl enable docker
sudo systemctl start docker
Test Docker by running the hello-world image.
sudo docker run hello-world
This command is a quick test to verify that Docker is installed and working correctly.
sudo
- Runs the command with superuser (root) privileges.
- Required unless your user is in the docker group.
- Docker needs elevated permissions to communicate with the Docker daemon (which runs as root).
docker
- The main Docker CLI (Command-Line Interface) tool.
- Used to interact with Docker Engine to manage containers, images, networks, volumes, etc.
run
- Tells Docker to create a new container and start it based on the hello-world image you specify.
It does the following:
- Pulls the image (if it’s not already downloaded).
- Creates a new container from that image.
- Starts and runs the container.
- Outputs the result and then exits (for short-lived containers like hello-world).
hello-world
- This is the name of the Docker image.
- It’s an official image maintained by Docker, specifically designed to test Docker installations.
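As mentioned above, sudo is needed unless your user is in the docker group. If you prefer to run docker without sudo, you can optionally add your user to that group (note that members of the docker group effectively have root-level access to the host):
sudo usermod -aG docker $USER
# log out and back in (or run: newgrp docker), then test without sudo:
docker run hello-world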
Install standalone docker-compose from Docker Resources
Installing the standalone docker-compose is useful when you:
- Need compatibility with legacy tools or scripts
- Want to control the exact version
- Prefer a lightweight, portable binary
The following command downloads the latest standalone docker-compose binary and saves it to a system-wide location.
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Let’s break down the command step by step:
sudo
- Runs the command with root privileges.
- Required because /usr/local/bin is a protected directory that only root can write to.
curl
- A command-line tool used to download files from the internet.
-L
- Tells curl to follow redirects.
- GitHub uses redirects for release URLs, so this flag ensures the final binary is actually downloaded.
"https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)"
URL with Substitution. This ensures the correct binary is downloaded for your system. This is the dynamic download URL for the latest docker-compose release.
- $(uname -s) returns the operating system name (e.g., Linux, Darwin).
- $(uname -m) returns the architecture (e.g., x86_64, arm64).
Example Output: https://github.com/docker/compose/releases/latest/download/docker-compose-Linux-x86_64
-o /usr/local/bin/docker-compose
- Tells curl to write (-o option; output) the downloaded file to /usr/local/bin/docker-compose
- This is a standard location for user-installed binaries that are globally available in the system PATH.
After you run the command above, you still need to make the binary executable:
sudo chmod +x /usr/local/bin/docker-compose
And then check the version to confirm it worked:
docker-compose --version
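As noted above, one reason to use the standalone binary is exact version control. To pin a specific release instead of latest, put the version tag into the URL; the tag below is only an example — check the releases page for current versions:
# example: pin a specific docker-compose release (adjust the tag as needed)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose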
Final Verification
Check if everything is installed correctly:
docker --version
docker-compose --version
node -v
npm -v
Host-Docker-Setup for nodejs app behind nginx
The code of the nodejs app is explained in detail in the article Bootstrap Website running as nodejs app.
We have a simple website funtrails, built as a one-page index.html with two sections: one showing pictures of a Paterlini bike and one showing pictures of a Gianni Motta bike. Each section contains an image gallery and text. The images are stored in an images directory.
.
├── images
└── index.html
Now we want to make this funtrails website run as a nodejs app behind an nginx reverse proxy. Both the funtrails nodejs app and nginx should run in docker containers composed to work together. Therefore we create the following file structure on the Host machine:
node
├── funtrails
│   └── views
│       ├── images
│       └── index.html
└── nginx
We copy all our web content into the views directory under funtrails. The nodejs app is built in the main file app.js. All dependencies for the nodejs app are defined in the file package.json (a minimal sketch of app.js follows after the structure below).
node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
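The full app code is explained in the referenced article. As a rough orientation only, a minimal app.js serving the static content from views could look like this, assuming Express is declared as a dependency in package.json (the real app may differ):
// app.js - minimal sketch, assuming Express as the web framework
const express = require('express');
const path = require('path');

const app = express();
const port = 8080;

// serve index.html and the images directory from ./views
app.use(express.static(path.join(__dirname, 'views')));

app.listen(port, () => {
  console.log(`nodejs funtrails demo app listening on port ${port}!`);
});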
In the Terminal go into the node/funtrails directory. Install the dependencies.
npm install
Then we have the following structure.
funtrails
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
Go to node/funtrails. Run a test with the following command.
node app.js
nodejs funtrails demo app listening on port 8080!
Switch to your browser and hit http://localhost:8080 to see if all is working as expected. With Ctrl+C in the terminal you can stop the app.
The nginx server is configured through an nginx.conf file, which we create in the nginx directory.
Note: This file nginx.conf is only for testing and will be changed when we add the SSL/TLS certificates. In this configuration below the nginx server will listen on port 80 and pass requests received from port 80 to the nodejs app listening on port 8080. Processing requests from port 80 is not state of the art as these connections are not encrypted. In production we need a port 443 connection with SSL/TLS for encrypted connections.
#nginx.conf
events {}

http {
    # Service funtrails from docker-compose.yml
    upstream node_app {
        server funtrails:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
events {}
- This is a required block in NGINX configuration, even if empty.
- It handles connection-related events (like concurrent connections), but you don’t need to configure it unless you have advanced use cases.
http {
- Starts the HTTP configuration block — this is where you define web servers, upstreams, headers, etc.
upstream
- Defines a group of backend servers (can be one or many).
- node_app is just a name.
- Inside: server funtrails:8080; means:
- Forward traffic to the container with hostname funtrails
- Use port 8080 (that’s where your nodejs app listens)
- funtrails must match the docker-compose service name (it is declared in your docker-compose.yml, which is explained below).
- This lets you use proxy_pass http://node_app later, instead of hardcoding an IP or port.
server
- Defines a virtual server (a website or domain).
- listen 80; tells NGINX to listen for HTTP (port 80) traffic.
location
- Defines a rule for requests to / (the root URL of your site).
- You could add more location blocks for /api, /images, etc. if needed.
- Inside:
- proxy_pass:
- proxy_pass http://node_app; tells NGINX to forward requests to the backend defined in upstream node_app
- So: if you go to http://yourdomain.com/ NGINX proxies that to http://funtrails:8080
- proxy_set_header (see table)
Header | Meaning |
---|---|
Host | Preserves the original domain name from the client |
X-Real-IP | The client’s real IP address |
X-Forwarded-For | A list of all proxies the request passed through |
X-Forwarded-Proto | Tells backend whether the request was via HTTP or HTTPS |
The $variables in nginx.conf are built-in variables that NGINX provides automatically. They are dynamically set based on the incoming HTTP request. So these variables come from the NGINX core HTTP module and you don’t need to define them or import anything. They are always available in the config.
Here’s what each one is and where it comes from:
$host
- The value of the Host header in the original HTTP request.
- Example: If the user visits http://example.com, then $host is example.com.
- Use case: Tells the backend app what domain the client used — useful for apps serving multiple domains.
$remote_addr
- The IP address of the client making the request.
- Example: If someone from IP 203.0.113.45 visits your site, this variable is set to 203.0.113.45.
- Use case: Useful for logging, rate limiting, or geolocation in the backend app.
$proxy_add_x_forwarded_for
- A composite header that appends the client’s IP to the existing X-Forwarded-For header.
- Use case: Maintains a full list of proxy hops (useful if your request goes through multiple reverse proxies).
- If X-Forwarded-For is already set (by another proxy), it appends $remote_addr to it; otherwise, it sets it to $remote_addr.
$scheme
- The protocol used by the client to connect to NGINX — either http or https.
- Example: If the user visits https://example.com, then $scheme is https.
- Use case: Lets your backend know whether the original request was secure or not.
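If you want to see these headers arrive at the backend, you could temporarily add a small logging middleware to the node app (purely illustrative and again assuming Express; remove it after testing):
// hypothetical debug middleware: log the proxy headers of each request
app.use((req, res, next) => {
  console.log(req.headers['host'], req.headers['x-real-ip'],
    req.headers['x-forwarded-for'], req.headers['x-forwarded-proto']);
  next();
});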
Then we have the following structure.
node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
Create Docker Image and Container for nodejs app
First we create the Docker image with docker build. Then we run the container from the image with docker run to test if everything is working as expected. If everything goes well we can go ahead with the nginx configuration and then compose it all together with docker-compose.
The dockerization process of the nodejs app in node/funtrails directory is controlled by the Dockerfile which will be created in node/funtrails. The dockerization process has the following steps.
- Image creation
- Container creation from the image
The Container can then be started, stopped and removed using terminal commands.
Go into node/funtrails.
First, get an overview and check the Docker status of the system.
List all images. There are no images on your system yet.
sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
List all Containers. As expected no Containers on the system.
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Show an overview of Docker images and containers on your system. As expected, no containers and no images on the system.
sudo docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 0 0 0B 0B
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 16 0 47.39MB 47.39MB
Create a Dockerfile in node/funtrails/Dockerfile. This file is required to build the image and run the Container.
#Dockerfile
#Image build
#Use node 12.22.9 as the base image for the Container
FROM node:12.22.9
#Set the work directory for the Container
WORKDIR /home/node/app
#Change owner and group of the Container work directory to node
RUN chown node:node /home/node/app
#Run the Container as user node
USER node
#Copy all files from the Host directory into the Container work directory
COPY --chown=node:node . .
#(After COPY) Install the dependencies into the image
RUN npm install
#Container start
#Document that the app listens on port 8080
EXPOSE 8080
#Run this command when the Container starts
CMD [ "node", "app.js" ]
Create a .dockerignore file in node/funtrails/.dockerignore. The hidden .dockerignore file excludes files on the Host from the build context, so they are not copied into the image by COPY . .
#.dockerignore
node_modules
Note: .dockerignore applies to the whole build context: excluded files (here node_modules) are not sent to the Docker daemon at all, so they are skipped by COPY . ., and an explicit COPY of an excluded file would even fail because the file is not in the context. We exclude node_modules since RUN npm install recreates it inside the image anyway.
We have the following structure on the Host machine.
funtrails
├── Dockerfile
├── .dockerignore
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
Stay in node/funtrails.
Build the Docker image from the Dockerfile with the image name node-demo. The dot (.) at the end sets the current directory on the Host machine as the build context; this is where docker looks for the Dockerfile to build the image.
sudo docker build -t node-demo .
List all Docker images. One image, just created.
sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
node-demo latest c353353f045e 22 seconds ago 944MB
Run the Container from the image with the name node-demo and give the Container the name funtrails-solo-demo.
sudo docker run -d -p 8080:8080 --name funtrails-solo-demo node-demo
List all Docker Containers with the option -a. 1 Container with the name funtrails-solo-demo running from the image node-demo.
sudo docker ps -a
Access the running app on port 8080.
sudo curl http://localhost:8080
If everything went well you get feedback in the terminal showing the HTML code. In that case the test was successful.
Stop the running Docker Container with the name funtrails-solo-demo.
sudo docker stop funtrails-solo-demo
List all Containers with the option -a. 1 Container from the image node-demo with the name funtrails-solo-demo is EXITED.
sudo docker ps -a
List all images. Still 1 image available.
sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
node-demo latest c353353f045e 36 hours ago 944MB
Show an overview of Docker images and containers on the system: 1 active image and 1 inactive container. The status of the container is EXITED, as we have seen above.
sudo docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 1 1 944MB 0B (0%)
Containers 1 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 18 0 47.39MB 47.39MB
To clean up your system use the following commands.
Target | Command |
---|---|
Delete exited containers | sudo docker container prune |
Delete unused images | sudo docker image prune |
Delete unused volumes | sudo docker volume prune |
Complete housekeeping (attention!) | sudo docker system prune -a |
The full clean up (be careful).
sudo docker system prune -a
sudo docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 0 0 0B 0B
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
Configure docker-compose
Go back to the node directory and create a docker-compose.yml file there.
Then we have the following structure.
node
├── docker-compose.yml
├── funtrails
│   ├── Dockerfile
│   ├── .dockerignore
│   ├── app.js
│   ├── node_modules
│   ├── package.json
│   ├── package-lock.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
docker-compose is a tool that helps you define and run multi-container Docker applications using a YAML file. Instead of running multiple docker run commands, you describe everything in one file and start it all with the command docker-compose up.
Create docker-compose.yml with the following content.
#docker-compose.yml
services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
services
This section defines the containers that make up your app.
- funtrails
- build: ./funtrails builds the image from the Dockerfile inside the ./funtrails directory.
- container_name: funtrails names the container funtrails instead of a random name.
- networks: funtrails-network connects the container to a custom user-defined network.
- nginx
- image: nginx:latest uses the official latest NGINX image.
- container_name: nginx-proxy names the container nginx-proxy.
- ports: "80:80" maps Host port 80 to Container port 80.
- volumes: mounts your local nginx.conf into the container, read-only (:ro).
- depends_on: funtrails ensures funtrails is started before nginx.
- networks: funtrails-network puts both services in the same network, so they can communicate by name.
networks
- Creates a custom bridge network named funtrails-network.
- Ensures containers can resolve each other by name (funtrails, nginx).
Note: We are using the official NGINX image directly (image: nginx:latest). This image is prebuilt and includes everything NGINX needs to run.
We don’t need to write a custom Dockerfile because we don’t want to:
- Add extra modules
- Customize the image beyond just the config
- Install additional tools
- Include SSL certs directly, etc.
Instead, we simply mount our own nginx.conf into the container using a volume. This tells Docker: "Use the official NGINX image, but replace its config file with mine." We would use a Dockerfile in the nginx directory if we needed to build a custom NGINX image, for example to copy SSL certs directly into the image.
Example:
FROM nginx:latest
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./certs /etc/nginx/certs
But for most use cases like reverse proxying a nodejs app, just mounting your own config file is perfectly sufficient and simpler.
Note: We integrate SSL in the next chapter using free Let’s Encrypt certificates.
Integrate SSL certificates – free Let’s Encrypt
To integrate SSL we need to do the following steps:
- Prepare your domain
- Install certbot
- Create Let’s Encrypt SSL certificates
- Adapt your node/docker-compose.yml
- Adapt your node/nginx/nginx.conf
- Create a cron job to renew SSL certificates
Prepare the domain
You must own a domain like example.com and you must have access to your DNS servers to adapt the A record. In my example I create a subdomain funtrails.example.com and create an A record on my DNS that points to the server’s IP address.
Install certbot
To create our SSL certificates we use a tool called certbot. We install certbot with apt on our Linux machine.
sudo apt update
sudo apt install certbot
Create Let’s Encrypt SSL certificates
We create the SSL certificates with certbot.
sudo certbot certonly --standalone -d funtrails.example.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)
(Enter 'c' to cancel): <your-email>@funtrails.example.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.5-February-24-2025.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for funtrails.example.com
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/funtrails.example.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/funtrails.example.com/privkey.pem
This certificate expires on 2025-07-25.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
* Donating to EFF: https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
To use SSL you need
- a server certificate (e.g. certificate.crt)
- a private key (e.g. private.key) and
- the CA certificate (e.g. ca.crt).
File | Description | Comment |
---|---|---|
private.key | Private secret key – keep this key strictly secret! | Only your server knows this key |
certificate.crt | Your server certificate (proves your identity) | Issued by the CA (Let’s Encrypt) |
ca.crt / chain.crt | The certificate chain up to the root CA | So that clients trust your certificate |
certbot creates these files in the following directory on your Host server.
/etc/letsencrypt/live/funtrails.example.com
sudo ls -l /etc/letsencrypt/live/funtrails.example.com
cert.pem -> ../../archive/funtrails.example.com/cert1.pem
chain.pem -> ../../archive/funtrails.example.com/chain1.pem
fullchain.pem -> ../../archive/funtrails.example.com/fullchain1.pem
privkey.pem -> ../../archive/funtrails.example.com/privkey1.pem
The translation to the standard names is as follows.
File | Description |
---|---|
privkey.pem | Your private key (= private.key) |
cert.pem | Your server certificate (= certificate.crt) |
chain.pem | The CA certificates (= ca.crt) |
fullchain.pem | Server certificate + CA chain together |
Adapt node/docker-compose.yml
The docker-compose.yml will be adapted as follows.
services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
We create a bridge network with the name funtrails-network and both services run in this network. This is important so that the services can reach each other by name.
The funtrails service is rebuilt from the Dockerfile in ./funtrails. For the nginx service, the latest nginx image is pulled from the Docker resources. For the nginx container, Host ports 80 and 443 are mapped to Container ports 80 and 443. When the Container is started, we mount the Host file ./nginx/nginx.conf and the SSL certificates under /etc/letsencrypt into the Container. Both are mounted read-only!
...
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
...
With the depends_on directive we declare that the funtrails service must be started first, and then nginx.
Adapt node/nginx/nginx.conf
The file will be adapted as follows.
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name funtrails.example.com;

        # Redirect HTTP -> HTTPS
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name funtrails.example.com;

        ssl_certificate /etc/letsencrypt/live/funtrails.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/funtrails.example.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location / {
            proxy_pass http://funtrails:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
The events{} block is a required block in NGINX configuration, even if empty. It handles connection-related settings (like concurrent connections), but you don’t need to configure it unless you have advanced use cases. Here I configured up to 1024 concurrent connections per worker process.
Within the http block we have two virtual server blocks. The first server block defines that server funtrails.example.com listens on port 80 (HTTP), but all requests to this port are immediately redirected to port 443 (HTTPS). The second server block defines that server funtrails.example.com also listens on port 443 (HTTPS), followed by the location of the SSL certificate and SSL key on our local Host and the protocol definition.
The location block defines a rule for requests to / (the root URL of your site). You could add more location blocks, e.g. for /api, /images, etc. if needed. In this config we skip the upstream block and write proxy_pass directly. proxy_pass tells NGINX to forward requests to port 8080 of the backend service defined in docker-compose.yml. This backend service is addressed by its name funtrails (the service and container_name in docker-compose.yml).
...
services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network
...
Docker Compose creates an internal Docker network funtrails-network, and all services can reach each other by their service name(s) as hostname(s). So nginx can resolve funtrails because it’s part of the same Docker network (no need for a manual upstream block).
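You can verify this name resolution once the stack is running: the official nginx image is Debian-based, so getent is available inside the container (ping usually is not):
# resolve the funtrails service name from inside the nginx container
sudo docker exec nginx-proxy getent hosts funtrails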
The $variables used here ($host, $remote_addr, $proxy_add_x_forwarded_for, $scheme) are the same NGINX built-ins explained for the test configuration above; they come from NGINX’s core HTTP module and are always available in the config.
Create a cron job to renew Let’s Encrypt SSL certificates
Let’s Encrypt SSL certificates must be renewed after 90 days. certbot can renew your certificates. To automate the renewal you can create a cron job on your Host machine.
Note: cron runs jobs with a minimal environment, so it may not find docker-compose or certbot via PATH. Therefore, it’s safer to use full paths in the crontab. You can check the relevant paths as follows:
which docker-compose
/usr/bin/docker-compose
which certbot
/usr/bin/certbot
Then create a cron job in the crontab of the user root (use sudo):
sudo crontab -e
0 3 * * * /usr/bin/certbot renew --quiet && /usr/bin/docker-compose -f /path/to/node/docker-compose.yml restart nginx
With sudo crontab -e you create a crontab for the user root. All commands within the root crontab will be executed with root privileges.
certbot renew checks all certificates for expiration dates and automatically renews them.
docker-compose restart nginx ensures that nginx is reloaded so that it can apply the new certificates. Otherwise, nginx would still be using the old certificates even though new ones are available. Because cron does not run in your project directory, the -f flag points docker-compose at your docker-compose.yml (adjust the path to your setup). With this command you call docker-compose restart <service-name>; here you specify the service name from docker-compose.yml, not the container name.
Note: If you ran crontab -e (without sudo) you would edit your own user crontab. That crontab runs under your user, not as root. When certbot renew runs, it must write into the directories under /etc/letsencrypt/, which are owned by root, so the job fails under a normal user. One might then think the commands in a normal user’s crontab should be executed with sudo. But if a cron job of a normal user (e.g. patrick) uses sudo in the command, sudo will attempt to prompt for a password, and there is no terminal in cron where you could enter one. The command will simply fail (an error in the log, nothing happens). Therefore it is essential to edit the crontab of the user root with sudo crontab -e.
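Note also: certificates created with --standalone renew with the standalone authenticator, which needs port 80 to be free; since nginx publishes port 80, the renewal can fail while the proxy is running. A common remedy (an assumption here, not part of the setup above) is to stop and restart nginx around the renewal via certbot’s --pre-hook and --post-hook options. Either way, it is worth testing the renewal once with certbot’s dry-run mode, which issues no real certificates:
sudo certbot renew --dry-run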
Finally you can check your own crontab or root’s crontab as follows.
crontab -l
no crontab for patrick
sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 3 * * * /usr/bin/certbot renew --quiet && /usr/bin/docker-compose -f /path/to/node/docker-compose.yml restart nginx
You can check the logs for the renewal process using the following command.
sudo cat /var/log/letsencrypt/letsencrypt.log
Start the Containers with docker-compose
Navigate to the directory with docker-compose.yml. Then use the following commands.
docker-compose build
docker-compose up -d
The command docker-compose build reads the Dockerfile for each service defined with a build: entry in docker-compose.yml and builds the Docker images accordingly. The command docker-compose up -d runs the container(s) and the network: it starts all services defined in docker-compose.yml and links them via the defined docker network. The -d flag runs the containers in the background (detached mode).
Then you can check the status using the following commands.
docker-compose ps
docker ps
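Once both containers are up, you can verify the proxy end to end from the terminal (replace the domain with your own):
# HTTP should answer with a 301 redirect to HTTPS
curl -I http://funtrails.example.com
# HTTPS should return the app's index page
curl -I https://funtrails.example.com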
Here is an overview of the most important commands.
Command | Purpose |
---|---|
docker-compose build | Build all images from Dockerfiles |
docker-compose up -d | Start containers in the background |
docker-compose ps | See status of containers |
docker-compose down | Stop and remove all containers |
docker-compose logs -f | Follow logs of all services |
How to manage changes made to the application code
When we make changes to the app code, e.g. in node/funtrails/app.js or in node/funtrails/Dockerfile, we need to rebuild the image for the funtrails service defined in node/docker-compose.yml. In such a change scenario it is not necessary to stop the containers with docker-compose down before rebuilding the image with docker-compose build.
You can rebuild and restart only the funtrails service with the following commands.
docker-compose build funtrails
docker-compose up -d funtrails
This will:
- Rebuild the funtrails image
- Stop the old funtrails container (if running)
- Start a new container using the updated image
- Without affecting other services like nginx
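Alternatively, the rebuild and restart can be combined into a single command with the --build flag of docker-compose up:
docker-compose up -d --build funtrails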