<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Docker &#8211; Digitaldocblog</title>
	<atom:link href="https://digitaldocblog.com/tag/docker/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitaldocblog.com</link>
	<description>Various digital documentation</description>
	<lastBuildDate>Thu, 01 Jan 2026 08:02:37 +0000</lastBuildDate>
	<language>de</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://digitaldocblog.com/wp-content/uploads/2022/08/cropped-website-icon-star-500-x-452-transparent-32x32.png</url>
	<title>Docker &#8211; Digitaldocblog</title>
	<link>https://digitaldocblog.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Run Vaultwarden and Caddy on your Linux Server with docker-compose</title>
		<link>https://digitaldocblog.com/webserver/run-vaultwarden-and-caddy-on-your-linux-server-with-docker-compose-2/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 05 Sep 2025 05:45:33 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=252</guid>

					<description><![CDATA[Vaultwarden is a very light, easy to use and very well documented alternative implementation of the Bitwarden Client API. It is perfect if you struggle with the complex Bitwarden installation&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/dani-garcia/vaultwarden" title="Vaultwarden on GitHub">Vaultwarden</a> is a lightweight, easy-to-use, and well-documented alternative implementation of the <a href="https://bitwarden.com/help/self-host-bitwarden/" title="Bitwarden Self-Hosted">Bitwarden Client API</a>. It is a good fit if you struggle with the complex Bitwarden installation but want to self-host your own password management server and connect the Bitwarden clients installed on your computer or mobile device. This documentation describes the steps to configure and run Vaultwarden on Ubuntu Linux 22.04 using docker-compose services. In parallel you should read the <a href="https://github.com/dani-garcia/vaultwarden/wiki" title="Vaultwarden Wiki">Vaultwarden Wiki</a> to understand the complete background.</p>




<h3 class="wp-block-heading">Prepare your Server</h3>



<p>Before we start we need to prepare the server. In this step we create the environment in which the Vaultwarden instance will be managed. Log in to your server with your standard user (not root).</p>




<h4 class="wp-block-heading">Basic requirements</h4>



<p>Log in to your server with SSH and authenticate with keys; never use plain password authentication. Create a private/public SSH key pair on your host machine and copy only the public key to the remote server, keeping the private key safe on your host machine. Then configure the SSH daemon on the remote server: disable password authentication and root login. All of this is explained very well on <a href="https://linuxize.com/post/how-to-set-up-ssh-keys-on-ubuntu-1804/" title="Linuxize.com">Linuxize.com</a>.</p>
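<p>The steps above can be sketched as follows. This is a minimal sketch: the key file name, the comment, and the user/host names are placeholders, and the sed lines assume the stock Ubuntu /etc/ssh/sshd_config.</p>

<pre class="wp-block-code"><code>#on your host machine: generate an Ed25519 key pair
#(use a passphrase in practice; -N "" is only for illustration)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519 -C "me@laptop"

#copy only the PUBLIC key to the remote server
ssh-copy-id -i ~/.ssh/id_ed25519.pub standarduser@&lt;yourServer&gt;

#on the remote server: disable password and root login, then reload sshd
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl reload ssh
</code></pre>

<p>Test the key login in a second terminal before you close your current session, so a configuration mistake cannot lock you out.</p>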




<p>Make sure SSL certificates for your server are installed. I use free <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a> certificates and <em>certbot</em> to install and renew them on the system. A very detailed description can be found on my <a href="https://digitaldocblog.com/" title="Digitaldocblog - Patrick Rottländer nurdy Ressources">Digitaldocblog</a> in the article <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Lets Encrypt and certbot on a Linux Server">SSL Certificates with Lets Encrypt and certbot on a Linux Server</a>.</p>




<p>Make sure docker and docker-compose are installed on your system. A very detailed description can be found on my <a href="https://digitaldocblog.com/" title="Digitaldocblog - Patrick Rottländer nurdy Ressources">Digitaldocblog</a> in the article <a href="https://digitaldocblog.com/webserver/containerize-a-nodejs-app-and-nginx-with-docker-on-ubuntu-2204/" title="Containerize a nodejs app with nginx">Containerize a nodejs app with nginx</a>. You should read the following chapters:</p>




<ul class="wp-block-list">
	<li>Prepare the System for secure Downloads from Docker</li>
	<li>Install Docker from Docker Resources</li>
	<li>Install standalone docker-compose from Docker Resources<br></li>
</ul>



<p>Make sure your server is running behind a firewall. In my case I run a virtual server and am responsible for its security myself, so I install and configure a firewall on the system.</p>




<p>Before you configure the firewall, make sure the SSH service is running and find out which port it listens on; otherwise you may lock yourself out. First check that the SSH service is running with <em>systemctl</em>, then determine the port with <em>netstat</em>. If <em>netstat</em> is not installed on your system, you can install it with <em>apt</em>.</p>




<pre class="wp-block-code"><code>#control ssh status
sudo systemctl status ssh

#check ssh port
sudo netstat -tulnp | grep ssh

#check if net-tools (include netstat) are installed
which netstat

#install net-tools only in case not installed
sudo apt install net-tools
</code></pre>



<p>I install <em>ufw</em> (Uncomplicated Firewall) on my server and configure it to expose only SSH and HTTPS to the outside world.</p>




<pre class="wp-block-code"><code># check if ufw is installed
ufw version
which ufw

#install ufw if not installed
sudo apt install ufw

#open SSH and HTTPS
sudo ufw allow OpenSSH
sudo ufw allow 443
sudo ufw allow 80/tcp

#Default rules
sudo ufw default deny incoming
sudo ufw default allow outgoing

#Start the firewall
sudo ufw enable

#Check the firewall status
sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
80/tcp                     ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)             
80/tcp (v6)                ALLOW IN    Anywhere (v6)

</code></pre>



<h4 class="wp-block-heading">Port 443 must run TCP and UDP</h4>



<p>In the Docker environment, we will later configure Caddy as a reverse proxy for the vaultwarden service. Caddy requires both TCP and UDP on port 443. This is due to a modern protocol called HTTP/3 (QUIC).</p>




<p>Traditionally, the web (HTTP/1.1 and HTTP/2) ran exclusively over the TCP protocol. TCP is very reliable, but sometimes a bit slow when establishing a connection. Google and others subsequently developed QUIC, on which the new standard HTTP/3 is based. QUIC uses UDP instead of TCP.</p>




<p>HTTP/3 QUIC is significantly faster when establishing a connection (handshake). It handles packet loss better, which is especially important for mobile connections on smartphones, for example, when using the Bitwarden app.</p>




<p>Caddy is a very modern reverse proxy that has HTTP/3 enabled by default. The &#8220;normal&#8221; HTTPS connection (HTTP/1.1 or HTTP/2) is established over TCP port 443. Caddy offers clients (browsers or the Bitwarden app) the option to switch to HTTP/3 via UDP port 443.</p>




<p>With vaultwarden, users often access their vaults via smartphones using LTE or Wi-Fi connections. When UDP port 443 is open, the app uses HTTP/3. This results in a more stable connection and improved vault synchronization due to reduced latency.</p>
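<p>Once the stack described below is running, you can check from any machine whether HTTP/3 is actually offered. Caddy advertises it via the Alt-Svc response header; the exact max-age value may differ on your setup.</p>

<pre class="wp-block-code"><code>#the Alt-Svc header offers h3 (HTTP/3 over UDP port 443) to clients
curl -sI https://&lt;yourDomain&gt; | grep -i '^alt-svc'

#typically prints something like
#alt-svc: h3=":443"; ma=2592000
</code></pre>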




<h4 class="wp-block-heading">Why Port 80 must be open</h4>



<p>When you follow the instructions in the article <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Lets Encrypt and certbot on a Linux Server">SSL Certificates with Lets Encrypt and certbot on a Linux Server</a>, you use <em>certbot</em> and install your <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a> certificates on your local server with the following <em>certbot</em> command:</p>




<pre class="wp-block-code"><code>sudo certbot certonly --standalone -d &lt;yourDomain&gt;
</code></pre>



<p>Here you use the <em>standalone</em> method. Whenever you renew your certificates, <em>certbot</em> starts a temporary web server to prove ownership of your domain to <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a>.</p>




<p>When renewing your certificates <em>certbot</em> attempts to start a temporary web server on port 80. Therefore, port 80 must be open on your system via your firewall rules.</p>




<p>Important: If another web server (such as Nginx or Apache) is already running permanently on port 80, this step will fail. Such a service must be stopped for the duration of the renewal.</p>




<p>After Certbot successfully starts the web server on port 80, the following happens:</p>




<ul class="wp-block-list">
	<li><strong>ACME Challenge:</strong> Certbot contacts the Let&#8217;s Encrypt servers. These provide Certbot with a random string (token).</li>
	<li><strong>Deployment:</strong> Certbot serves a small file containing the token at the URL <em>http://yourDomain/.well-known/acme-challenge/token</em></li>
	<li><strong>Verification:</strong> The Let&#8217;s Encrypt servers now attempt to access this exact URL via the public internet. If they find the file (and the correct token), it proves that you own the server under this domain.<br></li>
</ul>



<p>Once the verification is successful, the temporary standalone web server is immediately shut down and port 80 is freed again. Certbot generates a new private key (if configured) and obtains the newly signed certificate. The new certificate files are stored in the directory <em>/etc/letsencrypt/archive/</em> and the symbolic links in the directory <em>/etc/letsencrypt/live/</em> are updated.</p>




<h4 class="wp-block-heading">Create new user vaultwarden</h4>



<p>You are logged in with your standard user. From its home directory you create the user vaultwarden and the hidden directory /home/vaultwarden/.ssh. Then copy the authorized_keys file from your own .ssh directory into the newly created .ssh directory of the user vaultwarden and set the owner and permissions.</p>




<p>The new user vaultwarden should be in the sudo group so it can run commands as root via sudo, and in the docker group so it can run docker commands without sudo.</p>




<pre class="wp-block-code"><code>#logged-in with standard user and create new user 
sudo adduser vaultwarden

#create hidden .ssh directory in new users home directory
sudo mkdir /home/vaultwarden/.ssh

#copy authorized_keys file to enable ssh key login for new user
cd /home/standarduser/.ssh
sudo cp authorized_keys /home/vaultwarden/.ssh

#set the owner vaultwarden and permissions
sudo chown -R vaultwarden:vaultwarden /home/vaultwarden/.ssh
sudo chmod 700 /home/vaultwarden/.ssh
sudo chmod 600 /home/vaultwarden/.ssh/authorized_keys

#check permissions
ls -al /home/vaultwarden
drwx------ 2 vaultwarden vaultwarden 4096 May 11 14:20 .ssh

ls -l /home/vaultwarden/.ssh
-rw------- 1 vaultwarden vaultwarden 400 May 11 14:20 authorized_keys

#add user vaultwarden to sudo- and docker group
sudo usermod -aG sudo vaultwarden
sudo usermod -aG docker vaultwarden

#check vaultwarden groups (3 groups: vaultwarden sudo docker)
sudo groups vaultwarden
vaultwarden : vaultwarden sudo docker

</code></pre>



<h4 class="wp-block-heading">Create /opt/vaultwarden directory</h4>



<p>Log in as the new user vaultwarden and stay logged in as vaultwarden for the next steps. Do not perform the following steps or the installation of the Vaultwarden server as root or any other user.</p>




<p>Once logged in as vaultwarden, create a new directory /opt/vaultwarden. This is the runtime directory of your Vaultwarden application and the place from where the docker containers will be started.</p>




<pre class="wp-block-code"><code>sudo mkdir /opt/vaultwarden
sudo chown -R vaultwarden:vaultwarden /opt/vaultwarden
sudo chmod -R 700 /opt/vaultwarden

ls -ld /opt/vaultwarden
drwx------ 2 vaultwarden vaultwarden 4096 May 16 07:06 /opt/vaultwarden

</code></pre>



<p>Then change into /opt/vaultwarden and create the /opt/vaultwarden/vw-data directory, which is the host directory for the docker containers. One of these containers will be started under the container name vaultwarden. This container runs with root privileges and writes into the host directory /opt/vaultwarden/vw-data.</p>




<pre class="wp-block-code"><code>mkdir /opt/vaultwarden/vw-data

ls -l /opt/vaultwarden
drwxrwxr-x 6 vaultwarden vaultwarden 4096 May 15 15:54 vw-data

</code></pre>



<h4 class="wp-block-heading">Create /opt/vaultwarden/certs and copy your SSL certificates</h4>



<p>The container vaultwarden is the web application where you log in and manage your passwords. As we will see below, the container is started under the user vaultwarden in /opt/vaultwarden and runs with root privileges behind a reverse proxy. As reverse proxy I use <a href="https://caddyserver.com" title="Caddy Server Platform">Caddy</a>, a powerful platform. Caddy handles the requests from the outside world and forwards them via an internal docker network to the vaultwarden server.</p>




<p>Caddy must accept only HTTPS connections and will read its certificates from /opt/vaultwarden/certs, so we create this directory.</p>




<p>I use Letsencrypt certificates, which are managed automatically by certbot and stored in the directory /etc/letsencrypt on my host server. The files are organized in the following structure:</p>




<ul class="wp-block-list">
	<li>/etc/letsencrypt/live/&lt;domain&gt; contains only symlinks that point to the real files in /etc/letsencrypt/archive/&lt;domain&gt;.<br></li>
</ul>



<p>Copy the fullchain.pem and privkey.pem files from /etc/letsencrypt/live/&lt;domain&gt; to /opt/vaultwarden/certs and set the permissions accordingly. </p>




<pre class="wp-block-code"><code>#create certs directory
mkdir /opt/vaultwarden/certs

#change into certs directory
cd /opt/vaultwarden/certs

#copy the files (reading /etc/letsencrypt requires root)
sudo cp /etc/letsencrypt/live/&lt;domain&gt;/fullchain.pem fullchain.pem
sudo cp /etc/letsencrypt/live/&lt;domain&gt;/privkey.pem privkey.pem

#set owner and group vaultwarden
sudo chown vaultwarden:vaultwarden fullchain.pem
sudo chown vaultwarden:vaultwarden privkey.pem

#set access (read/write only for vaultwarden)
sudo chmod 600 privkey.pem
sudo chmod 600 fullchain.pem

</code></pre>



<p><strong>note:</strong> Unless you pass the -P or -d option, cp follows the symlinks and places the actual files (not the symlinks) into the directory /opt/vaultwarden/certs. This is important when the certificates have been renewed and new certificate files are stored behind the symlinks.</p>
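<p>You can verify this symlink-following behavior of cp in a throwaway directory; the file names here are made up for the demo.</p>

<pre class="wp-block-code"><code>cd "$(mktemp -d)"
mkdir archive live
echo "cert-version-2" &gt; archive/fullchain2.pem
ln -s ../archive/fullchain2.pem live/fullchain.pem

#cp dereferences the symlink and copies the real file
cp live/fullchain.pem copy.pem

#copy.pem is a regular file, not a link
[ ! -L copy.pem ] &amp;&amp; cat copy.pem
#prints: cert-version-2
</code></pre>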




<h3 class="wp-block-heading">Run Vaultwarden and Caddy as docker containers</h3>



<p>The following setup creates a secure, email-enabled Vaultwarden instance behind a Caddy reverse proxy with HTTPS and admin access, running entirely via Docker.</p>




<h4 class="wp-block-heading">Create docker-compose.yml</h4>



<p>This docker-compose.yml file sets up two services Vaultwarden and Caddy to host a self-hosted password manager with HTTPS support.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      DOMAIN: "&lt;yourDomain&gt;"
      SIGNUPS_ALLOWED: "false"
      SMTP_HOST: "&lt;yourSmtpServer&gt;"
      SMTP_FROM: "&lt;yourEmail&gt;"
      SMTP_FROM_NAME: "&lt;yourName&gt;"
      SMTP_USERNAME: "&lt;yourEmail&gt;"
      SMTP_PASSWORD: "&lt;yourSmtpPasswd&gt;"
      SMTP_SECURITY: "force_tls"
      SMTP_PORT: "465"
      ADMIN_TOKEN: '&lt;yourAdminToken&gt;'
    volumes:
      - ./vw-data:/data

  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 443:443
      - 443:443/udp 
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
      - ./certs:/certs:ro
    environment:
      DOMAIN: "&lt;yourDomain&gt;"
      LOG_FILE: "/data/access.log"
</code></pre>



<p><strong>vaultwarden service:</strong></p>




<p>Runs the Vaultwarden server (a lightweight Bitwarden-compatible backend).</p>




<ul class="wp-block-list">
	<li>Disables user signups (SIGNUPS_ALLOWED: "false").</li>
	<li>Configures SMTP settings for sending emails (e.g. for password resets).</li>
	<li>Sets a secure admin token (using Argon2 hash) to access the /admin interface.</li>
	<li>Persists Vaultwarden data to ./vw-data on the host.<br></li>
</ul>



<p><strong>Enable admin page access:</strong></p>




<p>Setting ADMIN_TOKEN enables login to the admin page at &lt;yourDomain&gt;/admin. First create a secure admin password. The &lt;yourAdminToken&gt; value is created by piping your admin password into argon2. The result is a hash that must be slightly modified and then inserted as &lt;yourAdminToken&gt;. It is important to use single quotes in docker-compose.yml when you insert &lt;yourAdminToken&gt;.</p>




<pre class="wp-block-code"><code>sudo apt install -y argon2
echo -n '&lt;yourAdminPassword&gt;' | argon2 somesalt -e

#This is the result of the argon2 hashing
$argon2i$v=19$m=4096,t=3,p=1$c29tZXNhbHQ$D...
</code></pre>



<p>Then you modify the hash by putting an additional $ sign in front of every $ sign; in this case that makes 5 doubled $ signs. This is necessary because docker-compose treats a single $ as the start of a variable reference.</p>




<pre class="wp-block-code"><code>#original value
$argon2i$v=19$m=4096,t=3,p=1$c29tZXNhbHQ$D...

#modified value
$$argon2i$$v=19$$m=4096,t=3,p=1$$c29tZXNhbHQ$$D...
</code></pre>
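<p>The hashing and escaping can also be done in one go with sed; &lt;yourAdminPassword&gt; is a placeholder, and argon2 must be installed as shown above.</p>

<pre class="wp-block-code"><code>#hash the admin password, then double every $ for docker-compose
HASH=$(echo -n '&lt;yourAdminPassword&gt;' | argon2 somesalt -e)
printf '%s\n' "$HASH" | sed 's/\$/$$/g'
</code></pre>

<p>The output can be pasted directly between single quotes as the ADMIN_TOKEN value.</p>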



<p>Then you put the modified value in single quotes into docker-compose.yml.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

.....

ADMIN_TOKEN: '$$argon2i$$v=19$$m=4096,t=3,p=1$$c29tZXNhbHQ$$D...'

</code></pre>



<p>Now you can access the admin page with &lt;yourDomain&gt;/admin and login with your admin password. </p>




<p><strong>caddy service:</strong></p>




<p>Uses Caddy web server to reverse proxy to Vaultwarden.</p>




<ul class="wp-block-list">
	<li>Handles HTTPS using custom certificates from ./certs.</li>
	<li>Binds to port 443 for secure access.</li>
	<li>Reads its configuration from ./Caddyfile.</li>
	<li>Logs access to /data/access.log (mapped from ./caddy-data on the host).<br></li>
</ul>



<h4 class="wp-block-heading">Create Caddyfile</h4>



<p>Caddy is a modern, powerful web server that automatically handles HTTPS, reverse proxying, and more. Caddy acts as a secure HTTPS reverse proxy, forwarding external requests to the Vaultwarden Docker container running on internal port 80.</p>




<p>This Caddyfile defines how Caddy should serve and protect your vaultwarden instance over HTTPS.</p>




<pre class="wp-block-code"><code>#Caddyfile
https://&lt;domain&gt; {
  log {
    level INFO
    output file /data/access.log {
      roll_size 10MB
      roll_keep 10
    }
  }

  # Use custom certificate and key
  tls /certs/fullchain.pem /certs/privkey.pem

  # This setting may have compatibility issues with some browsers
  # (e.g., attachment downloading on Firefox). Try disabling this
  # if you encounter issues.
  encode zstd gzip

  # Admin path matcher
  @adminPath path /admin*
  
  # Basic Auth for admin access
  handle @adminPath {
    # If admin path require basic auth
    basicauth {
      superadmin &lt;passwdhash&gt;
    }

    reverse_proxy vaultwarden:80 {
      header_up X-Real-IP {remote_host}
    }
  }

  # Everything else
  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<p><strong>Domain</strong></p>




<pre class="wp-block-code"><code>https://&lt;domain&gt;
</code></pre>



<p>This defines the domain name Caddy listens on (e.g. https://yourinstance.example.com).</p>




<p><strong>Logging</strong></p>




<pre class="wp-block-code"><code>log {
  level INFO
  output file /data/access.log {
    roll_size 10MB
    roll_keep 10
  }
}
</code></pre>



<p>Logs all access to a file inside the container (/data/access.log), with log rotation.</p>




<p><strong>TLS with Custom Certificates</strong></p>




<pre class="wp-block-code"><code>tls /certs/fullchain.pem /certs/privkey.pem
</code></pre>



<p>Use your own Let&#8217;s Encrypt certificates from mounted files rather than auto-generating them.</p>




<p><strong>Compression</strong></p>




<pre class="wp-block-code"><code>encode zstd gzip
</code></pre>



<p>Enables modern compression methods to improve performance, though may cause issues with attachments on some browsers.</p>




<p><strong>Admin Area Protection</strong></p>




<pre class="wp-block-code"><code>@adminPath path /admin*
</code></pre>



<p>Matches all requests to /admin paths.</p>




<pre class="wp-block-code"><code>handle @adminPath {
  basicauth {
    superadmin &lt;passwdhash&gt;
  }

  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<ul class="wp-block-list">
	<li>Requires HTTP Basic Auth for access to /admin.</li>
	<li>Proxies/Forwards authenticated admin requests to the Vaultwarden container.</li>
	<li>Ensures the backend sees the original client IP address.<br></li>
</ul>



<p><strong>All Other Requests</strong></p>




<pre class="wp-block-code"><code>reverse_proxy vaultwarden:80 {
  header_up X-Real-IP {remote_host}
}
</code></pre>



<ul class="wp-block-list">
	<li>Proxies/Forwards all non-/admin traffic directly to Vaultwarden container.</li>
	<li>Ensures the backend sees the original client IP address.<br></li>
</ul>
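<p>Once real values (domain, password hash) are filled in, you can let Caddy itself check the Caddyfile for errors before starting the stack. This sketch uses the same caddy:2 image and mounts the certificates so the tls directive can resolve its file paths.</p>

<pre class="wp-block-code"><code>cd /opt/vaultwarden
docker run --rm \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:ro" \
  -v "$PWD/certs:/certs:ro" \
  caddy:2 caddy validate --config /etc/caddy/Caddyfile
</code></pre>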



<p><strong>Protect your admin page</strong></p>




<p>To protect the admin page we can use HTTP Basic Auth. This means whenever you access &lt;yourDomain&gt;/admin a login window pops up in your browser and asks for a user name and a password. We use htpasswd, which is part of apache2-utils.</p>




<pre class="wp-block-code"><code>#install apache2-utils if not available
sudo apt install apache2-utils

#Create a hash for the user admin (you can use any user-name) 
htpasswd -nB admin

New password: 
Re-type new password: 
admin:$2y$05$HZukVJWhWMrT7qMO2n65bm/5JYlt5tO...

</code></pre>



<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Option
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>-n</code>
			</td>
			<td>
				Displays the result only on the console instead of writing it to a file.
			</td>
		</tr>
		<tr>
			<td>
				<code>-B</code>
			</td>
			<td>
				Uses the bcrypt hash algorithm, which is supported by Caddy and is very secure.
			</td>
		</tr>
		<tr>
			<td>
				<code>admin</code>
			</td>
			<td>
				The username for basic auth access (for example admin).
			</td>
		</tr>
	</tbody>
</table>
<figcaption>htpasswd options</figcaption>
</figure>



<p>Then you insert only the hash (without admin: ….) into the Caddyfile.</p>




<pre class="wp-block-code"><code>#Caddyfile

.....

handle @adminPath {
  basicauth {
    admin $2y$05$HZukVJWhWMrT7qMO2n65bm/5JYlt5tO...
  }
  .....
}

</code></pre>
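<p>With docker-compose.yml, the Caddyfile, and the certificates in place, you can start the stack from /opt/vaultwarden. The commands below are a sketch using the standalone docker-compose binary installed earlier.</p>

<pre class="wp-block-code"><code>cd /opt/vaultwarden

#validate the compose file before starting anything
docker-compose config --quiet

#start both services in the background
docker-compose up -d

#check that both containers are up and inspect the logs
docker-compose ps
docker-compose logs --tail=50 vaultwarden caddy
</code></pre>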



<h4 class="wp-block-heading">Create SSL certificates with Letsencrypt and certbot</h4>



<p>You can follow the instructions in my post on <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Letsencrypt">digitaldocblog.com</a>. After following them, your SSL certificates are installed on your server in standalone mode. Whenever you renew your certificates, certbot initiates the domain validation challenge and starts a temporary server that tries to listen on port 80. Because we run a web server of our own, this port is blocked, and the web server would have to be stopped before the renewal process could start.</p>




<p>To avoid this we change the certbot renewal from standalone mode to webroot. Webroot is a method that leverages your existing web server to handle the domain validation challenge without stopping it. Certbot places a temporary file with a unique token in a specific directory within your web server&#8217;s public &#8220;webroot&#8221; directory (e.g., /var/www/html/.well-known/acme-challenge/). Let&#8217;s Encrypt then sends a request to your domain to retrieve this file. Since your web server is already running, it can serve the file without any interruption to your website&#8217;s availability.</p>




<p>In our <em>docker-compose.yml</em> we defined the backend service <em>vaultwarden</em>, which listens on port 80, and the web server service <em>caddy</em>, which listens on port 443.</p>




<p>In our <em>Caddyfile</em> we specified a web server only for <em>https://&lt;Domain&gt;</em> working as a reverse proxy. Any request for <em>https://&lt;Domain&gt;</em> is forwarded to the backend service <em>vaultwarden</em> listening on port 80 (<em>vaultwarden:80</em>).</p>




<p>To enable webroot for certbot certificate renewal we must change the <em>docker-compose.yml</em> file. In the ports section of the <em>caddy</em> service we must allow port 80 and expose a webroot directory via port 80 that serves only the domain validation challenge file. Therefore we create the directory <em>/opt/vaultwarden/caddy_acme</em> on the host. In the volumes section of the caddy service we map this local directory 1:1 into the caddy container. Note: the dot notation <em>./caddy_acme:/caddy_acme</em> requires that the <em>Caddyfile</em> lives in the same directory, i.e. <em>/opt/vaultwarden/Caddyfile</em> next to <em>/opt/vaultwarden/caddy_acme</em>.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      DOMAIN: "https://bitwarden.rottlaender.eu"
      SIGNUPS_ALLOWED: "false"
      SMTP_HOST: "smtp.strato.de"
      SMTP_FROM: "bw@bitwarden.rottlaender.eu"
      SMTP_FROM_NAME: "Vaultwarden"
      SMTP_USERNAME: "bw@bitwarden.rottlaender.eu"
      SMTP_PASSWORD: "&lt;yourSmtpPasswd&gt;"
      SMTP_SECURITY: "force_tls"
      SMTP_PORT: "465"
      ADMIN_TOKEN: '$$argon2i$$v=19$$m=4096,t=3,p=1$$c29tZXNhbHQ$$D/yu7vPhcpPz8Kk7G/R34YSO+NgtLzai0wVGSGL0RDE'
    volumes:
      - ./vw-data:/data

  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 80:80/udp
      - 443:443
      - 443:443/udp
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
      - ./certs:/certs:ro
      - ./caddy_acme:/caddy_acme
    environment:
      DOMAIN: "https://bitwarden.rottlaender.eu"
      LOG_FILE: "/data/access.log"
</code></pre>



<p>With this configuration <em>caddy</em> listens on port 80 and on port 443. In the current ufw firewall configuration only ports 22 and 443 are allowed.</p>




<pre class="wp-block-code"><code>#Check the firewall status
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)
</code></pre>



<p>We must open port 80 to enable the certbot webroot certificate renewal.</p>




<pre class="wp-block-code"><code># open port 80
sudo ufw allow 80/tcp

# Check the firewall status
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
80/tcp                     ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)             
80/tcp (v6)                ALLOW IN    Anywhere (v6)
</code></pre>



<p>Then we must also change the <em>Caddyfile</em>. We insert an <em>http://..</em> site block so that <em>caddy</em> can serve the ACME challenge on port 80.</p>




<pre class="wp-block-code"><code>#Caddyfile

http://bitwarden.rottlaender.eu {
    # Serve the ACME challenge on port 80; use handle (not handle_path)
    # so the /.well-known/acme-challenge prefix stays in the file path,
    # matching where certbot writes the token below the webroot
    handle /.well-known/acme-challenge/* {
        root * /caddy_acme
        file_server
    }

    # Any other request redirect to HTTPS
    @notACME not path /.well-known/acme-challenge/*
    handle @notACME {
        redir https://bitwarden.rottlaender.eu{uri} permanent
    }
}

https://bitwarden.rottlaender.eu {
  log {
    level INFO
    output file /data/access.log {
      roll_size 10MB
      roll_keep 10
    }
  }

  # Use custom certificate and key
  tls /certs/fullchain.pem /certs/privkey.pem

  # ACME Challenge for Certbot Webroot (handle keeps the URI prefix)
  handle /.well-known/acme-challenge/* {
    root * /caddy_acme
    file_server
  }

  # This setting may have compatibility issues with some browsers (e.g., attachment downloading on Firefox). Try disabling this if you encounter issues.
  encode zstd gzip

  # Admin path matcher
  @adminPath path /admin*
  
  # Basic Auth for admin access
  handle @adminPath {
    # If admin path require basic auth
    basicauth {
      superadmin $2y$05$HZukVJWhWMrT7qMOIenLkuf2n65bm/6n260TPXKb4Wn825JYlt5tO  # Password Hash
    }

    reverse_proxy vaultwarden:80 {
      header_up X-Real-IP {remote_host}
    }
  }

  # Everything else
  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<p>Then we must also change <em>/etc/letsencrypt/renewal/bitwarden.rottlaender.eu.conf</em> to configure certbot to use webroot instead of standalone. Change the <em>authenticator</em> directive to <em>webroot</em> and add the <em>webroot_path</em>. Everything else stays the same.</p>




<pre class="wp-block-code"><code># renew_before_expiry = 30 days
version = 1.21.0
archive_dir = /etc/letsencrypt/archive/bitwarden.rottlaender.eu
cert = /etc/letsencrypt/live/bitwarden.rottlaender.eu/cert.pem
privkey = /etc/letsencrypt/live/bitwarden.rottlaender.eu/privkey.pem
chain = /etc/letsencrypt/live/bitwarden.rottlaender.eu/chain.pem
fullchain = /etc/letsencrypt/live/bitwarden.rottlaender.eu/fullchain.pem

# Options used in the renewal process
[renewalparams]
account = 7844ed1ad487ff139e3adaa80aa7bbab
authenticator = webroot
webroot_path = /opt/vaultwarden/caddy_acme
server = https://acme-v02.api.letsencrypt.org/directory
renew_hook = /usr/local/bin/sync-certs.sh

</code></pre>



<p>And finally we must change the entry in the <em>crontab</em>. Before the <em>deploy-hook</em> we insert <em>--webroot -w /opt/vaultwarden/caddy_acme</em>. This renewal will be executed daily at 03:30h. If the certificates are still valid (no renewal required), the <em>deploy-hook</em> is skipped; only when a renewal is actually performed is the script <em>/usr/local/bin/sync-certs.sh</em> executed.</p>




<pre class="wp-block-code"><code>#crontab

30 3 * * * sh -c '/usr/bin/certbot renew --webroot -w /opt/vaultwarden/caddy_acme --deploy-hook /usr/local/bin/sync-certs.sh &gt;&gt; /var/log/certbot-renew.log 2&gt;&amp;1'
</code></pre>
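<p>The five leading fields of that crontab line are the schedule. A small sketch that splits them up (<em>set -f</em> keeps the shell from expanding the <em>*</em> fields):</p>

```shell
# Decode the schedule part of the crontab entry "30 3 * * *":
set -f                      # disable globbing so "*" stays literal
SCHEDULE="30 3 * * *"
set -- $SCHEDULE            # split into positional parameters
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5"
```

<p>So the job runs at minute 30 of hour 3, every day of every month.</p>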



<h4 class="wp-block-heading">Create sync-certs.sh script and root crontab</h4>



<p>This script copies renewed Let&#8217;s Encrypt certificates from the standard location to the custom destination <em>/opt/vaultwarden/certs</em>, sets strict permissions, assigns the correct ownership, and finally restarts the Caddy container (via <em>docker-compose restart caddy</em>) so that it picks up the new certificates.</p>




<pre class="wp-block-code"><code>#!/bin/bash

# Variables
DOMAIN="&lt;Domain&gt;"
SRC="/etc/letsencrypt/live/$DOMAIN"
DEST="/opt/vaultwarden/certs"

# Check Source Directory
if [ ! -d "$SRC" ]; then
    echo "Certificate Path $SRC not found"
    exit 1
fi

# Check Destination Directory
if [ ! -d "$DEST" ]; then
    mkdir -p "$DEST"
    chown vaultwarden:vaultwarden "$DEST"
    chmod 700 "$DEST"
    echo "Target Path $DEST created"
fi

# Copy files (overwrite)
cp "$SRC/fullchain.pem" "$DEST/fullchain.pem"
cp "$SRC/privkey.pem" "$DEST/privkey.pem"

# set owner:group vaultwarden
chown vaultwarden:vaultwarden "$DEST/fullchain.pem"
chown vaultwarden:vaultwarden "$DEST/privkey.pem"

# set access (read write only vaultwarden)
chmod 600 "$DEST/privkey.pem"
chmod 600 "$DEST/fullchain.pem"

echo "[sync-certs] Certificates for $DOMAIN synced"

# successful sync of certificates – caddy re-start
echo "[sync-certs] re-start caddy ..."
cd /opt/vaultwarden
/usr/local/bin/docker-compose restart caddy

echo "[sync-certs] caddy reloaded new certificates"
</code></pre>



<p><strong>Check Source Directory</strong></p>




<pre class="wp-block-code"><code>if [ ! -d "$SRC" ]; then
    echo "Certificate Path $SRC not found"
    exit 1
fi
</code></pre>



<ul class="wp-block-list">
	<li>What it checks: Whether the Let&#8217;s Encrypt certificate source directory for the domain exists.</li>
	<li>Why it&#8217;s needed:<br><ul>
			<li>Let’s Encrypt stores certificates as symlinks in /etc/letsencrypt/live/&lt;domain&gt;.</li>
			<li>If this folder doesn&#8217;t exist, the script stops immediately to avoid copying from a missing or invalid source.</li>
		</ul></li>
	<li>Fail-safe: Prevents copying non-existent files, which would cause later commands to fail.<br></li>
</ul>



<p><strong>Check and Create Destination Directory</strong></p>




<pre class="wp-block-code"><code>if [ ! -d "$DEST" ]; then
    mkdir -p "$DEST"
    chown vaultwarden:vaultwarden "$DEST"
    chmod 700 "$DEST"
    echo "Target Path $DEST created"
fi
</code></pre>



<ul class="wp-block-list">
	<li>What it checks: Whether the destination directory for the copied certificates exists.</li>
	<li>If not:<br><ul>
			<li>It creates the directory (mkdir -p ensures parent paths are created if missing).</li>
			<li>Sets secure permissions:<br><ul>
					<li>Owner: vaultwarden</li>
					<li>Permissions: 700 – only the vaultwarden user can access the directory.</li>
				</ul></li>
		</ul></li>
	<li>Why this matters:<br><ul>
			<li>The vaultwarden container or process needs access to the certificates.</li>
			<li>These permissions ensure only vaultwarden can read the certs, improving security.<br></li>
		</ul></li>
</ul>
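<p>The effect of these permission settings can be reproduced in a throwaway directory (the real paths require root, so this sketch uses <em>mktemp</em>):</p>

```shell
# Reproduce the permission scheme from sync-certs.sh:
DEST=$(mktemp -d)                  # stands in for /opt/vaultwarden/certs
chmod 700 "$DEST"                  # only the owner may enter the directory
touch "$DEST/privkey.pem"
chmod 600 "$DEST/privkey.pem"      # only the owner may read/write the key
stat -c '%a %n' "$DEST" "$DEST/privkey.pem"
```

<p>The <em>stat</em> output shows <em>700</em> for the directory and <em>600</em> for the key file.</p>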



<h4 class="wp-block-heading">Start, Stop and Check containers</h4>



<p>Here are the most important docker-compose commands. Run these commands from the directory that contains the <code>docker-compose.yml</code> file and make sure the logged-in user is in the <em>docker</em> group (otherwise the commands must be run with <em>sudo</em>).  </p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Command
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>docker-compose up -d</code>
			</td>
			<td>
				Start all services defined in <code>docker-compose.yml</code> in detached mode (background).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose down</code>
			</td>
			<td>
				Stop and remove all services and associated networks/volumes (defined in the file).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose restart</code>
			</td>
			<td>
				Restart all services.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose stop</code>
			</td>
			<td>
				Stop all running services (without removing them).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose start</code>
			</td>
			<td>
				Start services that were previously stopped.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose logs</code>
			</td>
			<td>
				Show logs from all services.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose logs -f</code>
			</td>
			<td>
				Tail (follow) logs in real time.
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Docker-compose commands to start, stop and check</figcaption>
</figure>



<p>Here are the most important docker commands to check the status of containers. </p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Command
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>docker ps</code>
			</td>
			<td>
				List <strong>running</strong> containers.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker ps -a</code>
			</td>
			<td>
				List <strong>all</strong> containers (running + stopped).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker logs &lt;container-name&gt;</code>
			</td>
			<td>
				Show logs of a specific container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker logs -f &lt;container-name&gt;</code>
			</td>
			<td>
				Tail logs in real time.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker inspect &lt;container-name&gt;</code>
			</td>
			<td>
				Show detailed info about a container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker top &lt;container-name&gt;</code>
			</td>
			<td>
				Show running processes inside the container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker exec -it &lt;container-name&gt; /bin/sh</code>
			</td>
			<td>
				Start a shell session in the container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker stats</code>
			</td>
			<td>
				Live resource usage (CPU, RAM, etc.) of containers.
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Docker commands to check containers</figcaption>
</figure>



<h4 class="wp-block-heading">Backup the vaultwarden data</h4>



<p>To back up your database you must log in to your remote server and navigate to the directory <em>/opt/vaultwarden/vw-data</em>. </p>




<pre class="wp-block-code"><code>cd /opt/vaultwarden/vw-data
ls -l /opt/vaultwarden/vw-data

drwxr-xr-x 2 root root    4096 Mai 14 07:29 attachments
-rw-r--r-- 1 root root 1437696 Mai 29 10:03 db_20250529_080308.sqlite3
-rw-r--r-- 1 root root 1437696 Mai 29 10:04 db_20250529_080448.sqlite3
-rw-r--r-- 1 root root 1445888 Aug  9 08:31 db_20250809_063113.sqlite3
-rw-r--r-- 1 root root 1552384 Aug 10 07:26 db.sqlite3
-rw-r--r-- 1 root root   32768 Sep  3 08:27 db.sqlite3-shm
-rw-r--r-- 1 root root  135992 Sep  3 08:27 db.sqlite3-wal
drwxr-xr-x 2 root root   16384 Sep  3 07:57 icon_cache
-rw-r--r-- 1 root root    1675 Mai 14 07:29 rsa_key.pem
drwxr-xr-x 2 root root    4096 Mai 14 07:29 sends
drwxr-xr-x 2 root root    4096 Mai 14 07:29 tmp

</code></pre>



<p>Before we run the backup here are some background infos.</p>




<p><strong>Note:</strong> Please be aware that you are currently navigating on your <em>remote server</em>, not inside the <em>vaultwarden</em> container. Under the <em>vaultwarden</em> service and its <em>volumes</em> section in <em>docker-compose.yml</em> we defined that the directory <em>./vw-data</em> is mounted into the container at <em>/data</em>. This means the two directories show the same content, which you can verify by navigating inside the container as follows.</p>




<p>To navigate within the container <em>vaultwarden</em> you can run the following commands. </p>




<pre class="wp-block-code"><code>#list your containers home directory
docker exec vaultwarden ls -l /

#list the data directory within your container
docker exec vaultwarden ls -l /data
</code></pre>



<p>With <em>docker exec vaultwarden ls -l /</em> you list the container&#8217;s root directory. Here you find, among others, the file <em>vaultwarden</em>, which is basically the main program. With <em>docker exec vaultwarden ls -l /data</em> you list the files in <em>/data</em> on the container side. You see exactly the same files as in <em>./vw-data</em> on your remote server.</p>




<p>All data of your vaultwarden service is stored in <em>/data</em> on the container side. Here you have the database <em>db.sqlite3</em> as well as various directories and other files. </p>




<p>To backup these data you run the program <em>vaultwarden</em> with the option <em>backup</em>. </p>




<pre class="wp-block-code"><code>#interactive mode
docker exec -it vaultwarden /vaultwarden backup

#standard mode
docker exec vaultwarden /vaultwarden backup
Backup to 'data/db_20250905_045937.sqlite3' was successful

</code></pre>



<p>The <em>-it</em> flags are not necessary here; they merely attach the command to an interactive terminal session. </p>




<p>This command:</p>




<ol class="wp-block-list">
	<li>Reads the database location and configuration on the container side in <em>/data</em>.</li>
	<li>Creates a timestamped snapshot of the SQLite database <em>db.sqlite3</em> (as the output above shows, e.g. <em>db_20250905_045937.sqlite3</em>).</li>
	<li>Stores the snapshot in the <em>/data</em> directory on the container side. <br></li>
</ol>
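<p>Since the backup command only snapshots the database, the remaining data (attachments, sends, the RSA key) can be archived separately, for example with <em>tar</em>. The following is a self-contained sketch; on the server, the source would be <em>/opt/vaultwarden/vw-data</em>:</p>

```shell
# Archive the non-database data next to the database snapshot.
# mktemp stands in for /opt/vaultwarden/vw-data so the sketch is runnable.
SRC=$(mktemp -d)
mkdir -p "$SRC/attachments" "$SRC/sends"
touch "$SRC/rsa_key.pem"

OUT="$SRC/vw-extra.tar.gz"
tar -czf "$OUT" -C "$SRC" attachments sends rsa_key.pem
tar -tzf "$OUT"                    # list the archive contents
```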



<p>After executing this command the backup will also be present in <em>./vw-data</em> on your remote server. </p>




<p>Then you can download the backup from your remote server to your local computer. <strong>Note:</strong> Your local computer is in my case a Mac where I sit in front of and from where I connect to my remote server via <em>ssh</em>.</p>




<p>To download the backup, navigate on your local computer to the directory where you want to store your backup files. Then run the <em>scp</em> command with a <em>.</em> at the end, which downloads the backup file from the remote server into the current directory, i.e. your local backup directory. </p>




<pre class="wp-block-code"><code>#on your local computer
cd /Users/patrick/Software/docker/vaultwarden/backup

scp user@remote-host:/opt/vaultwarden/vw-data/&lt;backupfile&gt; .

</code></pre>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Containerize a nodejs app and nginx with docker on Ubuntu 22.04</title>
		<link>https://digitaldocblog.com/webserver/containerize-a-nodejs-app-and-nginx-with-docker-on-ubuntu-2204/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 03 May 2025 15:56:04 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Web-Development]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[NginX]]></category>
		<category><![CDATA[Node.js]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=232</guid>

					<description><![CDATA[What we need: We must check if these packages are already installed on our system. Therefore we use the following commands in the terminal and see if there is any&#8230;]]></description>
										<content:encoded><![CDATA[
<p>What we need:</p>



<ol class="wp-block-list">
<li><strong>docker:</strong> Docker is a tool for building, running, and managing containers. A container is a lightweight, isolated environment that packages an application and all its dependencies (Docker application).</li>



<li><strong>docker-compose:</strong> Docker Compose is a tool for defining and running multi-container Docker applications using a single file called docker-compose.yml. </li>



<li><strong>node.js:</strong> Node.js is a JavaScript runtime that lets you run JavaScript outside the browser, typically on the server. It’s built on Chrome’s V8 engine, and it’s great for building fast, scalable network applications, like APIs or web servers.</li>



<li><strong>npm:</strong> npm stands for Node Package Manager. It&#8217;s a tool that comes with Node.js and is used to Install packages (libraries, tools, frameworks), manage project dependencies and share your own packages with others.</li>



<li><strong>curl:</strong> curl is a command-line tool used to send requests to URLs. It lets you interact with APIs or download content from the internet right from your terminal.</li>



<li><strong>gnupg:</strong> GnuPG (or GPG, short for Gnu Privacy Guard) is a tool for encryption and signing data and communications. It uses public-key cryptography to encrypt, decrypt, sign, and verify files or messages.</li>



<li> <strong>ca-certificates:</strong> A collection of trusted root certificates used to validate HTTPS connections.<br></li>
</ol>



<p>We must check whether these packages are already installed on our system. To do so, run the following commands in the terminal; if a command is not found, the corresponding package is not installed and we continue with the installation as described below. </p>



<pre class="wp-block-code"><code># check the docker components
docker --version
docker-compose --version

#check the node and npm components
node --version
npm --version

#check if required dependencies are already installed
curl --version
gpg --version
</code></pre>



<p>In case some of these packages are already installed on your system you need to reduce the installation scope of the packages accordingly. </p>



<p>We assume that none of these packages are installed yet and go through the installation process step-by-step. </p>



<p><strong>Step 1:</strong> Install node.js and npm from the standard Ubuntu resources. </p>



<p><strong>Step 2:</strong> </p>



<ul class="wp-block-list">
<li>Prepare the system for secure downloads from Docker resources and </li>



<li>install ca-certificates, curl and gnupg from standard Ubuntu resources</li>
</ul>



<p><strong>Step 3:</strong> Install Docker from Docker resources.</p>



<p><strong>Step 4:</strong> Install docker-compose standalone from Docker resources. </p>



<h3 class="wp-block-heading">Install Node.js and npm from Ubuntu Resources</h3>



<p>Before we start with the installation we update and upgrade all packages. Node.js is available in Ubuntu’s repositories, so you can install it with the following commands.</p>



<pre class="wp-block-code"><code>sudo apt update
sudo apt upgrade

sudo apt install -y nodejs npm
</code></pre>



<p>Verify the installation:</p>



<pre class="wp-block-code"><code>node -v
npm -v
</code></pre>



<h3 class="wp-block-heading">Prepare the System for secure Downloads from Docker</h3>



<p>To prepare our system we ensure that <em>ca-certificates, curl and gnupg</em> are available on our system.</p>



<p>To be able to install the Docker packages from the <em>external</em> (non-Ubuntu) Docker repository, <em>apt</em> must know where these resources are; otherwise <em>apt</em> would install the packages from the Ubuntu repositories, which is the default but not what we want. Therefore we must add the Docker repository to the <em>apt</em> tool. The complete process can be followed on the <a href="https://docs.docker.com/engine/install/ubuntu/" title="Install Docker Ubuntu">Docker Manual Pages</a>. </p>



<p>When we add the Docker repository, the packages from that repository are digitally signed to ensure that these packages really come from docker. <em>gnupg</em> is a tool that allows our system to check these signatures against a trusted Docker GPG key. Therefore <em>gnupg</em> must be available on our system. </p>



<p>To make sure that the GPG key is available we must download the key from the Docker site. For the download we use the <em>curl</em> tool. Therefore <em>curl</em> must be available on our system.</p>



<p>We access the Docker site via HTTPS. Here <em>ca-certificates</em> comes into play. <em>ca-certificates</em> is a collection of trusted root certificates used to validate HTTPS connections. When downloading the Docker GPG key or accessing the Docker <em>apt</em> repository via HTTPS, Ubuntu checks the site’s SSL certificate against the collection of trusted root certificates. Therefore <em> ca-certificates</em> must be available on our system.</p>



<p>To check if <em> ca-certificates</em> is already installed  we run the following command:</p>



<pre class="wp-block-code"><code>dpkg -l | grep ca-certificates
</code></pre>



<p><strong>Note:</strong> The <em>dpkg</em> command stands for Debian Package Manager and is used to manage <em>.deb</em> packages on Debian-based systems like Ubuntu. It works at a lower level than <em>apt</em>, which is a higher-level package management tool that uses <em>dpkg</em> under the hood.</p>



<p>If it’s installed, you’ll see output like this:</p>



<pre class="wp-block-code"><code>ii ca-certificates 20230311ubuntu0.22.04.1 all Common CA..
</code></pre>



<p>In this case you do not need to install <em>ca-certificates</em> as described below, but it is highly recommended to update the collection of trusted root certificates before you continue.</p>



<pre class="wp-block-code"><code>sudo update-ca-certificates
</code></pre>



<p>We assume that we must install <em>ca-certificates</em>, <em>curl</em> and <em>gnupg</em>. First we update the system package list to ensure everything is up to date; this checks the installed packages for updates and upgrades them if required in one step. Then we install <em>ca-certificates</em>, <em>curl</em> and <em>gnupg</em>.</p>



<pre class="wp-block-code"><code>sudo apt update &amp;&amp; sudo apt upgrade -y

sudo apt install -y ca-certificates curl gnupg
</code></pre>



<p><strong>Add Docker GPG key:</strong></p>



<p>To install the GPG keys from Docker we run the following command.</p>



<pre class="wp-block-code"><code>sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>



<p>Let&#8217;s break down the full set of commands step by step:</p>



<p><strong>sudo install -m 0755 -d /etc/apt/keyrings ….</strong></p>



<ul class="wp-block-list">
<li><em>sudo</em> runs the command with superuser (root) privileges.  </li>



<li><em>install</em> is used for copying files and setting permissions.</li>
</ul>



<p><strong>note:</strong> This command is <code>sudo install</code>, not <code>sudo apt install</code>. <em>apt install</em> installs software packages from Ubuntu&#8217;s package repositories, e.g. <code>sudo apt install docker-ce</code>; it&#8217;s used to install applications.</p>



<p><em>install</em> on its own is a Unix command (part of coreutils) used to create directories, copy files, and set permissions in a single step. Example: <code>sudo install -m 0755 -d /etc/apt/keyrings</code>. It&#8217;s used to prepare the system, not to install software. So this command creates the <em>/etc/apt/keyrings</em> folder with secure permissions, which is later used to store GPG keyring files (such as Docker&#8217;s signing key).</p>



<ul class="wp-block-list">
<li>-m 0755 sets file permissions:<br>
<ul class="wp-block-list">
<li><em>0755</em> means:</li>



<li>1st (0) sets no special mode bits (setuid, setgid, sticky bit).</li>



<li>2nd (7) is for the Owner (root) having <em>1 x read (4) + 1 x write (2) + 1 x execute (1) = 7</em> permissions.</li>



<li>3rd (5) is for the Group (root) having <em>1 x read (4) + 0 x write (2) + 1 x execute (1) = 5</em> permissions (no write).</li>



<li>4th (5) is for the Others having <em>1 x read (4) + 0 x write (2) + 1 x execute (1) = 5</em> permissions (no write).</li>
</ul>
</li>



<li>-d tells install to create a directory</li>



<li><em>/etc/apt/keyrings</em> is the target directory where the Docker GPG key will be stored.</li>
</ul>



<p>What it does?</p>



<ul class="wp-block-list">
<li>Ensures that the /etc/apt/keyrings directory exists.</li>



<li>Sets the correct permissions (readable but not writable by non-root users).</li>



<li>This is a <em>security best practice</em> to keep GPG keys safe from tampering.<br></li>
</ul>
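<p>The effect of <em>install -m 0755 -d</em> can be tried with a temporary path (the article targets <em>/etc/apt/keyrings</em>, which requires root):</p>

```shell
# Create a directory and set its mode in a single step:
DIR="$(mktemp -d)/keyrings"
install -m 0755 -d "$DIR"
stat -c '%a %F' "$DIR"             # prints: 755 directory
```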



<p><strong> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null …</strong></p>



<ul class="wp-block-list">
<li><em>curl</em> a command-line tool to fetch files from a URL (we have installed it before)</li>



<li>-fsSL flags to control <em>curl</em> behavior:<br>
<ul class="wp-block-list">
<li>-f (fail silently on server http errors like 404 &#8211; site or resource not found).</li>



<li>-s (silent mode, no progress output).</li>



<li>-S (shows error messages if -s is used).</li>



<li>-L (follows redirects if the URL points elsewhere).</li>
</ul>
</li>



<li><em>https://download.docker.com/linux/ubuntu/gpg</em> the URL for Docker’s GPG key file (the file name on the docker site is <em>gpg</em>).</li>



<li>| (pipe) passes the downloaded data (the <em>gpg</em> file) to another command. In this case the data will be passed to the following <em>sudo</em> command.</li>



<li><em>sudo tee /etc/apt/keyrings/docker.asc</em> writes the key to before created directory <em>/etc/apt/keyrings/docker.asc</em>:<br>
<ul class="wp-block-list">
<li><em>tee</em> writes the output to a file (here it is <em>docker.asc</em>) while also displaying it in the terminal.</li>



<li><em>sudo</em> ensures that the file is written with root permissions.</li>
</ul>
</li>



<li>> /dev/null redirects standard output to <em>/dev/null</em> to suppress unnecessary output. The <em>tee</em> command can also display and write at the same time, unless you silence it with > /dev/null.<br></li>
</ul>



<p><strong>note:</strong> <em>sudo tee…</em> runs with root permissions, so the file can be written even to protected directories such as <em>/etc/apt/keyrings/</em> (we set the permissions to 0755, see above). You can also run the <em>curl</em> command with root permissions (<em>sudo curl …</em>) and write the output directly to a file with the <em>-o</em> option: <em>sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc</em>. This is suggested on the <a href="https://docs.docker.com/engine/install/ubuntu/" title="Docker on Ubuntu">Docker Manual Page</a>, but both ways work.</p>



<p>What it does?</p>



<ul class="wp-block-list">
<li>Downloads Docker&#8217;s official GPG key.</li>



<li>Saves it securely in /etc/apt/keyrings/docker.asc.</li>



<li>Ensures the key isn’t printed to the terminal.<br></li>
</ul>



<p><strong> sudo chmod a+r /etc/apt/keyrings/docker.asc</strong></p>



<ul class="wp-block-list">
<li><em>sudo</em> runs the command (in this case the <em>chmod</em> command) as root.</li>



<li><em>chmod</em> modifies file permissions.<br>
<ul class="wp-block-list">
<li><em>a+r</em> grants <em>read (r) permission</em> to <em>all users (a)</em>.</li>
</ul>
</li>



<li><em>/etc/apt/keyrings/docker.asc</em> the file whose permissions are being modified.</li>
</ul>



<p>What it does?</p>



<ul class="wp-block-list">
<li>Ensures that all users (including apt processes) can read the GPG key.</li>



<li>This is necessary so that <em>apt</em> can verify Docker package signatures when installing updates.<br></li>
</ul>



<p>Previously, GPG files were stored in <em>/etc/apt/trusted.gpg</em>. This has changed.</p>



<p>Why Is This Necessary?</p>



<ol class="wp-block-list">
<li>Security:<br>
<ul class="wp-block-list">
<li>Storing GPG keys in <em>/etc/apt/keyrings/</em> instead of <em>/etc/apt/trusted.gpg</em> is a best practice.</li>



<li>Prevents malicious modifications to package signatures.</li>
</ul>
</li>



<li>Package Verification:<br>
<ul class="wp-block-list">
<li>The GPG key allows Ubuntu’s package manager (apt) to verify that Docker packages are genuine and not tampered with.</li>
</ul>
</li>



<li>Future-proofing:<br>
<ul class="wp-block-list">
<li>Newer versions of Ubuntu prefer keys in <em>/etc/apt/keyrings/</em> instead of the older <em>/etc/apt/trusted.gpg</em>.<br></li>
</ul>
</li>
</ol>



<p>Final Summary:</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Command
			</th><th>
				Purpose
			</th></tr></thead><tbody><tr><td>
				sudo install -m 0755 -d /etc/apt/keyrings
			</td><td>
				Creates a secure directory for storing package keys.
			</td></tr><tr><td>
				curl -fsSL &#8230; | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null
			</td><td>
				Downloads and saves Docker&#8217;s GPG key.
			</td></tr><tr><td>
				sudo chmod a+r /etc/apt/keyrings/docker.asc
			</td><td>
				Ensures the key can be read by apt.
			</td></tr></tbody></table><figcaption class="wp-element-caption">Add Docker GPG key</figcaption></figure>



<p><strong>Add the Docker repository:</strong></p>



<p>To install the docker resources to the <em>apt</em> sources list we run the following command. </p>



<pre class="wp-block-code"><code>echo "deb &#91;arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>



<p>Let&#8217;s break down the command step by step:</p>



<p><strong> echo &#8222;deb….“</strong></p>



<p>The <em>echo</em> command outputs the text between the quotation marks. This is the APT repository entry for Docker. Let&#8217;s analyze its components:</p>



<ul class="wp-block-list">
<li>deb → Indicates that this is a Debian-based software repository.</li>



<li>arch=$(dpkg --print-architecture):<br>
<ul class="wp-block-list">
<li><em>dpkg --print-architecture</em> dynamically retrieves the system architecture (e.g., amd64, arm64).</li>



<li>This ensures that the correct package version for your system&#8217;s architecture is used.</li>
</ul>
</li>



<li><em>signed-by=/etc/apt/keyrings/docker.asc</em> specifies the location of the docker GPG key <em>docker.asc</em> (we installed it before), which is used to verify the authenticity of packages downloaded from the repository.</li>



<li><em>https://download.docker.com/linux/ubuntu</em> the URL of Docker’s official repository.</li>



<li><em>$(lsb_release -cs)</em> dynamically fetches the codename of the Ubuntu version (e.g., jammy for Ubuntu 22.04).<br>
<ul class="wp-block-list">
<li>This ensures that the correct repository for the current Ubuntu version is used.</li>
</ul>
</li>



<li><em>stable</em> specifies that we are using the stable release channel of Docker.</li>
</ul>



<p><strong> | sudo tee /etc/apt/sources.list.d/docker.list</strong></p>



<ul class="wp-block-list">
<li>The | (pipe) takes the output of echo and passes it to the <em>tee</em> command.</li>



<li><em>sudo tee /etc/apt/sources.list.d/docker.list</em> does the following:<br>
<ul class="wp-block-list">
<li>tee writes the output to a file (<em>/etc/apt/sources.list.d/docker.list</em>).</li>



<li>sudo is required because writing to <em>/etc/apt/sources.list.d/</em> requires root privileges.</li>
</ul>
</li>
</ul>



<p><strong>&gt; /dev/null</strong></p>



<ul class="wp-block-list">
<li>The <em>> /dev/null</em> part discards the standard output of the tee command.<br>
<ul class="wp-block-list">
<li>This prevents unnecessary output from being displayed in the terminal.</li>



<li>Without this, tee would both write to the file and display the text on the screen.<br></li>
</ul>
</li>
</ul>
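<p>After running the command, <em>/etc/apt/sources.list.d/docker.list</em> contains a single line. On Ubuntu 22.04 (codename <em>jammy</em>) on an amd64 machine the two expansions would resolve to something like:</p>

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable
```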



<h3 class="wp-block-heading">Install Docker from Docker Resources</h3>



<p>Now, update the package list again and install Docker.</p>



<pre class="wp-block-code"><code>sudo apt update

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin
</code></pre>



<p>This command installs the following Docker key components using the <em>apt</em> package manager on Ubuntu (the <em>-y</em> option automatically answers &#8222;yes&#8220; to all prompts, so the install runs without asking for confirmation).</p>



<p><strong>docker-ce</strong></p>



<ul class="wp-block-list">
<li>Docker Community Edition</li>



<li>This is the core Docker Engine, the daemon that runs containers.</li>



<li>Installs the Docker server that manages images, containers, volumes, and networks.</li>
</ul>



<p><strong>docker-ce-cli</strong></p>



<ul class="wp-block-list">
<li>Docker Command-Line Interface</li>



<li>This is the docker command you use in your terminal (e.g., docker run, docker ps, etc.).</li>



<li>Separates the CLI from the engine so they can be updated independently.</li>
</ul>



<p><strong>containerd.io</strong></p>



<ul class="wp-block-list">
<li>Container runtime</li>



<li>A lightweight, powerful runtime for containers, used internally by Docker.</li>



<li>Handles the actual low-level execution of containers.</li>
</ul>



<p><strong>docker-buildx-plugin</strong></p>



<ul class="wp-block-list">
<li>BuildKit-powered Docker build plugin</li>



<li>Adds docker buildx functionality for advanced builds, multi-arch images, and caching strategies.</li>



<li>Useful when building complex container images.<br></li>
</ul>



<p><strong>Note:</strong> In some documentation you will see that the <em>sudo apt install…</em> command also includes the <em>docker-compose-plugin</em>. The docker-compose-plugin is not required here because we are using the docker-compose standalone package (see below). The docker-compose-plugin is integrated into the Docker CLI and can replace the docker-compose standalone binary. We use the standalone version because of its lightweight minimal install, backward compatibility, and easy, independent manual version control. </p>



<p>It is highly recommended to omit the <em>docker-compose-plugin</em> from your apt install command if you plan to install the standalone Docker Compose binary manually, as we will do later. Having both versions installed can cause confusion, especially when scripts assume one or the other. Also, Docker might prioritize the plugin version in newer setups, which could conflict with our preferred standalone Docker Compose setup. The following table illustrates the problem: the two command styles differ only very slightly. </p>



<figure class="wp-block-table"><table><thead><tr><th>
				Type
			</th><th>
				Command Style
			</th><th>
				Notes
			</th></tr></thead><tbody><tr><td>
				Plugin version
			</td><td>
				<code>docker compose</code>
			</td><td>
				Comes as <code>docker-compose-plugin</code>, tied to Docker CLI
			</td></tr><tr><td>
				Standalone version
			</td><td>
				<code>docker-compose</code>
			</td><td>
				Installed separately, as an independent binary
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker Compose plugin versus standalone</figcaption></figure>
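<p>If you are unsure which variant is present on a machine, a quick check of the PATH can help. The following is a minimal sketch; it only reports the standalone binary and makes no assumptions about versions:</p>

```shell
# Check whether the standalone docker-compose binary is on the PATH
if command -v docker-compose >/dev/null 2>&1; then
  echo "standalone docker-compose found at: $(command -v docker-compose)"
else
  echo "standalone docker-compose not installed"
fi
```

<p>The plugin variant can be checked analogously with <em>docker compose version</em>, which requires a working Docker CLI.</p>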



<p>In case the docker-compose-plugin has been installed on your system you can remove it with the following command:</p>



<pre class="wp-block-code"><code>sudo apt remove docker-compose-plugin
</code></pre>



<p>This removes the plugin version that integrates into the <em>docker compose</em> command. Later, once we have installed the standalone version of Docker Compose, we will use the command with a <em>dash</em> instead of a <em>space</em>, i.e. <em>docker-compose</em>. </p>



<p>Verify that Docker is installed correctly:</p>



<pre class="wp-block-code"><code>sudo docker --version
</code></pre>



<p>Enable and start the Docker service:</p>



<pre class="wp-block-code"><code>sudo systemctl enable docker
sudo systemctl start docker
</code></pre>



<p>Test Docker by running the hello-world image.</p>



<pre class="wp-block-code"><code>sudo docker run hello-world
</code></pre>



<p>This command is a quick test to verify that Docker is installed and working correctly. </p>



<p><strong> sudo</strong></p>



<ul class="wp-block-list">
<li>Runs the command with superuser (root) privileges.</li>



<li>Required unless your user is in the docker group.</li>



<li>Docker needs elevated permissions to communicate with the Docker daemon (which runs as root).</li>
</ul>



<p><strong>docker</strong></p>



<ul class="wp-block-list">
<li>The main Docker CLI (Command-Line Interface) tool.</li>



<li>Used to interact with Docker Engine to manage containers, images, networks, volumes, etc.</li>
</ul>



<p><strong>run</strong></p>



<ul class="wp-block-list">
<li>Tells Docker to create a new container and start it based on the <em> hello-world</em> image you specify.<br>It does the following:<br>
<ul class="wp-block-list">
<li>Pulls the image (if it&#8217;s not already downloaded).</li>



<li>Creates a new container from that image.</li>



<li>Starts and runs the container.</li>



<li>Outputs the result and then exits (for short-lived containers like hello-world).</li>
</ul>
</li>
</ul>



<p><strong> hello-world</strong></p>



<ul class="wp-block-list">
<li>This is the name of the Docker image.</li>



<li>It&#8217;s an official image maintained by Docker, specifically designed to test Docker installations.<br></li>
</ul>



<h3 class="wp-block-heading">Install standalone docker-compose from Docker Resources</h3>



<p>Installing the standalone <em>docker-compose</em> is useful when you:</p>



<ul class="wp-block-list">
<li>Need compatibility with legacy tools or scripts</li>



<li>Want to control the exact version</li>



<li>Prefer a lightweight, portable binary<br></li>
</ul>



<p>The following command downloads the latest standalone <em>docker-compose</em> binary and saves it to a system-wide location.</p>



<pre class="wp-block-code"><code>sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
</code></pre>



<p> Let&#8217;s break down the command step by step:</p>



<p><strong>sudo</strong></p>



<ul class="wp-block-list">
<li>Runs the command with root privileges.<br>
<ul class="wp-block-list">
<li>Required because /usr/local/bin is a protected directory that only root can write to.</li>
</ul>
</li>
</ul>



<p><strong>curl</strong></p>



<ul class="wp-block-list">
<li>A command-line tool used to download files from the internet.</li>
</ul>



<p><strong>-L</strong></p>



<ul class="wp-block-list">
<li>Tells curl to follow redirects.</li>



<li>GitHub uses redirects for release URLs, so this flag ensures the final binary is actually downloaded.</li>
</ul>



<p><strong>"https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)"</strong></p>



<p>URL with substitution. This is the dynamic download URL for the latest <em>docker-compose</em> release. The shell substitutions ensure the correct binary is downloaded for your system.</p>



<ul class="wp-block-list">
<li><em>$(uname -s)</em> returns the operating system name (e.g., Linux, Darwin).</li>



<li><em>$(uname -m)</em> returns the architecture (e.g., x86_64, arm64).</li>
</ul>



<p>Example Output: <em>https://github.com/docker/compose/releases/latest/download/docker-compose-Linux-x86_64</em></p>
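<p>You can preview the URL that the substitution produces on your machine without downloading anything:</p>

```shell
# Print the assembled download URL (no download happens here)
URL="https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"
```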



<p><strong> -o /usr/local/bin/docker-compose </strong></p>



<ul class="wp-block-list">
<li>Tells curl to write (<em>-o</em> option; output) the downloaded file to <em>/usr/local/bin/docker-compose</em></li>



<li>This is a standard location for user-installed binaries that are globally available in the system PATH.<br></li>
</ul>



<p>After you run the command above, you need to make the binary executable:</p>



<pre class="wp-block-code"><code>sudo chmod +x /usr/local/bin/docker-compose
</code></pre>



<p>And then check the version to confirm it worked:</p>



<pre class="wp-block-code"><code>docker-compose --version
</code></pre>



<p><strong>Note:</strong> In some cases it might be necessary to switch to the <em>docker compose plugin</em>. This means you must remove the standalone version from your system and install the <em>docker compose plugin</em> instead. Here is how you should proceed in such a scenario:</p>



<pre class="wp-block-code"><code>#find out where docker-compose has been installed
which docker-compose
/usr/local/bin/docker-compose

#remove the docker-compose file
sudo rm /usr/local/bin/docker-compose

#install the docker compose plugin via apt
sudo apt install docker-compose-plugin
</code></pre>



<h3 class="wp-block-heading">Final Verification</h3>



<p>Check if everything is installed correctly:</p>



<pre class="wp-block-code"><code>docker --version
docker-compose --version
node -v
npm -v
</code></pre>
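<p>The same verification can be scripted, for example to fail fast in a provisioning script. This is a sketch that simply reports each expected tool:</p>

```shell
# Report which of the expected tools are available on the PATH
for tool in docker docker-compose node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```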



<h3 class="wp-block-heading">Host-Docker-Setup for nodejs app behind nginx</h3>



<p>The code of the nodejs app is explained in detail in the article <em><a href="https://digitaldocblog.com/webdesign/sample-bootstrap-website-running-as-nodes-app/" title="Sample Bootstrap Website running as nodejs app">Bootstrap Website running as nodejs app</a></em>. </p>



<p>We have a simple website <em>funtrails</em>, built as a one-page <em>index.html</em> with two sections: one showing pictures of a Paterlini bike and one showing pictures of a Gianni Motta bike. Each section contains an image gallery and text. The images are stored in an <em>images</em> directory. </p>



<pre class="wp-block-code"><code>.
├── images
└── index.html
</code></pre>



<p>Now we want to make this <em>funtrails</em> website run as a nodejs app behind an nginx reverse proxy. Both the <em>funtrails</em> nodejs app and <em>nginx</em> should run in Docker containers composed to work together. Therefore we create the following file structure on the Host machine:</p>



<pre class="wp-block-code"><code>node
├── funtrails
│   └── views
│       ├── images
│       └── index.html
└── nginx
</code></pre>



<p>We copy all our web content into the <em>views</em> directory under <em>funtrails</em>. The nodejs app is built in the main file <em>app.js</em>. All dependencies for the nodejs app are defined in the file <em>package.json</em>. </p>



<pre class="wp-block-code"><code>node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
</code></pre>



<p>In the <em>Terminal</em> go into the <em>node/funtrails</em> directory. Install the dependencies.</p>



<pre class="wp-block-code"><code>npm install
</code></pre>



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>funtrails
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
</code></pre>



<p>Go to <em>node/funtrails</em>. Run a test with the following command. </p>



<pre class="wp-block-code"><code>node app.js

nodejs funtrails demo app listening on port 8080!
</code></pre>



<p>Switch to your browser and open <em>http://localhost:8080</em> to see if everything is working as expected. With <em>Ctrl+C</em> in the terminal you can stop the app.</p>



<p>The <em>nginx</em> server will be started with the configuration of a <em>nginx.conf</em> file. The file <em>nginx.conf</em> will be created under <em>nginx</em>.</p>



<p><strong>Note:</strong> This file <em>nginx.conf</em> is only for testing and will be changed when we add the SSL/TLS certificates. In the configuration below the <em>nginx</em> server listens on port 80 and passes requests received on port 80 to the <em>nodejs</em> app listening on port 8080. Processing requests on port 80 is not state of the art, as these connections are not encrypted. In production we need a port 443 connection with SSL/TLS for encrypted connections. </p>



<pre class="wp-block-code"><code>#nginx.conf

events {}

http {
	#Service funtrails from docker-compose.yml
    upstream node_app {
        server funtrails:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
</code></pre>



<p><strong>events {}</strong></p>



<ul class="wp-block-list">
<li>This is a required block in NGINX configuration, even if empty.</li>



<li>It handles connection-related events (like concurrent connections), but you don&#8217;t need to configure it unless you have advanced use cases.</li>
</ul>



<p><strong>http {</strong></p>



<ul class="wp-block-list">
<li>Starts the HTTP configuration block — this is where you define web servers, upstreams, headers, etc.</li>
</ul>



<p><strong>upstream</strong></p>



<ul class="wp-block-list">
<li>Defines a group of backend servers (can be one or many).</li>



<li><em>node_app</em> is just a name.</li>



<li>Inside: server <em>funtrails:8080;</em> means:<br>
<ul class="wp-block-list">
<li>Forward traffic to the container with hostname <em>funtrails</em></li>



<li>Use port 8080 (that&#8217;s where your <em>nodejs</em> app listens)</li>



<li><em>funtrails</em> must match the docker-compose service name (declared in your docker-compose.yml, which will be explained below).</li>



<li>This lets you use <em>proxy_pass http://node_app</em> later, instead of hardcoding an IP or port.</li>
</ul>
</li>
</ul>



<p><strong>server</strong></p>



<ul class="wp-block-list">
<li>Defines a virtual server (a website or domain).</li>



<li>listen 80; tells NGINX to listen for HTTP (port 80) traffic.</li>
</ul>



<p><strong>location</strong></p>



<ul class="wp-block-list">
<li>Defines a rule for requests to / (the root URL of your site).</li>



<li>You could add more location blocks for <em>/api, /images, etc.</em> if needed.</li>



<li>Inside: <br>
<ul class="wp-block-list">
<li>proxy_pass:<br>
<ul class="wp-block-list">
<li><em>proxy_pass http://node_app;</em> tells NGINX to forward requests to the backend defined in upstream <em>node_app</em></li>



<li>So: if you go to <em>http://yourdomain.com/</em> NGINX proxies that to <em>http://funtrails:8080</em></li>
</ul>
</li>
</ul>
</li>



<li>proxy_set_header (see table)</li>
</ul>



<figure class="wp-block-table"><table><thead><tr><th>
				Header
			</th><th>
				Meaning
			</th></tr></thead><tbody><tr><td>
				<code>Host</code>
			</td><td>
				Preserves the original domain name from the client
			</td></tr><tr><td>
				<code>X-Real-IP</code>
			</td><td>
				The client’s real IP address
			</td></tr><tr><td>
				<code>X-Forwarded-For</code>
			</td><td>
				A list of all proxies the request passed through
			</td></tr><tr><td>
				<code>X-Forwarded-Proto</code>
			</td><td>
				Tells backend whether the request was via HTTP or HTTPS
			</td></tr></tbody></table><figcaption class="wp-element-caption">Nginx Proxy Variables</figcaption></figure>



<p>The <em>$variables</em> in <em>nginx.conf</em> are built-in variables that NGINX provides automatically. They are dynamically set based on the incoming HTTP request. So these variables come from the NGINX core HTTP module and you don’t need to define them or import anything. They are always available in the config. </p>



<p>Here&#8217;s what each one is and where it comes from:</p>



<p><strong>$host</strong></p>



<ul class="wp-block-list">
<li>The value of the Host header in the original HTTP request.</li>



<li>Example: If the user visits http://example.com, then $host is example.com.</li>



<li>Use case: Tells the backend app what domain the client used — useful for apps serving multiple domains.</li>
</ul>



<p><strong>$remote_addr</strong></p>



<ul class="wp-block-list">
<li>The IP address of the client making the request.</li>



<li>Example: If someone from IP 203.0.113.45 visits your site, this variable is set to 203.0.113.45.</li>



<li>Use case: Useful for logging, rate limiting, or geolocation in the backend app.</li>
</ul>



<p><strong>$proxy_add_x_forwarded_for</strong></p>



<ul class="wp-block-list">
<li>A composite header that appends the client&#8217;s IP to the existing X-Forwarded-For header.</li>



<li>Use case: Maintains a full list of proxy hops (useful if your request goes through multiple reverse proxies).</li>



<li>If X-Forwarded-For is already set (by another proxy), it appends $remote_addr to it; otherwise, it sets it to $remote_addr.</li>
</ul>



<p><strong>$scheme</strong></p>



<ul class="wp-block-list">
<li>The protocol used by the client to connect to NGINX — either http or https.</li>



<li>Example: If the user visits https://example.com, then $scheme is https.</li>



<li>Use case: Lets your backend know whether the original request was secure or not.<br></li>
</ul>
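<p>As a side note, the same built-in variables can also be used for logging. The following snippet is not part of our setup; the log format name <em>proxy_log</em> is made up for illustration and would go inside the <em>http</em> block:</p>

<pre class="wp-block-code"><code>#illustration only: custom access log using built-in variables
log_format proxy_log '$remote_addr - $host "$request" $status $scheme';
access_log /var/log/nginx/access.log proxy_log;
</code></pre>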



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
</code></pre>



<h3 class="wp-block-heading">Create Docker Image and Container for nodejs app </h3>



<p>This is to create the Docker image with <em>docker build</em>. Then we run the Container from the image with <em>docker run</em> to test if everything is working as expected. If everything goes well, we can proceed with the <em>nginx</em> configuration and then compose everything together with <em>docker-compose</em>. </p>



<p>The <em>dockerization</em> process of the nodejs app in <em>node/funtrails</em> directory is controlled by the <em>Dockerfile</em> which will be created in <em>node/funtrails</em>. The <em>dockerization</em> process has the following steps.</p>



<ol class="wp-block-list">
<li>Image creation</li>



<li>Container creation from the image<br></li>
</ol>



<p>The Container can then be started, stopped and removed using terminal commands.</p>



<p>Go into in <em>node/funtrails</em>.</p>



<p>First we get an overview and check the Docker status of the system.</p>



<p>List all images. No images on your system.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID   CREATED   SIZE  
</code></pre>



<p>List all Containers. As expected no Containers on the system.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
</code></pre>



<p>List an Overview about Docker Images and Containers on your system. As expected no Containers and no Images on the system.</p>



<pre class="wp-block-code"><code>sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          0         0         0B        0B
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     16        0         47.39MB   47.39MB
</code></pre>



<p>Create a <em>Dockerfile</em> in <em>node/funtrails/Dockerfile</em>. This file is required to build the image and run the Container.</p>



<pre class="wp-block-code"><code>#Dockerfile

#Image Build
#Install nodejs 12.22.9 for the Container
FROM node:12.22.9

#Set workdirectory for the Container
WORKDIR /home/node/app

#Change Owner and Group for Container Workdirectory to node
RUN chown node:node /home/node/app

#Run the Container with User node
USER node

#Copy all files from HOST Dir to the Container workdirectory 
COPY --chown=node:node . .

#(After COPY) Install the app dependencies into the image
RUN npm install

#Container Start
#Document that the Container listens on port 8080 (EXPOSE does not publish the port)
EXPOSE 8080

#RUN the command when the Container starts
CMD &#91; "node", "app.js" ]
 
</code></pre>



<p>Create a <em>.dockerignore</em> file in <em>node/funtrails/.dockerignore</em>. The <em>hidden dockerignore</em> file excludes files on the Host from being copied into the image with the command <em>COPY . .</em> </p>



<pre class="wp-block-code"><code>#.dockerignore
node_modules
</code></pre>



<p><strong>Note:</strong> The <em>.dockerignore</em> file filters the whole build context, so its entries are excluded even from explicit instructions such as <em>COPY &lt;file.1&gt; &lt;file.2&gt;</em>; a build that tries to copy an ignored file fails because the file is not part of the context.</p>
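<p>For a nodejs project a slightly extended <em>.dockerignore</em> often pays off. The extra entries below are a suggestion, not part of the original setup:</p>

<pre class="wp-block-code"><code>#.dockerignore (extended suggestion)
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
</code></pre>

<p>Excluding <em>.git</em> and build metadata keeps the build context small and the image free of files it does not need.</p>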



<p>We have the following structure on the Host machine.</p>



<pre class="wp-block-code"><code>funtrails
├─ Dockerfile
├─ .dockerignore
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
</code></pre>



<p>Still be in <em>node/funtrails</em>. </p>



<p><strong>Build the Docker image</strong> from the <em>Dockerfile</em> with the image name <em>node-demo</em>. The dot (.) at the end sets the current directory on the Host machine as the build context for the <em>docker</em> command. This is the location where <em>docker</em> looks for the <em>Dockerfile</em> to build the image.</p>



<pre class="wp-block-code"><code>sudo docker build -t node-demo .
</code></pre>



<p>List all Docker images. 1 image just created.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
node-demo    latest    c353353f045e   22 seconds ago   944MB
</code></pre>



<p><strong>Run the Container</strong> from the image with the name <em>node-demo</em> and give the Container the name <em>funtrails-solo-demo</em>.</p>



<pre class="wp-block-code"><code>sudo docker run -d -p 8080:8080 --name funtrails-solo-demo node-demo
</code></pre>



<p>List all Docker Containers with the option <em>-a</em>. 1 Container with the name <em>funtrails-solo-demo</em> running from the image <em>node-demo</em>.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
</code></pre>



<p>Access the running app on port 8080.</p>



<pre class="wp-block-code"><code>sudo curl http://localhost:8080
</code></pre>



<p>If everything went well you get feedback in the terminal showing the HTML code of the page. In this case the test was successful. </p>



<p>Stop the running Docker Container with the name <em>funtrails-solo-demo</em>.</p>



<pre class="wp-block-code"><code>sudo docker stop funtrails-solo-demo
</code></pre>



<p>List all Containers with the option <em>-a</em>. 1 Container from the image <em>node-demo</em> with the name <em>funtrails-solo-demo</em> is <em>EXITED</em>.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
</code></pre>



<p>List all images. Still 1 image available.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
node-demo    latest    c353353f045e   36 hours ago   944MB
</code></pre>



<p>List an Overview about Docker Images and Containers on the system. 1 active Image and 1 <em>not Active</em> Container. Status of the Container is <em>EXITED</em> as we have seen above.</p>



<pre class="wp-block-code"><code>sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          1         1         944MB     0B (0%)
Containers      1         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     18        0         47.39MB   47.39MB
</code></pre>



<p>To clean up your system use the following commands.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Target
			</th><th>
				Command
			</th></tr></thead><tbody><tr><td>
				Delete exited containers
			</td><td>
				<code>sudo docker container prune</code>
			</td></tr><tr><td>
				Delete unused images
			</td><td>
				<code>sudo docker image prune</code>
			</td></tr><tr><td>
				Delete unused volumes
			</td><td>
				<code>sudo docker volume prune</code>
			</td></tr><tr><td>
				Complete Housekeeping (attention!)
			</td><td>
				<code>sudo docker system prune -a</code>
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker commands for clean up</figcaption></figure>



<p>The full clean up (be careful).</p>



<pre class="wp-block-code"><code>sudo docker system prune -a

sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          0         0         0B        0B
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B
</code></pre>



<h3 class="wp-block-heading">Configure docker-compose</h3>



<p>Go back to the <em>node</em> directory and create a <em> docker-compose.yml</em> file there.</p>



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>node
├── docker-compose.yml
├── funtrails
│   ├── Dockerfile
│   ├── .dockerignore
│   ├── app.js
│   ├── node_modules
│   ├── package.json
│   ├── package-lock.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
</code></pre>



<p><em>docker-compose</em> is a tool that helps you define and run multi-container Docker applications using a YAML file. Instead of running multiple docker run commands, you describe everything in one file and start it all with the command <em>docker-compose up</em>.</p>



<p>Create <em> docker-compose.yml</em> with the following content.</p>



<pre class="wp-block-code"><code>#docker-compose.yml

services:
  funtrails:
    build: ./funtrails
    container_name: funtrails 
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
</code></pre>



<p><strong>services</strong></p>



<p>This section defines containers that make up your app</p>



<ul class="wp-block-list">
<li>funtrails<br>
<ul class="wp-block-list">
<li>build: ./funtrails<br>Builds the image from the Dockerfile inside the ./funtrails directory.</li>



<li>container_name: funtrails<br>Names the container funtrails instead of a random name.</li>



<li>networks: funtrails-network<br>Connects the container to a custom user-defined network. </li>
</ul>
</li>



<li>nginx<br>
<ul class="wp-block-list">
<li>image: nginx:latest<br>Uses the official latest NGINX image.</li>



<li>container_name: nginx-proxy<br>Container will be named nginx-proxy.</li>



<li>ports: "80:80"<br>Exposes Host port 80 to Container port 80.</li>



<li>volumes:<br>Mounts your local nginx.conf into the container, read-only (:ro).</li>



<li>depends_on: funtrails<br>Ensures funtrails is started before nginx.</li>



<li>networks: funtrails-network<br>Both services are in the same network, so they can communicate by name.</li>
</ul>
</li>
</ul>



<p><strong>networks</strong></p>



<ul class="wp-block-list">
<li>Creates a custom bridge network named funtrails-network.</li>



<li>Ensures containers can resolve each other by name (funtrails, nginx).<br></li>
</ul>
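<p>Optionally, a restart policy keeps both containers running across Docker daemon restarts and Host reboots. These lines are an addition of my own and would be merged into the service definitions above, not used as a standalone file:</p>

<pre class="wp-block-code"><code>#optional additions inside docker-compose.yml
services:
  funtrails:
    restart: unless-stopped
  nginx:
    restart: unless-stopped
</code></pre>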



<p><strong>Note:</strong> We are using the official NGINX image directly (<em>image: nginx:latest</em>). This image is prebuilt and includes everything NGINX needs to run. </p>



<p>We don&#8217;t need to write a custom Dockerfile because we don’t  want to:</p>



<ul class="wp-block-list">
<li>Add extra modules</li>



<li>Customize the image beyond just the config</li>



<li>Install additional tools</li>



<li>Include SSL certs directly, etc.</li>
</ul>



<p>Instead, we simply mount our own <em>nginx.conf</em> into the container using a volume. This tells Docker <em>Use the official NGINX image, but replace its config file with mine</em>. We would use a Dockerfile in the <em>nginx</em> directory if we need to build a custom NGINX image, for example to copy SSL certs directly into the image.</p>



<p>Example:</p>



<pre class="wp-block-code"><code>FROM nginx:latest
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./certs /etc/nginx/certs
</code></pre>



<p>But for most use cases like reverse proxying a nodejs app, just mounting your own config file is perfectly sufficient and simpler.</p>



<p><strong>Note:</strong> We integrate SSL in the next chapter using free <em>Let&#8217;s Encrypt</em> certificates.</p>



<h3 class="wp-block-heading">Integrate SSL certificates &#8211; free Let&#8217;s Encrypt</h3>



<p>To integrate SSL we need to do the following steps:</p>



<ol class="wp-block-list">
<li>Prepare your Domain</li>



<li>Install <em>certbot</em></li>



<li>Create Let&#8217;s Encrypt SSL certificates</li>



<li>Adapt your <code>node/docker-compose.yml</code></li>



<li>Adapt your <code>node/nginx/nginx.conf</code></li>



<li>Create a cron-Job to renew SSL certificates<br></li>
</ol>



<p><strong>prepare the domain</strong></p>



<p>You must own a domain like <em>example.com</em> and you must have access to your DNS servers to adapt the <em>A record</em>. In my example I create a subdomain <em>funtrails.example.com</em> and add an <em>A record</em> on my DNS for <em>funtrails.example.com</em> that points to the server&#8217;s IP address.</p>



<p><strong>install certbot</strong></p>



<p>To install our certificates for SSL we use a tool called <em>certbot</em>. We install <em>certbot</em> with <code>apt</code> on our Linux machine.</p>



<pre class="wp-block-code"><code>sudo apt update
sudo apt install certbot
</code></pre>



<p><strong>create Let&#8217;s Encrypt SSL certificates</strong></p>



<p>We create the SSL certificates with certbot.</p>



<pre class="wp-block-code"><code>sudo certbot certonly --standalone -d funtrails.example.com

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): &lt;your-email&gt;@funtrails.example.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.5-February-24-2025.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for funtrails.example.com

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/funtrails.example.com/fullchain.pem

Key is saved at:         /etc/letsencrypt/live/funtrails.example.com/privkey.pem

This certificate expires on 2025-07-25.

These files will be updated when the certificate renews.

Certbot has set up a scheduled task to automatically renew this certificate in the background.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
</code></pre>



<p>To use SSL you need </p>



<ul class="wp-block-list">
<li>a server certificate (e.g. certificate.crt)</li>



<li>a private key (e.g. private.key) and </li>



<li>the CA certificate (e.g. ca.crt).</li>
</ul>



<figure class="wp-block-table"><table><thead><tr><th>
				File
			</th><th>
				Description
			</th><th>
				Comment
			</th></tr></thead><tbody><tr><td>
				<strong>private.key</strong>
			</td><td>
				Private secret key &#8211; keep this key strictly secret !
			</td><td>
				Only your server knows this key
			</td></tr><tr><td>
				<strong>certificate.crt</strong>
			</td><td>
				Your server certificate (proves your identity)
			</td><td>
				Issued by the CA (Let&#8217;s Encrypt)
			</td></tr><tr><td>
				<strong>ca.crt / chain.crt</strong>
			</td><td>
				The certificate chain up to the root CA
			</td><td>
				So that clients trust your certificate
			</td></tr></tbody></table><figcaption class="wp-element-caption">SSL standard certificates</figcaption></figure>



<p><em>certbot</em> creates these files in the following directory on your Host server.</p>



<pre class="wp-block-code"><code>/etc/letsencrypt/live/funtrails.example.com
</code></pre>



<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/live/funtrails.example.com

cert.pem -&gt; ../../archive/funtrails.example.com/cert1.pem
chain.pem -&gt; ../../archive/funtrails.example.com/chain1.pem
fullchain.pem -&gt; ../../archive/funtrails.example.com/fullchain1.pem
privkey.pem -&gt; ../../archive/funtrails.example.com/privkey1.pem
</code></pre>



<p>The translation to the standard is as follows.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				File
			</th><th>
				Description
			</th></tr></thead><tbody><tr><td>
				<code>privkey.pem</code>
			</td><td>
				Your private key (= private.key)
			</td></tr><tr><td>
				<code>cert.pem</code>
			</td><td>
				Your server certificate (= certificate.crt)
			</td></tr><tr><td>
				<code>chain.pem</code>
			</td><td>
				The CA certificates (= ca.crt)
			</td></tr><tr><td>
				<code>fullchain.pem</code>
			</td><td>
				Server certificate + CA chain together
			</td></tr></tbody></table><figcaption class="wp-element-caption">LetsEncrypt translation to standard SSL certificates</figcaption></figure>



<p><strong>adapt node/docker-compose.yml</strong></p>



<p>The <em>docker-compose.yml</em> will be adapted as follows.</p>



<pre class="wp-block-code"><code>services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
</code></pre>



<p>We create a bridge network with the name <em>funtrails-network</em> and both services are running in this network. This is important to reach all services by their container name(s). </p>



<p>The <em>funtrails</em> service will be rebuilt from the <em>Dockerfile</em> in <em>./funtrails</em>. For the <em>nginx</em> service the latest <em>nginx</em> image will be pulled from the Docker resources. For the <em>nginx</em> container the Host ports 80 and 443 are mapped to the Container ports 80 and 443. When the Container is started we mount the Host file <em>./nginx/nginx.conf</em> and the SSL certificates under <em>/etc/letsencrypt</em> into the Container. Both are mounted read-only!</p>



<pre class="wp-block-code"><code>...
volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
...
</code></pre>



<p>With the <em>depends_on</em> directive we declare that the <em>funtrails</em> service must be started before <em>nginx</em>. </p>
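

<p><strong>Note:</strong> <em>depends_on</em> only controls the start order; it does not wait until the application inside <em>funtrails</em> is actually ready to serve requests. If that matters, a healthcheck can be combined with the long form of <em>depends_on</em>. A sketch, assuming <em>curl</em> is available in the <em>funtrails</em> image and a Docker Compose version that implements the Compose Specification:</p>



<pre class="wp-block-code"><code>services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
      interval: 10s
      timeout: 3s
      retries: 5
    networks:
      - funtrails-network

  nginx:
    # ... as before ...
    depends_on:
      funtrails:
        condition: service_healthy
</code></pre>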



<p><strong>adapt node/nginx/nginx.conf</strong></p>



<p>The file will be adapted as follows.</p>



<pre class="wp-block-code"><code>events {
  worker_connections 1024; 
}

http {

    server {
      listen 80;
      server_name funtrails.example.com;

      # Redirect HTTP -&gt; HTTPS
      return 301 https://$host$request_uri;
    }

   server {
    listen 443 ssl;
    server_name funtrails.example.com;

    ssl_certificate /etc/letsencrypt/live/funtrails.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/funtrails.example.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://funtrails:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

</code></pre>



<p>The <em>events {}</em> block is required in an NGINX configuration, even if it is empty. It configures connection handling, but you don&#8217;t need to touch it unless you have advanced use cases. Here I allow 1024 concurrent connections per worker process. </p>



<p>Within the <em>http</em> block we have 2 virtual server-blocks. The first server-block define that server <em> funtrails.example.com</em> is listening to port 80 (HTTP) but all requests to this port will immediately be redirected to port 443 (HTTPS). The second server-block define that server <em> funtrails.example.com</em> is listening also to port 443 (HTTPS) followed by the location of the SSL certificate and SSL key on our local Host and the protocol definition. </p>



<p>The location block defines a rule for requests to / (the root URL of your site). You could add more location blocks, e.g. for /api, /images, etc., if needed. In this config we skip the upstream block and write <em>proxy_pass</em> directly. The <em>proxy_pass</em> directive tells NGINX to forward requests to port 8080 of the backend service defined in <em>docker-compose.yml</em>. This backend service is defined with the <em>container_name</em> directive, which is set to <em>funtrails</em>. </p>



<pre class="wp-block-code"><code>...
services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network
...
</code></pre>



<p>Docker Compose creates an internal Docker network <em>funtrails-network</em>, and all services can reach each other by their service name(s) as hostname(s). So nginx can resolve <em>funtrails</em> because it&#8217;s part of the same Docker network (no need for a manual upstream block).</p>
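

<p>You can verify this name resolution from inside the running <em>nginx</em> container, for example as follows (assuming the containers are already up):</p>



<pre class="wp-block-code"><code>sudo docker-compose exec nginx getent hosts funtrails
</code></pre>



<p>This should print the internal IP address that the name <em>funtrails</em> resolves to on the <em>funtrails-network</em>.</p>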



<p>These other <em>$variables</em> come from NGINX&#8217;s core HTTP module, so we don’t need to define them. They are always available in the config. </p>



<p><strong>$host</strong></p>



<ul class="wp-block-list">
<li>What it is: The value of the Host header in the original HTTP request.</li>



<li>Example: If the user visits http://example.com, then $host is example.com.</li>



<li>Use case: Tells the backend app what domain the client used — useful for apps serving multiple domains.</li>
</ul>



<p><strong>$remote_addr</strong></p>



<ul class="wp-block-list">
<li>What it is: The IP address of the client making the request.</li>



<li>Example: If someone from IP 203.0.113.45 visits your site, this variable is set to 203.0.113.45.</li>



<li>Use case: Useful for logging, rate limiting, or geolocation in the backend app.</li>
</ul>



<p><strong>$proxy_add_x_forwarded_for</strong></p>



<ul class="wp-block-list">
<li>What it is: A composite header that appends the client&#8217;s IP to the existing X-Forwarded-For header.</li>



<li>Use case: Maintains a full list of proxy hops (useful if your request goes through multiple reverse proxies).</li>



<li>How it works: If X-Forwarded-For is already set (by another proxy), it appends $remote_addr to it; otherwise, it sets it to $remote_addr.</li>
</ul>



<p><strong>$scheme</strong></p>



<ul class="wp-block-list">
<li>What it is: The protocol used by the client to connect to NGINX — either http or https.</li>



<li>Example: If the user visits https://example.com, then $scheme is https.</li>



<li>Use case: Lets your backend know whether the original request was secure or not.<br></li>
</ul>
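

<p>If you want to see what these variables contain for real requests, you can log them. A sketch of a custom access-log format (it goes inside the <em>http</em> block; the format name <em>proxy</em> is arbitrary):</p>



<pre class="wp-block-code"><code>http {
    log_format proxy '$remote_addr - $host "$request" $status '
                     'xff="$proxy_add_x_forwarded_for" scheme=$scheme';
    access_log /var/log/nginx/access.log proxy;
    ...
}
</code></pre>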



<p><strong>Create a cron job to renew Let&#8217;s Encrypt SSL certificates</strong></p>



<p>Let&#8217;s Encrypt SSL certificates must be renewed after 90 days. <em>certbot</em> can renew your certificates. To automate the renewal you can create a cron job on your host machine. </p>
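

<p>Before automating anything, you can test the renewal process with certbot&#8217;s dry-run mode, which simulates a renewal against the Let&#8217;s Encrypt staging environment without touching your real certificates:</p>



<pre class="wp-block-code"><code>sudo certbot renew --dry-run
</code></pre>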



<p><strong>Note:</strong> Sometimes cron doesn&#8217;t know where docker-compose is located (because the usual environment variables are missing). Therefore, it&#8217;s safer to use full paths in the crontab. You can check the relevant paths as follows:</p>



<pre class="wp-block-code"><code>which docker-compose
/usr/bin/docker-compose

which certbot
/usr/bin/certbot
</code></pre>



<p>Then create a cron job in the crontab of the user root (use sudo):</p>



<pre class="wp-block-code"><code>sudo crontab -e 

0 3 * * *	/usr/bin/certbot renew --quiet &amp;&amp; /usr/bin/docker-compose restart nginx
</code></pre>



<p>With <em>sudo crontab -e</em> you create a crontab for the user root. All commands in the root crontab are executed with root privileges.  </p>



<p><em>certbot renew</em> checks all certificates for expiration dates and automatically renews them.</p>



<p><em>docker-compose restart nginx</em> ensures that nginx is restarted so that it picks up the new certificates. Otherwise, nginx would keep using the old certificates even though new ones are available. The general form of the command is <em>docker-compose restart &lt;service-name&gt;</em>; note that you specify the service name from <em>docker-compose.yml</em>, not the container name.</p>
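

<p><strong>Note:</strong> Cron jobs do not run inside your project directory, so a plain <em>docker-compose restart nginx</em> may not find the <em>docker-compose.yml</em>. It is safer to point docker-compose at the file explicitly with <em>-f</em>. You can also use certbot&#8217;s <em>--deploy-hook</em>, which runs the given command only when a certificate was actually renewed. A sketch (the path to <em>docker-compose.yml</em> is an example; adjust it to your project):</p>



<pre class="wp-block-code"><code>0 3 * * * /usr/bin/certbot renew --quiet --deploy-hook "/usr/bin/docker-compose -f /home/patrick/node/docker-compose.yml restart nginx"
</code></pre>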



<p><strong>Note:</strong> If you called <em>crontab -e</em> (without <em>sudo</em>), you would edit your own user crontab. That crontab runs under your user, not as root, and so do all of its tasks. When <em>certbot renew</em> renews SSL certificates, it must write into the directories under <em>/etc/letsencrypt/</em> on your machine. These directories are owned by root, so the job cannot write to them when the crontab runs under a normal user. One might then think that the commands in a normal user&#8217;s crontab (e.g. patrick&#8217;s) should simply be prefixed with <em>sudo</em>. But when such a job runs, sudo will try to prompt for a password, and there is no terminal in cron where you could enter one. The command therefore fails (an error in the log, nothing happens). So it is essential here to edit the crontab of the user root with <em>sudo crontab -e</em>. </p>



<p>Finally, you can list your own crontab or the root crontab as follows.</p>



<pre class="wp-block-code"><code>crontab -l
no crontab for patrick

sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command

0 3 * * * /usr/bin/certbot renew --quiet &amp;&amp; /usr/bin/docker-compose restart nginx
 
</code></pre>



<p>You can check the logs of the renewal process with the following command.</p>



<pre class="wp-block-code"><code>sudo cat /var/log/letsencrypt/letsencrypt.log
</code></pre>
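

<p>You can also check from the outside which certificate your server is currently presenting, including its expiry date:</p>



<pre class="wp-block-code"><code>echo | openssl s_client -connect funtrails.example.com:443 -servername funtrails.example.com 2&gt;/dev/null | openssl x509 -noout -enddate
</code></pre>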



<h3 class="wp-block-heading">Start the Containers with docker-compose</h3>



<p>Navigate to the directory with <em>docker-compose.yml</em>. Then use the following commands.</p>



<pre class="wp-block-code"><code>sudo docker-compose build

sudo docker-compose up -d
</code></pre>



<p>The command <em>docker-compose build</em> reads the Dockerfile for each service defined in <em>docker-compose.yml</em> and builds the Docker images accordingly. The command <em>docker-compose up -d</em> creates and starts the containers and the network: it starts all services defined in the <em>docker-compose.yml</em> and links them via the defined Docker network. The -d flag runs the containers in the background (detached mode).</p>



<p>Then you can check the status using the following commands.</p>



<pre class="wp-block-code"><code>sudo docker-compose ps

sudo docker ps
</code></pre>



<p>Here is an overview of the most important commands.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Command
			</th><th>
				Purpose
			</th></tr></thead><tbody><tr><td>
				<code>docker-compose build</code>
			</td><td>
				Build all images from Dockerfiles
			</td></tr><tr><td>
				<code>docker-compose up -d</code>
			</td><td>
				Start containers in the background
			</td></tr><tr><td>
				<code>docker-compose ps</code>
			</td><td>
				See status of containers
			</td></tr><tr><td>
				<code>docker-compose down</code>
			</td><td>
				Stop and remove all containers
			</td></tr><tr><td>
				<code>docker-compose logs -f</code>
			</td><td>
				Follow logs of all services
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker-compose commands</figcaption></figure>



<h3 class="wp-block-heading">How to manage Changes made to the application code</h3>



<p>When we make changes to the app code, e.g. in <em>node/funtrails/app.js</em> or in <em>node/funtrails/Dockerfile</em>, we need to rebuild the image for the <em>funtrails</em> service defined in <em>node/docker-compose.yml</em>. In such a scenario it is not necessary to stop the containers with <em>docker-compose down</em> before you rebuild the image with <em>docker-compose build</em>.</p>



<p>You can rebuild and restart only the <em>funtrails</em> service with the following commands.</p>



<pre class="wp-block-code"><code>docker-compose build funtrails

docker-compose up -d funtrails
</code></pre>



<p>This will:</p>



<ul class="wp-block-list">
<li>Rebuild the <em>funtrails</em> image</li>



<li>Stop the old <em>funtrails</em> container (if running)</li>



<li>Start a new container using the updated image</li>



<li>Without affecting other services like <em>nginx</em><br></li>
</ul>
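

<p>Both steps can also be combined into a single command:</p>



<pre class="wp-block-code"><code>docker-compose up -d --build funtrails
</code></pre>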
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
