<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Linux &#8211; Digitaldocblog</title>
	<atom:link href="https://digitaldocblog.com/tag/linux/feed/" rel="self" type="application/rss+xml" />
	<link>https://digitaldocblog.com</link>
	<description>Various digital documentation</description>
	<lastBuildDate>Thu, 01 Jan 2026 08:02:37 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>

<image>
	<url>https://digitaldocblog.com/wp-content/uploads/2022/08/cropped-website-icon-star-500-x-452-transparent-32x32.png</url>
	<title>Linux &#8211; Digitaldocblog</title>
	<link>https://digitaldocblog.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Run Vaultwarden and Caddy on your Linux Server with docker-compose</title>
		<link>https://digitaldocblog.com/webserver/run-vaultwarden-and-caddy-on-your-linux-server-with-docker-compose-2/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 05 Sep 2025 05:45:33 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=252</guid>

					<description><![CDATA[Vaultwarden is a very light, easy to use and very well documented alternative implementation of the Bitwarden Client API. It is perfect if you struggle with the complex Bitwarden installation&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/dani-garcia/vaultwarden" title="Vaultwarden on GitHub">Vaultwarden</a> is a lightweight, easy-to-use and very well documented alternative implementation of the <a href="https://bitwarden.com/help/self-host-bitwarden/" title="Bitwarden Self-Hosted">Bitwarden Client API</a>. It is perfect if you struggle with the complex Bitwarden installation but want to self-host your own password management server and connect the Bitwarden clients installed on your computer or mobile device. In this documentation I describe the steps to configure and run Vaultwarden on an Ubuntu Linux 22.04 server using docker-compose services. In parallel you should read the <a href="https://github.com/dani-garcia/vaultwarden/wiki" title="Vaultwarden Wiki">Vaultwarden Wiki</a> to understand the complete background.</p>




<h3 class="wp-block-heading">Prepare your Server</h3>



<p>Before we start we need to prepare the server. In this step we create the environment to manage the Vaultwarden instance. Log in to your server with your standard user (not root).</p>




<h4 class="wp-block-heading">Basic requirements</h4>



<p>Log in to your server with SSH and authenticate with keys; never use plain password authentication. Create a private/public SSH key pair on your host machine and copy only the public key to your remote server. Keep the private key safe on your host machine. Then configure the SSH daemon on the remote server: disable password authentication and root login. All of this is explained very well on <a href="https://linuxize.com/post/how-to-set-up-ssh-keys-on-ubuntu-1804/" title="Linuxize.com">Linuxize.com</a>.</p>
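<p>The key-pair creation can be rehearsed locally. The following sketch uses a temporary directory and placeholder names, so nothing in your real <em>~/.ssh</em> is touched:</p>

```shell
# Create a throwaway directory so nothing in your real ~/.ssh is touched
KEYDIR=$(mktemp -d)

# Generate an Ed25519 key pair; the empty passphrase (-N '') is only for
# this demo, protect a real key with a passphrase
ssh-keygen -t ed25519 -f "$KEYDIR/id_ed25519" -N '' -C "vaultwarden-demo" -q

# Only the public key ever leaves your host machine, e.g. with:
#   ssh-copy-id -i "$KEYDIR/id_ed25519.pub" user@yourServer
ls "$KEYDIR"
```

<p>For a real key, drop the temporary directory and let ssh-keygen write to <em>~/.ssh</em> with a passphrase.</p>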




<p>You must ensure that SSL certificates for your server are installed. I use free <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a> certificates and <em>certbot</em> to install and renew them on the system. A very detailed description can be found on my <a href="https://digitaldocblog.com/" title="Digitaldocblog - Patrick Rottländer nurdy Ressources">Digitaldocblog</a> in the article <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Lets Encrypt and certbot on a Linux Server">SSL Certificates with Lets Encrypt and certbot on a Linux Server</a>.</p>




<p>You must ensure that docker and docker-compose are installed on your system. A very detailed description can be found on my <a href="https://digitaldocblog.com/" title="Digitaldocblog - Patrick Rottländer nurdy Ressources">Digitaldocblog</a> in the article <a href="https://digitaldocblog.com/webserver/containerize-a-nodejs-app-and-nginx-with-docker-on-ubuntu-2204/" title="Containerize a nodejs app with nginx">Containerize a nodejs app with nginx</a>. You should read the following chapters:</p>




<ul class="wp-block-list">
	<li>Prepare the System for secure Downloads from Docker</li>
	<li>Install Docker from Docker Resources</li>
	<li>Install standalone docker-compose from Docker Resources<br></li>
</ul>
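<p>A quick check (a hedged sketch, it only prints what is found and does not install anything) shows whether both tools are already present:</p>

```shell
# Print the versions of docker and docker-compose, or a hint if one of
# them is missing; the loop works in either case
for tool in docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version
  else
    echo "$tool is not installed"
  fi
done
```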



<p>Make sure that your server runs behind a firewall. In my case I have a virtual server and I am responsible for server security, therefore I install and configure a firewall on my system.</p>




<p>Before you configure the firewall, make sure your SSH service is running and find out which port it listens on. This is important because you don&#8217;t want to lock yourself out. First check that the SSH service is running with <em>systemctl</em>, then find the port with <em>netstat</em>. If <em>netstat</em> is not installed on your system, you can install it with <em>apt</em>.</p>




<pre class="wp-block-code"><code>#control ssh status
sudo systemctl status ssh

#check ssh port
sudo netstat -tulnp | grep ssh

#check if net-tools (include netstat) are installed
which netstat

#install net-tools only in case not installed
sudo apt install net-tools
</code></pre>
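<p>The SSH port can also be read directly from the daemon configuration. The sketch below demonstrates this on a sample config written to a temporary file; on the real server you would point <em>awk</em> at <em>/etc/ssh/sshd_config</em> instead:</p>

```shell
# Demo on a sample sshd_config in a temp file; on the real server read
# /etc/ssh/sshd_config instead
CFG=$(mktemp)
printf 'PermitRootLogin no\nPort 2222\nPasswordAuthentication no\n' > "$CFG"

# Extract the configured port; sshd defaults to 22 when no Port line is set
PORT=$(awk '/^Port /{print $2; exit}' "$CFG")
echo "${PORT:-22}"
```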



<p>I install <em>ufw</em> (uncomplicated firewall) on my server and configure it to provide only SSH and HTTPS to the outside world. </p>




<pre class="wp-block-code"><code># check if ufw is installed
ufw version
which ufw

#install ufw if not installed
sudo apt install ufw

#open SSH and HTTPS
sudo ufw allow OpenSSH
sudo ufw allow 443
sudo ufw allow 80/tcp

#Default rules
sudo ufw default deny incoming
sudo ufw default allow outgoing

#Start the firewall
sudo ufw enable

#Check the firewall status
sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
80/tcp                     ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)             
80/tcp (v6)                ALLOW IN    Anywhere (v6)

</code></pre>



<h4 class="wp-block-heading">Port 443 must run TCP and UDP</h4>



<p>In the Docker environment, we will later configure Caddy as a reverse proxy for the vaultwarden service. Caddy requires both TCP and UDP on port 443. This is due to a modern protocol called HTTP/3 (QUIC).</p>




<p>Traditionally, the web (HTTP/1.1 and HTTP/2) ran exclusively over the TCP protocol. TCP is very reliable, but sometimes a bit slow when establishing a connection. Google and others subsequently developed QUIC, on which the new standard HTTP/3 is based. QUIC uses UDP instead of TCP.</p>




<p>HTTP/3 QUIC is significantly faster when establishing a connection (handshake). It handles packet loss better, which is especially important for mobile connections on smartphones, for example, when using the Bitwarden app.</p>




<p>Caddy is a very modern reverse proxy that has HTTP/3 enabled by default. The &#8220;normal&#8221; HTTPS connection (HTTP/1.1 or HTTP/2) is established over TCP port 443. Caddy offers clients (browsers or the Bitwarden app) the option to switch to HTTP/3 via UDP port 443.</p>




<p>With vaultwarden, users often access their vaults via smartphones using LTE or Wi-Fi connections. When UDP port 443 is open, the app uses HTTP/3. This results in a more stable connection and improved vault synchronization due to reduced latency.</p>




<h4 class="wp-block-heading">Why must Port 80 be open</h4>



<p>When you follow the instructions in the article <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Lets Encrypt and certbot on a Linux Server">SSL Certificates with Lets Encrypt and certbot on a Linux Server</a>, you are using <em>certbot</em> and install your <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a> certificates on your local server with the following <em>certbot</em> command:</p>




<pre class="wp-block-code"><code>sudo certbot certonly --standalone -d &lt;yourDomain&gt;
</code></pre>



<p>Here you use the <em>standalone</em> method. Whenever you renew your certificates, <em>certbot</em> starts a temporary web server to prove ownership of your domain to <a href="https://letsencrypt.org/" title="Letsencrypt - Encryption for Everybody">LetsEncrypt</a>.</p>




<p>When renewing your certificates <em>certbot</em> attempts to start a temporary web server on port 80. Therefore, port 80 must be open on your system via your firewall rules.</p>




<p>Important: If another web server (such as Nginx or Apache) is already running permanently on port 80, this step will fail. Such a service must be stopped before the renewal can run.</p>




<p>After Certbot successfully starts the web server on port 80, the following happens:</p>




<ul class="wp-block-list">
	<li><strong>ACME Challenge:</strong> Certbot contacts the Let&#8217;s Encrypt servers. These provide Certbot with a random string (token).</li>
	<li><strong>Deployment:</strong> Certbot serves a small file containing the token at the URL <em>http://yourDomain/.well-known/acme-challenge/token</em></li>
	<li><strong>Verification:</strong> The Let&#8217;s Encrypt servers now attempt to access this exact URL via the public internet. If they find the file (and the correct token), it proves that you own the server under this domain.<br></li>
</ul>



<p>Once the verification is successful, the temporary standalone web server is immediately shut down and port 80 is freed again. Certbot generates a new private key (if configured) and signs the new certificate. The new certificate files are stored in the directory <em>/etc/letsencrypt/archive/</em> and the symbolic links in the directory <em>/etc/letsencrypt/live/</em> are updated.</p>




<h4 class="wp-block-heading">Create new user vaultwarden</h4>



<p>You are logged in as your standard user. From the standard user&#8217;s home directory you create the user vaultwarden and the hidden directory /home/vaultwarden/.ssh. Then copy the authorized_keys file from your own .ssh directory into the newly created .ssh directory of the user vaultwarden and set the owner and permissions.</p>




<p>The new user vaultwarden should be in the sudo group to run commands as root via sudo, and in the docker group to run docker commands without sudo.</p>




<pre class="wp-block-code"><code>#logged-in with standard user and create new user 
sudo adduser vaultwarden

#create hidden .ssh directory in new users home directory
sudo mkdir /home/vaultwarden/.ssh

#copy authorized_keys file to enable ssh key login for new user
cd /home/standarduser/.ssh
sudo cp authorized_keys /home/vaultwarden/.ssh

#set the owner vaultwarden and permissions
sudo chown -R vaultwarden:vaultwarden /home/vaultwarden/.ssh
sudo chmod 700 /home/vaultwarden/.ssh
sudo chmod 600 /home/vaultwarden/.ssh/authorized_keys

#check permissions
ls -al /home/vaultwarden
drwx------ 2 vaultwarden vaultwarden 4096 May 11 14:20 .ssh

ls -l /home/vaultwarden/.ssh
-rw------- 1 vaultwarden vaultwarden 400 May 11 14:20 authorized_keys

#add user vaultwarden to sudo- and docker group
sudo usermod -aG sudo vaultwarden
sudo usermod -aG docker vaultwarden

#check vaultwarden groups (3 groups: vaultwarden sudo docker)
sudo groups vaultwarden
vaultwarden : vaultwarden sudo docker

</code></pre>
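<p>The permission layout above can be rehearsed safely in a temporary directory. This is only a sketch: on the real server the directory is <em>/home/vaultwarden</em> and the commands need sudo.</p>

```shell
# Rehearse the .ssh permission layout in a temp directory; on the real
# server the directory is /home/vaultwarden and the commands need sudo
HOMEDIR=$(mktemp -d)
mkdir "$HOMEDIR/.ssh"
touch "$HOMEDIR/.ssh/authorized_keys"

# 700: only the owner may enter the directory
# 600: only the owner may read/write the key file
chmod 700 "$HOMEDIR/.ssh"
chmod 600 "$HOMEDIR/.ssh/authorized_keys"

# Show the resulting octal modes (GNU stat, as on Ubuntu)
stat -c '%a' "$HOMEDIR/.ssh"
stat -c '%a' "$HOMEDIR/.ssh/authorized_keys"
```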



<h4 class="wp-block-heading">Create /opt/vaultwarden directory</h4>



<p>Log in with the new user vaultwarden and stay logged in as vaultwarden for the next steps. Do not perform the next steps or the installation of the Vaultwarden server as root or any other user.</p>




<p>After you are logged in as vaultwarden, create the new directory /opt/vaultwarden. This is the runtime directory of your Vaultwarden application and the place from which the docker containers will be started.</p>




<pre class="wp-block-code"><code>sudo mkdir /opt/vaultwarden
sudo chown -R vaultwarden:vaultwarden /opt/vaultwarden
sudo chmod -R 700 /opt/vaultwarden

ls -ld /opt/vaultwarden
drwx------ 2 vaultwarden vaultwarden 4096 May 16 07:06 /opt/vaultwarden
</code></pre>



<p>Then change into /opt/vaultwarden and create the /opt/vaultwarden/vw-data directory, which is the host directory for the docker containers. One of these containers will be started under the container name vaultwarden. This container runs with root privileges and writes into the host directory /opt/vaultwarden/vw-data.</p>




<pre class="wp-block-code"><code>mkdir /opt/vaultwarden/vw-data

ls -l /opt/vaultwarden
drwxrwxr-x 6 vaultwarden vaultwarden 4096 May 15 15:54 vw-data
</code></pre>



<h4 class="wp-block-heading">Create /opt/vaultwarden/certs and copy your SSL certificates</h4>



<p>The container vaultwarden is the web application where you log in and manage your passwords. As we will see below, the container is started by the user vaultwarden in /opt/vaultwarden and runs with root privileges behind a reverse proxy server. As reverse proxy I use <a href="https://caddyserver.com" title="Caddy Server Platform">Caddy</a>, which is a powerful platform. Caddy handles the requests from the outside world and forwards them via an internal docker network to the vaultwarden server.</p>




<p>Caddy must accept only HTTPS connections and uses /opt/vaultwarden/certs as the certificate directory, therefore we create this directory.</p>




<p>I use LetsEncrypt certificates, which are managed automatically by certbot and stored in the directory /etc/letsencrypt on my host server. The certificates are organized in the following structure:</p>




<ul class="wp-block-list">
	<li>/etc/letsencrypt/live/&lt;domain&gt; contains only symlinks that point to the real files in /etc/letsencrypt/archive/&lt;domain&gt;.<br></li>
</ul>



<p>Copy the fullchain.pem and privkey.pem files from /etc/letsencrypt/live/&lt;domain&gt; to /opt/vaultwarden/certs and set the permissions accordingly. </p>




<pre class="wp-block-code"><code>#create certs directory
mkdir /opt/vaultwarden/certs

#change into certs directory
cd /opt/vaultwarden/certs

#copy the files (root is required to read /etc/letsencrypt)
sudo cp /etc/letsencrypt/live/&lt;domain&gt;/fullchain.pem fullchain.pem
sudo cp /etc/letsencrypt/live/&lt;domain&gt;/privkey.pem privkey.pem

#set owner and group vaultwarden
sudo chown vaultwarden:vaultwarden fullchain.pem
sudo chown vaultwarden:vaultwarden privkey.pem

#set access (read/write only for vaultwarden)
chmod 600 privkey.pem
chmod 600 fullchain.pem

</code></pre>



<p><strong>note:</strong> Unless you specify the -P or -d option, the cp command follows the symlinks and places the actual files (not the symlinks) into the directory /opt/vaultwarden/certs. This is important when the certificates have been renewed and new certificate files are stored behind the symlinks.</p>
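<p>This dereferencing behaviour is easy to verify. The sketch below mimics the letsencrypt live/archive layout in a temporary directory (all paths and file contents are made up for the demo):</p>

```shell
# Show that plain cp dereferences symlinks, mimicking the
# letsencrypt live/archive layout in a temp directory
WORK=$(mktemp -d)
mkdir "$WORK/archive" "$WORK/live" "$WORK/certs"
echo "cert-v1" > "$WORK/archive/fullchain1.pem"
ln -s "$WORK/archive/fullchain1.pem" "$WORK/live/fullchain.pem"

# Without -P/-d, cp copies the file the symlink points to
cp "$WORK/live/fullchain.pem" "$WORK/certs/fullchain.pem"

# The copy is a regular file containing the real data
test ! -L "$WORK/certs/fullchain.pem" && cat "$WORK/certs/fullchain.pem"
```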




<h3 class="wp-block-heading">Run Vaultwarden and Caddy as docker containers</h3>



<p>The following setup creates a secure, email-enabled Vaultwarden instance behind a Caddy reverse proxy with HTTPS and admin access, running entirely via Docker.</p>




<h4 class="wp-block-heading">Create docker-compose.yml</h4>



<p>This docker-compose.yml file sets up two services Vaultwarden and Caddy to host a self-hosted password manager with HTTPS support.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      DOMAIN: "&lt;yourDomain&gt;"
      SIGNUPS_ALLOWED: "false"
      SMTP_HOST: "&lt;yourSmtpServer&gt;"
      SMTP_FROM: "&lt;yourEmail&gt;"
      SMTP_FROM_NAME: "&lt;yourName&gt;"
      SMTP_USERNAME: "&lt;yourEmail&gt;"
      SMTP_PASSWORD: "&lt;yourSmtpPasswd&gt;"
      SMTP_SECURITY: "force_tls"
      SMTP_PORT: "465"
      ADMIN_TOKEN: '&lt;yourAdminToken&gt;'
    volumes:
      - ./vw-data:/data

  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 443:443
      - 443:443/udp 
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
      - ./certs:/certs:ro
    environment:
      DOMAIN: "&lt;yourDomain&gt;"
      LOG_FILE: "/data/access.log"
</code></pre>



<p><strong>vaultwarden service:</strong></p>




<p>Runs the Vaultwarden server (a lightweight Bitwarden-compatible backend).</p>




<ul class="wp-block-list">
	<li>Disables user signups (SIGNUPS_ALLOWED: &#8220;false&#8221;).</li>
	<li>Configures SMTP settings for sending emails (e.g. for password resets).</li>
	<li>Sets a secure admin token (using Argon2 hash) to access the /admin interface.</li>
	<li>Persists Vaultwarden data to ./vw-data on the host.<br></li>
</ul>



<p><strong>Enable admin page access:</strong></p>




<p>With ADMIN_TOKEN set we enable login to the admin page via &lt;yourDomain&gt;/admin. First create a secure admin password. The &lt;yourAdminToken&gt; value is created by piping your admin password into argon2. The result is a hash value that must be slightly modified and then inserted as &lt;yourAdminToken&gt;. It is important that you use single quotes in docker-compose.yml when you insert &lt;yourAdminToken&gt;.</p>




<pre class="wp-block-code"><code>sudo apt install -y argon2
echo -n '&lt;yourAdminPassword&gt;' | argon2 somesalt -e

#This is the result of the argon2 hashing
$argon2i$v=19$m=4096,t=3,p=1$c29tZXNhbHQ$D...
</code></pre>



<p>Then you modify the hash by doubling every $ sign (writing $$ instead of $), because docker-compose would otherwise interpret $ as the start of a variable reference. In this example there are 5 $ signs to escape.</p>




<pre class="wp-block-code"><code>#original value
$argon2i$v=19$m=4096,t=3,p=1$c29tZXNhbHQ$D...

#modified value
$$argon2i$$v=19$$m=4096,t=3,p=1$$c29tZXNhbHQ$$D...
</code></pre>
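<p>The doubling can also be automated with <em>sed</em>. This is a small sketch; the hash below is just an example value, not a usable token:</p>

```shell
# Double every $ in the hash so docker-compose does not treat it as a
# variable reference (the hash below is just an example value)
HASH='$argon2i$v=19$m=4096,t=3,p=1$c29tZXNhbHQ$Dxyz'
ESCAPED=$(printf '%s' "$HASH" | sed 's/\$/$$/g')
echo "$ESCAPED"
```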



<p>Then you put the modified value in single quotes into docker-compose.yml.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

.....

ADMIN_TOKEN: '$$argon2i$$v=19$$m=4096,t=3,p=1$$c29thbHQ$$D...'

</code></pre>



<p>Now you can access the admin page with &lt;yourDomain&gt;/admin and login with your admin password. </p>




<p><strong>caddy service:</strong></p>




<p>Uses Caddy web server to reverse proxy to Vaultwarden.</p>




<ul class="wp-block-list">
	<li>Handles HTTPS using custom certificates from ./certs.</li>
	<li>Binds to port 443 for secure access.</li>
	<li>Reads its configuration from ./Caddyfile.</li>
	<li>Logs access to /data/access.log (mapped from ./caddy-data on the host).<br></li>
</ul>



<h4 class="wp-block-heading">Create Caddyfile</h4>



<p>Caddy is a modern, powerful web server that automatically handles HTTPS, reverse proxying, and more. Caddy acts as a secure HTTPS reverse proxy, forwarding external requests to the Vaultwarden Docker container running on internal port 80.</p>




<p>This Caddyfile defines how Caddy should serve and protect your vaultwarden instance over HTTPS.</p>




<pre class="wp-block-code"><code>#Caddyfile
https://&lt;domain&gt; {
  log {
    level INFO
    output file /data/access.log {
      roll_size 10MB
      roll_keep 10
    }
  }

  # Use custom certificate and key
  tls /certs/fullchain.pem /certs/privkey.pem

  # This setting may have compatibility issues with some browsers
  # (e.g., attachment downloading on Firefox). Try disabling this
  # if you encounter issues.
  encode zstd gzip

  # Admin path matcher
  @adminPath path /admin*
  
  # Basic Auth for admin access
  handle @adminPath {
    # If admin path require basic auth
    basicauth {
      superadmin &lt;passwdhash&gt;
    }

    reverse_proxy vaultwarden:80 {
      header_up X-Real-IP {remote_host}
    }
  }

  # Everything else
  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<p><strong>Domain</strong></p>




<pre class="wp-block-code"><code>https://&lt;domain&gt;
</code></pre>



<p>This defines the domain name Caddy listens on (e.g. https://yourinstance.example.com).</p>




<p><strong>Logging</strong></p>




<pre class="wp-block-code"><code>log {
  level INFO
  output file /data/access.log {
    roll_size 10MB
    roll_keep 10
  }
}
</code></pre>



<p>Logs all access to a file inside the container (/data/access.log), with log rotation.</p>




<p><strong>TLS with Custom Certificates</strong></p>




<pre class="wp-block-code"><code>tls /certs/fullchain.pem /certs/privkey.pem
</code></pre>



<p>Use your own Let&#8217;s Encrypt certificates from mounted files rather than auto-generating them.</p>




<p><strong>Compression</strong></p>




<pre class="wp-block-code"><code>encode zstd gzip
</code></pre>



<p>Enables modern compression methods to improve performance, though may cause issues with attachments on some browsers.</p>




<p><strong>Admin Area Protection</strong></p>




<pre class="wp-block-code"><code>@adminPath path /admin*
</code></pre>



<p>Matches all requests to /admin paths.</p>




<pre class="wp-block-code"><code>handle @adminPath {
  basicauth {
    superadmin &lt;passwdhash&gt;
  }

  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<ul class="wp-block-list">
	<li>Requires HTTP Basic Auth for access to /admin.</li>
	<li>Proxies/Forwards authenticated admin requests to the Vaultwarden container.</li>
	<li>Ensures the backend sees the original client IP address.<br></li>
</ul>



<p><strong>All Other Requests</strong></p>




<pre class="wp-block-code"><code>reverse_proxy vaultwarden:80 {
  header_up X-Real-IP {remote_host}
}
</code></pre>



<ul class="wp-block-list">
	<li>Proxies/Forwards all non-/admin traffic directly to Vaultwarden container.</li>
	<li>Ensures the backend sees the original client IP address.<br></li>
</ul>



<p><strong>Protect your admin page</strong></p>




<p>To protect the admin page we use HTTP Basic Auth. This means that whenever you access &lt;yourDomain&gt;/admin, a login window pops up in your browser and asks for a user name and a password. We use htpasswd, which is part of apache2-utils.</p>




<pre class="wp-block-code"><code>#install apache2-utils if not available
sudo apt install apache2-utils

#Create a hash for the user admin (you can use any user-name) 
htpasswd -nB admin

New password: 
Re-type new password: 
admin:$2y$05$HZukVJWhWMrT7qMO2n65bm/5JYlt5tO...

</code></pre>



<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Option
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>-n</code>
			</td>
			<td>
				Displays the result only on the console instead of writing it to a file.
			</td>
		</tr>
		<tr>
			<td>
				<code>-B</code>
			</td>
			<td>
				Uses the bcrypt hash algorithm, which is supported by Caddy and is very secure.
			</td>
		</tr>
		<tr>
			<td>
				<code>admin</code>
			</td>
			<td>
				The username for basic auth access (for example admin).
			</td>
		</tr>
	</tbody>
</table>
<figcaption>htpasswd options</figcaption>
</figure>



<p>Then you insert only the hash (without the leading <em>admin:</em>) into the Caddyfile.</p>




<pre class="wp-block-code"><code>#Caddyfile

.....

handle @adminPath {
  basicauth {
    admin $2y$05$HZukVJWhWMrT7qMO2n65bm/5JYlt5tO...
  }
  .....
}

</code></pre>



<h4 class="wp-block-heading">Create ssl certificates with Letsencrypt and certbot</h4>



<p>You can follow the instructions in my post on <a href="https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/" title="SSL Certificates with Letsencrypt">digitaldocblog.com</a>. After following those instructions, your SSL certificates are installed on your server in standalone mode. Whenever you renew your certificates, certbot initiates the domain validation challenge and tries to start a temporary server listening on port 80. Because we already run a web server, this port is blocked and the web server must be stopped before the renewal process can start.</p>




<p>To avoid this we change the certbot renewal from standalone mode to webroot. Webroot is a method that leverages your existing web server to handle the domain validation challenge without stopping it. Certbot places a temporary file with a unique token in a specific directory within your web server&#8217;s public &#8220;webroot&#8221; directory (e.g., /var/www/html/.well-known/acme-challenge/). Let&#8217;s Encrypt then sends a request to your domain to retrieve this file. Since your web server is already running, it can serve the file without any interruption to your website&#8217;s availability.</p>
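<p>The webroot flow can be simulated with plain shell commands. In this sketch a temporary directory stands in for the real webroot, and reading the file back stands in for the HTTP request from Let&#8217;s Encrypt:</p>

```shell
# Simulate the webroot challenge flow; on the real server the webroot
# is the directory served by your web server (e.g. /var/www/html)
WEBROOT=$(mktemp -d)
TOKEN="demo-token-12345"

# 1. certbot writes the challenge file into the webroot ...
mkdir -p "$WEBROOT/.well-known/acme-challenge"
printf '%s' "$TOKEN" > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# 2. ... and Let's Encrypt fetches it via http://yourDomain/.well-known/
#    acme-challenge/<token>; here we simply read the file back
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```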




<p>In our <em>docker-compose.yml</em> we defined the backend service <em>vaultwarden</em> listening on port 80, and the web server service <em>caddy</em> listening on port 443.</p>




<p>In our <em>Caddyfile</em> we specified a web server only for <em>https://&lt;Domain&gt;</em>, working as a reverse proxy. Any request for <em>https://&lt;Domain&gt;</em> is forwarded to the backend service listening on port 80 (<em>vaultwarden:80</em>).</p>




<p>To enable webroot for the certbot certificate renewal we must change the <em>docker-compose.yml</em> file. In the ports section of the <em>caddy</em> service we must allow port 80 and expose a webroot directory via port 80, used only to serve the domain validation challenge file. Therefore we create the directory <em>/opt/vaultwarden/caddy_acme</em> on the local host. In the volumes section of the caddy service we map this local directory 1:1 into the caddy container. Note: the dot notation <em>./caddy_acme:/caddy_acme</em> requires that the <em>Caddyfile</em> lives in the same directory as <em>caddy_acme</em>, i.e. <em>/opt/vaultwarden/Caddyfile</em>.</p>




<pre class="wp-block-code"><code>#docker-compose.yml

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      DOMAIN: "https://bitwarden.rottlaender.eu"
      SIGNUPS_ALLOWED: "false"
      SMTP_HOST: "smtp.strato.de"
      SMTP_FROM: "bw@bitwarden.rottlaender.eu"
      SMTP_FROM_NAME: "Vaultwarden"
      SMTP_USERNAME: "bw@bitwarden.rottlaender.eu"
      SMTP_PASSWORD: "&lt;yourSmtpPasswd&gt;"
      SMTP_SECURITY: "force_tls"
      SMTP_PORT: "465"
      ADMIN_TOKEN: '$$argon2i$$v=19$$m=4096,t=3,p=1$$c29tZXNhbHQ$$D/yu7vPhcpPz8Kk7G/R34YSO+NgtLzai0wVGSGL0RDE'
    volumes:
      - ./vw-data:/data

  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 80:80/udp
      - 443:443
      - 443:443/udp
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
      - ./certs:/certs:ro
      - ./caddy_acme:/caddy_acme
    environment:
      DOMAIN: "https://bitwarden.rottlaender.eu"
      LOG_FILE: "/data/access.log"
</code></pre>



<p>With this configuration <em>caddy</em> listens on port 80 and on port 443. In the current ufw firewall configuration we only allow the ports 22 and 443.</p>




<pre class="wp-block-code"><code>#Check the firewall status
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)
</code></pre>



<p>We must open port 80 to enable the certbot webroot certificate renewal.</p>




<pre class="wp-block-code"><code># open port 80
sudo ufw allow 80/tcp

# Check the firewall status
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
80/tcp                     ALLOW IN    Anywhere                  
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)             
80/tcp (v6)                ALLOW IN    Anywhere (v6)
</code></pre>



<p>Then we must also change the <em>Caddyfile</em>. We must insert a <em>http://..</em> block to enable <em>caddy</em> to serve the ACME challenge.</p>




<pre class="wp-block-code"><code>#Caddyfile

http://bitwarden.rottlaender.eu {
    # Serve ACME Challenge at Port 80
    handle_path /.well-known/acme-challenge/* {
        root * /caddy_acme
        file_server
    }

    # Any other request redirect to HTTPS
    @notACME not path /.well-known/acme-challenge/*
    handle @notACME {
        redir https://bitwarden.rottlaender.eu{uri} permanent
    }
}

https://bitwarden.rottlaender.eu {
  log {
    level INFO
    output file /data/access.log {
      roll_size 10MB
      roll_keep 10
    }
  }

  # Use custom certificate and key
  tls /certs/fullchain.pem /certs/privkey.pem

  # ACME Challenge for Certbot Webroot
  handle_path /.well-known/acme-challenge/* {
    root * /caddy_acme
    file_server
  }

  # This setting may have compatibility issues with some browsers (e.g., attachment downloading on Firefox). Try disabling this if you encounter issues.
  encode zstd gzip

  # Admin path matcher
  @adminPath path /admin*
  
  # Basic Auth for admin access
  handle @adminPath {
    # If admin path require basic auth
    basicauth {
      superadmin $2y$05$HZukVJWhWMrT7qMOIenLkuf2n65bm/6n260TPXKb4Wn825JYlt5tO  # Password Hash
    }

    reverse_proxy vaultwarden:80 {
      header_up X-Real-IP {remote_host}
    }
  }

  # Everything else
  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}
</code></pre>



<p>Then we must also change <em>/etc/letsencrypt/renewal/bitwarden.rottlaender.eu.conf</em> to configure certbot to use webroot instead of standalone. Change the <em>authenticator</em> directive to <em>webroot</em> and add the <em>webroot_path</em>. Everything else stays the same.</p>




<pre class="wp-block-code"><code># renew_before_expiry = 30 days
version = 1.21.0
archive_dir = /etc/letsencrypt/archive/bitwarden.rottlaender.eu
cert = /etc/letsencrypt/live/bitwarden.rottlaender.eu/cert.pem
privkey = /etc/letsencrypt/live/bitwarden.rottlaender.eu/privkey.pem
chain = /etc/letsencrypt/live/bitwarden.rottlaender.eu/chain.pem
fullchain = /etc/letsencrypt/live/bitwarden.rottlaender.eu/fullchain.pem

# Options used in the renewal process
[renewalparams]
account = 7844ed1ad487ff139e3adaa80aa7bbab
authenticator = webroot
webroot_path = /opt/vaultwarden/caddy_acme
server = https://acme-v02.api.letsencrypt.org/directory
renew_hook = /usr/local/bin/sync-certs.sh

</code></pre>



<p>And finally we must change the entry in the <em>crontab</em>: before the <em>--deploy-hook</em> option we insert <em>--webroot -w /opt/vaultwarden/caddy_acme</em>. This renewal runs daily at 03:30. If the certificates are still valid (no renewal required), the <em>deploy-hook</em> is skipped; only when a renewal is actually performed is the script <em>/usr/local/bin/sync-certs.sh</em> executed.</p>




<pre class="wp-block-code"><code>#crontab

30 3 * * * sh -c '/usr/bin/certbot renew --webroot -w /opt/vaultwarden/caddy_acme --deploy-hook /usr/local/bin/sync-certs.sh &gt;&gt; /var/log/certbot-renew.log 2&gt;&amp;1'
</code></pre>
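<p>Before relying on the cron job, you can verify the webroot renewal end-to-end with certbot&#8217;s dry-run mode, which talks to the Let&#8217;s Encrypt staging environment and does not touch your real certificates:</p>

<pre class="wp-block-code"><code># one-off test of the webroot renewal (same options as in the crontab entry)
sudo /usr/bin/certbot renew --dry-run --webroot -w /opt/vaultwarden/caddy_acme
</code></pre>

<p>If the dry run succeeds, the scheduled renewal should work as well.</p>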



<h4 class="wp-block-heading">Create sync-certs.sh script and root crontab</h4>



<p>This script copies renewed Let&#8217;s Encrypt certificates from the standard location to the custom destination /opt/vaultwarden/certs, sets strict permissions, assigns correct ownership, and finally restarts the Caddy container (via docker-compose restart) so that it picks up the new certificates.</p>




<pre class="wp-block-code"><code>#!/bin/bash

# Variables
DOMAIN="&lt;Domain&gt;"
SRC="/etc/letsencrypt/live/$DOMAIN"
DEST="/opt/vaultwarden/certs"

# Check Source Directory
if [ ! -d "$SRC" ]; then
    echo "Certificate Path $SRC not found"
    exit 1
fi

# Check Destination Directory
if [ ! -d "$DEST" ]; then
    mkdir -p "$DEST"
    chown vaultwarden:vaultwarden "$DEST"
    chmod 700 "$DEST"
    echo "Target Path $DEST created"
fi

# Copy files (overwrite)
cp "$SRC/fullchain.pem" "$DEST/fullchain.pem"
cp "$SRC/privkey.pem" "$DEST/privkey.pem"

# set owner:group vaultwarden
chown vaultwarden:vaultwarden "$DEST/fullchain.pem"
chown vaultwarden:vaultwarden "$DEST/privkey.pem"

# set access (read write only vaultwarden)
chmod 600 "$DEST/privkey.pem"
chmod 600 "$DEST/fullchain.pem"

echo "[sync-certs] Certificates for $DOMAIN synced"

# successful sync of certificates – caddy re-start
echo "[sync-certs] re-start caddy ..."
cd /opt/vaultwarden
/usr/local/bin/docker-compose restart caddy

echo "[sync-certs] caddy reloaded new certificates"
</code></pre>
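<p>Save the script as <em>sync-certs.sh</em> and install it under the path used in the renewal configuration. A minimal sketch, using install(1), which copies the file and sets the mode in one step:</p>

<pre class="wp-block-code"><code># install the script with owner-only permissions and run it once manually to test
sudo install -m 700 sync-certs.sh /usr/local/bin/sync-certs.sh
sudo /usr/local/bin/sync-certs.sh
</code></pre>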



<p><strong>Check Source Directory</strong></p>




<pre class="wp-block-code"><code>if [ ! -d "$SRC" ]; then
    echo "Certificate Path $SRC not found"
    exit 1
fi
</code></pre>



<ul class="wp-block-list">
	<li>What it checks: Whether the Let&#8217;s Encrypt certificate source directory for the domain exists.</li>
	<li>Why it&#8217;s needed:<br><ul>
			<li>Let’s Encrypt stores certificates as symlinks in /etc/letsencrypt/live/&lt;domain&gt;.</li>
			<li>If this folder doesn&#8217;t exist, the script stops immediately to avoid copying from a missing or invalid source.</li>
		</ul></li>
	<li>Fail-safe: Prevents copying non-existent files, which would cause later commands to fail.<br></li>
</ul>



<p><strong>Check and Create Destination Directory</strong></p>




<pre class="wp-block-code"><code>if [ ! -d "$DEST" ]; then
    mkdir -p "$DEST"
    chown vaultwarden:vaultwarden "$DEST"
    chmod 700 "$DEST"
    echo "Target Path $DEST created"
fi
</code></pre>



<ul class="wp-block-list">
	<li>What it checks: Whether the destination directory for the copied certificates exists.</li>
	<li>If not:<br><ul>
			<li>It creates the directory (mkdir -p ensures parent paths are created if missing).</li>
			<li>Sets secure permissions:<br><ul>
					<li>Owner: vaultwarden</li>
					<li>Permissions: 700 – only the vaultwarden user can access the directory.</li>
				</ul></li>
		</ul></li>
	<li>Why this matters:<br><ul>
			<li>The vaultwarden container or process needs access to the certificates.</li>
			<li>These permissions ensure only vaultwarden can read the certs, improving security.<br></li>
		</ul></li>
</ul>



<h4 class="wp-block-heading">Start, Stop and Check containers</h4>



<p>Here are the most important docker-compose commands. Run these commands from the directory that contains the docker-compose.yml file, and make sure the logged-in user is in the docker group (otherwise prefix the commands with sudo).</p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Command
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>docker-compose up -d</code>
			</td>
			<td>
				Start all services defined in <code>docker-compose.yml</code> in detached mode (background).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose down</code>
			</td>
			<td>
				Stop and remove all services and associated networks/volumes (defined in the file).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose restart</code>
			</td>
			<td>
				Restart all services.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose stop</code>
			</td>
			<td>
				Stop all running services (without removing them).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose start</code>
			</td>
			<td>
				Start services that were previously stopped.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose logs</code>
			</td>
			<td>
				Show logs from all services.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker-compose logs -f</code>
			</td>
			<td>
				Tail (follow) logs in real time.
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Docker-compose commands to start, stop and check</figcaption>
</figure>



<p>Here are the most important docker commands to check the status of containers. </p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Command
			</th>
			<th>
				Description
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>docker ps</code>
			</td>
			<td>
				List <strong>running</strong> containers.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker ps -a</code>
			</td>
			<td>
				List <strong>all</strong> containers (running + stopped).
			</td>
		</tr>
		<tr>
			<td>
				<code>docker logs &lt;container-name&gt;</code>
			</td>
			<td>
				Show logs of a specific container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker logs -f &lt;container-name&gt;</code>
			</td>
			<td>
				Tail logs in real time.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker inspect &lt;container-name&gt;</code>
			</td>
			<td>
				Show detailed info about a container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker top &lt;container-name&gt;</code>
			</td>
			<td>
				Show running processes inside the container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker exec -it &lt;container-name&gt; /bin/sh</code>
			</td>
			<td>
				Start a shell session in the container.
			</td>
		</tr>
		<tr>
			<td>
				<code>docker stats</code>
			</td>
			<td>
				Live resource usage (CPU, RAM, etc.) of containers.
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Docker commands to check containers</figcaption>
</figure>



<h4 class="wp-block-heading">Back up the vaultwarden data</h4>



<p>To back up your database you must log in to your remote server and navigate to the directory <em>/opt/vaultwarden/vw-data</em>. </p>




<pre class="wp-block-code"><code>cd /opt/vaultwarden/vw-data
ls -l /opt/vaultwarden/vw-data

drwxr-xr-x 2 root root    4096 Mai 14 07:29 attachments
-rw-r--r-- 1 root root 1437696 Mai 29 10:03 db_20250529_080308.sqlite3
-rw-r--r-- 1 root root 1437696 Mai 29 10:04 db_20250529_080448.sqlite3
-rw-r--r-- 1 root root 1445888 Aug  9 08:31 db_20250809_063113.sqlite3
-rw-r--r-- 1 root root 1552384 Aug 10 07:26 db.sqlite3
-rw-r--r-- 1 root root   32768 Sep  3 08:27 db.sqlite3-shm
-rw-r--r-- 1 root root  135992 Sep  3 08:27 db.sqlite3-wal
drwxr-xr-x 2 root root   16384 Sep  3 07:57 icon_cache
-rw-r--r-- 1 root root    1675 Mai 14 07:29 rsa_key.pem
drwxr-xr-x 2 root root    4096 Mai 14 07:29 sends
drwxr-xr-x 2 root root    4096 Mai 14 07:29 tmp

</code></pre>



<p>Before we run the backup, here is some background information.</p>




<p><strong>Note:</strong> Please be aware that you are currently navigating on your <em>remote server</em>, not inside the vaultwarden container. Under the service <em>vaultwarden</em> and <em>volumes</em> we defined in <em>docker-compose.yml</em> that the directory <em>./vw-data</em> is mounted into the container as <em>/data</em>. This means that these directories are synchronized, and you can verify this by navigating within the container as follows.</p>
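<p>The mapping the note refers to looks like this in <em>docker-compose.yml</em> (excerpt only; service and path names as used in this post):</p>

<pre class="wp-block-code"><code>#docker-compose.yml (excerpt)
services:
  vaultwarden:
    volumes:
      - ./vw-data:/data
</code></pre>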




<p>To navigate within the container <em>vaultwarden</em> you can run the following commands. </p>




<pre class="wp-block-code"><code>#list your containers home directory
docker exec vaultwarden ls -l /

#list the data directory within your container
docker exec vaultwarden ls -l /data
</code></pre>



<p>With <em>docker exec vaultwarden ls -l /</em> you list the container&#8217;s root directory. Here you find, among others, the file <em>vaultwarden</em>, which is the main program. With <em>docker exec vaultwarden ls -l /data</em> you list the files in <em>/data</em> on the container side. You see exactly the same files as in <em>./vw-data</em> on your remote server.</p>




<p>All data of your vaultwarden service is stored in <em>/data</em> on the container side: the database <em>db.sqlite3</em> plus various directories and other files. </p>




<p>To back up this data you run the program <em>vaultwarden</em> with the option <em>backup</em>. </p>




<pre class="wp-block-code"><code>#interactive mode
docker exec -it vaultwarden /vaultwarden backup

#standard mode
docker exec vaultwarden /vaultwarden backup
Backup to 'data/db_20250905_045937.sqlite3' was successful

</code></pre>



<p>The <em>-it</em> option is not strictly necessary; it runs the command in interactive mode with a terminal attached. </p>




<p>This command:</p>




<ol class="wp-block-list">
	<li>Reads the database location and configuration in <em>/data</em> on the container side.</li>
	<li>Creates a consistent backup copy of the SQLite database <em>db.sqlite3</em>.</li>
	<li>Stores the timestamped copy (e.g. <em>db_20250905_045937.sqlite3</em>) in the <em>/data</em> directory on the container side. Note that attachments, icons and other files are not included; copy those directories separately if you need them.<br></li>
</ol>



<p>After executing this command the backup will also be present in <em>./vw-data</em> on your remote server. </p>




<p>Then you can download the backup from your remote server to your local computer. <strong>Note:</strong> The local computer is in my case a Mac, the machine I sit in front of and from which I connect to my remote server via <em>ssh</em>.</p>




<p>To download the backup, navigate on your local computer to the directory where you want to store your backup files. Then run the <em>scp</em> command with a <em>.</em> at the end to download the backup file from your remote server into the current directory, which should be your backup directory on your local computer. </p>




<pre class="wp-block-code"><code>#on your local computer
cd /Users/patrick/Software/docker/vaultwarden/backup

scp user@remote-host:/opt/vaultwarden/vw-data/&lt;backupfile&gt; .

</code></pre>
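<p>As an optional sanity check you can verify the integrity of the downloaded SQLite copy. The sketch below uses Python&#8217;s bundled sqlite3 module, so no extra tools are required (the file name is just an example):</p>

<pre class="wp-block-code"><code># prints "ok" for an intact database file
python3 -c 'import sqlite3,sys; print(sqlite3.connect(sys.argv[1]).execute("PRAGMA integrity_check").fetchone()[0])' db_20250905_045937.sqlite3
</code></pre>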



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SSL Certificates with Lets Encrypt and certbot on a Linux Server</title>
		<link>https://digitaldocblog.com/webserver/ssl-certificates-with-lets-encrypt-and-certbot-on-a-linux-server/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 23 May 2025 12:29:11 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=241</guid>

					<description><![CDATA[To install our certificates for SSL we use a tool called certbot. We install certbot with apt on our Linux machine. Then we run certbot. The Letsencrypt certificates are automatically&#8230;]]></description>
										<content:encoded><![CDATA[
<p>To install our certificates for SSL we use a tool called <em>certbot</em>. We install <em>certbot</em> with apt on our Linux machine.</p>




<pre class="wp-block-code"><code>sudo apt update
sudo apt install certbot
</code></pre>



<p>Then we run certbot. </p>




<pre class="wp-block-code"><code>sudo certbot certonly --standalone -d &lt;yourDomain&gt;

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): &lt;yourEmail&gt;

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.5-February-24-2025.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for &lt;yourDomain&gt;

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/&lt;yourDomain&gt;/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/&lt;yourDomain&gt;/privkey.pem
This certificate expires on 2025-07-25.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
</code></pre>



<p>The Let&#8217;s Encrypt certificates are automatically stored by certbot in the directory /etc/letsencrypt. Please check the permissions accordingly. </p>




<pre class="wp-block-code"><code>#700 for letsencrypt and owned by root
sudo ls -l /etc
drwx------ 9 root  root    4096 Mai 22 04:49 letsencrypt

#750 for archive and live and owned by root 
ls -l /etc/letsencrypt
drwx------ 3 root root 4096 Apr 26 10:16 accounts
drwxr-x--- 3 root root 4096 Apr 26 10:18 archive
-rw-r--r-- 1 root root  207 Nov 12  2021 cli.ini
drwx------ 2 root root 4096 Mai 16 07:15 csr
drwx------ 2 root root 4096 Mai 16 07:15 keys
drwxr-x--- 3 root root 4096 Apr 26 10:18 live
drwxr-x--- 2 root root 4096 Mai 16 07:15 renewal
drwxr-xr-x 5 root root 4096 Apr 26 10:16 renewal-hooks

#700 for the directory with the real certificates 
sudo ls -l /etc/letsencrypt/archive
drwx------ 2 root root 4096 Mai 16 07:15 &lt;domain&gt;

#750 for the directory with the symlinks
sudo ls -l /etc/letsencrypt/live
drwxr-x--- 2 root root 4096 Mai 16 07:15 &lt;domain&gt;
</code></pre>



<p><strong>Note:</strong> Permissions in Linux are set using the following scheme. </p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Owner
			</th>
			<th>
				Group
			</th>
			<th>
				Others
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				Read (r) = 4 &#8211; Write (w) = 2 &#8211; execute (x) = 1
			</td>
			<td>
				Read (r) = 4 &#8211; Write (w) = 2 &#8211; execute (x) = 1
			</td>
			<td>
				Read (r) = 4 &#8211; Write (w) = 2 &#8211; execute (x) = 1
			</td>
		</tr>
		<tr>
			<td>
				<strong>Example:</strong>
			</td>
			<td>
				
			</td>
			<td>
				
			</td>
		</tr>
		<tr>
			<td>
				7 = r(4)+w(2)+x(1)
			</td>
			<td>
				5 = r(4)+x(1)
			</td>
			<td>
				0
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Permissions in Linux</figcaption>
</figure>
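<p>You can check the scheme yourself in a throwaway directory (a self-contained demo, not related to /etc/letsencrypt):</p>

<pre class="wp-block-code"><code>cd "$(mktemp -d)"
mkdir demo
chmod 750 demo           # 7 = rwx for owner, 5 = r-x for group, 0 = no access for others
stat -c '%a %A' demo     # prints: 750 drwxr-x---
</code></pre>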



<p>The following permissions are recommended:</p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Directory
			</th>
			<th>
				Permissions
			</th>
			<th>
				Recommendation
			</th>
			<th>
				Comment
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>accounts/</code>
			</td>
			<td>
				<code>drwx------</code>
			</td>
			<td>
				700. Ok
			</td>
			<td>
				Contains Let&#8217;s Encrypt account information – <strong>only root is allowed access</strong>
			</td>
		</tr>
		<tr>
			<td>
				<code>archive/</code>
			</td>
			<td>
				<code>drwxr-x---</code>
			</td>
			<td>
				750 or 700
			</td>
			<td>
				Contains <strong>real keys/certificates</strong> – <strong>do not make publicly readable</strong>
			</td>
		</tr>
		<tr>
			<td>
				<code>cli.ini</code>
			</td>
			<td>
				<code>-rw-r--r--</code>
			</td>
			<td>
				644 or 600 if sensitive content
			</td>
			<td>
				Not critical unless cli.ini contains API keys or email addresses
			</td>
		</tr>
		<tr>
			<td>
				<code>csr/</code>
			</td>
			<td>
				<code>drwx------</code>
			</td>
			<td>
				700. Ok
			</td>
			<td>
				CSR files (may contain sensitive information) – not readable by others
			</td>
		</tr>
		<tr>
			<td>
				<code>keys/</code>
			</td>
			<td>
				<code>drwx------</code>
			</td>
			<td>
				700. Ok
			</td>
			<td>
				Contains private keys &#8211; <strong>only root is allowed access</strong>
			</td>
		</tr>
		<tr>
			<td>
				<code>live/</code>
			</td>
			<td>
				<code>drwxr-x---</code>
			</td>
			<td>
				750. Ok
			</td>
			<td>
				Only root + webserver group should have access to symlinks
			</td>
		</tr>
		<tr>
			<td>
				<code>renewal/</code>
			</td>
			<td>
				<code>drwxr-x---</code>
			</td>
			<td>
				750 or 700
			</td>
			<td>
				Contains configs with paths to key files – do not leave open
			</td>
		</tr>
		<tr>
			<td>
				<code>renewal-hooks/</code>
			</td>
			<td>
				<code>drwxr-xr-x</code>
			</td>
			<td>
				755. Ok
			</td>
			<td>
				Contains mostly harmless scripts – leaving it readable is okay unless you have sensitive hooks
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Lets Encrypt Permissions</figcaption>
</figure>



<p>The keys are organized in the following structure: </p>




<ul class="wp-block-list">
	<li>In /etc/letsencrypt/live/&lt;domain&gt; there are only symlinks that point to the real key files</li>
	<li>the real files are stored in /etc/letsencrypt/archive/&lt;domain&gt;<br></li>
</ul>



<p>A symlink itself in /etc/letsencrypt/live/&lt;domain&gt; has no real permissions. It effectively inherits the access rights of the file it points to. However: ls -l shows symbolic permissions which, for symlinks, are usually lrwxrwxrwx. This isn&#8217;t problematic in itself, since access is always controlled by the target files.</p>




<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/live/&lt;domain&gt;
total 4
lrwxrwxrwx 1 root root  48 Mai 16 07:15 cert.pem -&gt; ../../
lrwxrwxrwx 1 root root  49 Mai 16 07:15 chain.pem -&gt; ../../
lrwxrwxrwx 1 root root  53 Mai 16 07:15 fullchain.pem -&gt; ../../
lrwxrwxrwx 1 root root  51 Mai 16 07:15 privkey.pem -&gt; ../../
</code></pre>
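<p>You can reproduce this behaviour with two throwaway files (a self-contained demo; the names only mimic the live/ and archive/ layout):</p>

<pre class="wp-block-code"><code>cd "$(mktemp -d)"
install -m 600 /dev/null privkey1.pem          # stands in for the real key in archive/
ln -s privkey1.pem privkey.pem                 # stands in for the symlink in live/
ls -l privkey.pem                              # shown as lrwxrwxrwx
stat -c '%a' "$(readlink -f privkey.pem)"      # prints: 600 - the target's permissions apply
</code></pre>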



<p>Important are the permissions of the actual certificate files in the directory /etc/letsencrypt/archive/&lt;domain&gt;. The permissions should be set as follows.</p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				File
			</th>
			<th>
				Required permissions
			</th>
			<th>
				Owner
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>privkey*.pem</code>
			</td>
			<td>
				<code>600</code>
			</td>
			<td>
				<code>root:root</code>
			</td>
		</tr>
		<tr>
			<td>
				<code>cert*.pem</code>
			</td>
			<td>
				<code>644</code>
			</td>
			<td>
				<code>root:root</code>
			</td>
		</tr>
		<tr>
			<td>
				<code>chain*.pem</code>
			</td>
			<td>
				<code>644</code>
			</td>
			<td>
				<code>root:root</code>
			</td>
		</tr>
		<tr>
			<td>
				<code>fullchain*.pem</code>
			</td>
			<td>
				<code>644</code>
			</td>
			<td>
				<code>root:root</code>
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Permissions on the actual certificate and key files</figcaption>
</figure>



<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/archive/&lt;domain&gt;
total 32
-rw-r--r-- 1 root root 1858 Apr 26 10:18 cert1.pem
-rw-r--r-- 1 root root 1801 Apr 26 10:18 chain1.pem
-rw-r--r-- 1 root root 3659 Apr 26 10:18 fullchain1.pem
-rw------- 1 root root 1704 Apr 26 10:18 privkey1.pem
</code></pre>



<p>You should pay special attention to some other files.</p>




<p><strong>/etc/letsencrypt/accounts/acme-&lt;v02.api&gt;/directory/&lt;accountID&gt;</strong></p>




<p>The files meta.json, regr.json and private_key.json in the accounts directory should be owned by root and only root should have access. These files contain your Let&#8217;s Encrypt account credentials, especially the private account key, and are therefore highly sensitive.</p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				Files
			</th>
			<th>
				Recommendation
			</th>
			<th>
				Comment
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				<code>private_key.json</code>
			</td>
			<td>
				400
			</td>
			<td>
				contains private account key
			</td>
		</tr>
		<tr>
			<td>
				<code>meta.json</code>, <code>regr.json</code>
			</td>
			<td>
				600
			</td>
			<td>
				contains metadata, optionally sensitive information (email, ID)
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Restricted Permissions for files in the account directory</figcaption>
</figure>



<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/accounts/acme-&lt;v02.api&gt;/directory/&lt;accountID&gt;
total 12
-rw------- 1 root root   85 Apr 26 10:18 meta.json
-r-------- 1 root root 1632 Apr 26 10:17 private_key.json
-rw------- 1 root root   80 Apr 26 10:17 regr.json
</code></pre>



<p><strong>private_key.json:</strong> Contains your ACME account key – this is like a password for your certificate management.</p>




<ul class="wp-block-list">
	<li>Read-only: The file must be readable by root so that Let&#8217;s Encrypt tools like certbot can use this key to communicate with the ACME server when renewing or issuing certificates.</li>
	<li>Write-protected: Making the file not writable, even for root, prevents it from being modified accidentally or through an unsafe operation. The private key must not change, as that could lock you out of your Let&#8217;s Encrypt account.</li>
</ul>



<p><strong>meta.json:</strong> Contains information such as the registered email address.</p>




<p><strong>regr.json:</strong> Let&#8217;s Encrypt registration details.</p>




<p>A leak of these files would allow attackers to:</p>




<ul class="wp-block-list">
	<li>Hijack your Let&#8217;s Encrypt account</li>
	<li>Issue certificates for any domain (if DNS access is compromised)</li>
	<li>Tamper with or delete existing certificates<br></li>
</ul>



<p><strong> /etc/letsencrypt/csr and /etc/letsencrypt/keys directory</strong></p>




<p>The files in the /etc/letsencrypt/csr and /etc/letsencrypt/keys directories should have the following permissions to ensure security.</p>




<figure class="wp-block-table">
<table>
	<thead>
		<tr>
			<th>
				File Type
			</th>
			<th>
				Path
			</th>
			<th>
				Permission
			</th>
			<th>
				Why
			</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>
				Private Key
			</td>
			<td>
				/etc/letsencrypt/keys/<em>.pem</em>
			</td>
			<td>
				 Must be 600
			</td>
			<td>
				Contains the <strong>private key</strong> for the certificate
			</td>
		</tr>
		<tr>
			<td>
				CSR (Certificate Signing Request)
			</td>
			<td>
				/etc/letsencrypt/csr/<em>.pem</em>
			</td>
			<td>
				Recommended 600 (alternatively 644 less secure)
			</td>
			<td>
				Contains <strong>no private key</strong>, just public key + metadata
			</td>
		</tr>
	</tbody>
</table>
<figcaption>Permissions for csr and key files </figcaption>
</figure>



<p>Directory <strong>/etc/letsencrypt/csr</strong> stores Certificate Signing Requests (CSRs) generated by Certbot during certificate issuance or renewal. These are not used after issuance — they are kept for reference or debugging, not for active use.</p>




<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/csr
total 8
-rw------- 1 root root 936 Apr 26 10:18 0000_csr-certbot.pem
-rw-r--r-- 1 root root 936 Mai 16 07:15 0001_csr-certbot.pem
</code></pre>



<p>CSR files contain requests for certificate issuance and may contain sensitive information, such as the public key associated with a private key. Although the CSR file itself does not contain the private key, it can still pose informational risks because it represents an association with a private key. </p>




<p><strong>Security measure:</strong> Access should be restricted to root to prevent an attacker from tampering with or accessing the CSR files. </p>




<p><strong>Issue:</strong> After a renewal of the certificates on May 16 you can see above that certbot set the permissions of the file 0001_csr-certbot.pem to 644. You can correct this with the following command. </p>




<pre class="wp-block-code"><code>sudo find /etc/letsencrypt/csr -name '*.pem' -exec chmod 600 {} +
</code></pre>



<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/csr
total 8
-rw------- 1 root root 936 Apr 26 10:18 0000_csr-certbot.pem
-rw------- 1 root root 936 Mai 16 07:15 0001_csr-certbot.pem
</code></pre>



<p>Directory <strong>/etc/letsencrypt/keys</strong> holds actual private keys used to sign CSRs and serve HTTPS. They must never be world-readable.</p>




<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/keys
total 8
-rw------- 1 root root 1704 Apr 26 10:18 0000_key-certbot.pem
-rw------- 1 root root 1704 Mai 16 07:15 0001_key-certbot.pem
</code></pre>



<p>Private keys are critical for security because they are used for authentication during certificate requests and for encrypting and decrypting data. The private key should be readable and writable only by root to prevent unauthorized access. Important: Any user with access to these key files could perform encryption operations or misuse certificates.</p>




<p><strong> /etc/letsencrypt/renewal/&lt;yourDomain&gt;.conf </strong></p>




<p>The file /etc/letsencrypt/renewal/&lt;yourDomain&gt;.conf contains the configuration for automatic certificate renewal and can contain sensitive data, such as the paths to private keys and certificates. Therefore, it is important to secure this file accordingly. The file should only be readable and writable by root, as it contains potentially sensitive information about the certificate configuration and the paths to private keys.</p>




<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/renewal
total 4
-rw------- 1 root root 606 Mai 16 07:15 &lt;yourDomain&gt;.conf
</code></pre>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Containerize a nodejs app and nginx with docker on Ubuntu 22.04</title>
		<link>https://digitaldocblog.com/webserver/containerize-a-nodejs-app-and-nginx-with-docker-on-ubuntu-2204/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 03 May 2025 15:56:04 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Web-Development]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[NginX]]></category>
		<category><![CDATA[Node.js]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=232</guid>

					<description><![CDATA[What we need: We must check if these packages are already installed on our system. Therefore we use the following commands in the terminal and see if there is any&#8230;]]></description>
										<content:encoded><![CDATA[
<p>What we need:</p>



<ol class="wp-block-list">
<li><strong>docker:</strong> Docker is a tool for building, running, and managing containers. A container is a lightweight, isolated environment that packages an application and all its dependencies (Docker application).</li>



<li><strong>docker-compose:</strong> Docker Compose is a tool for defining and running multi-container Docker applications using a single file called docker-compose.yml. </li>



<li><strong>node.js:</strong> Node.js is a JavaScript runtime that lets you run JavaScript outside the browser, typically on the server. It’s built on Chrome’s V8 engine, and it’s great for building fast, scalable network applications, like APIs or web servers.</li>



<li><strong>npm:</strong> npm stands for Node Package Manager. It&#8217;s a tool that comes with Node.js and is used to install packages (libraries, tools, frameworks), manage project dependencies and share your own packages with others.</li>



<li><strong>curl:</strong> curl is a command-line tool used to send requests to URLs. It lets you interact with APIs or download content from the internet right from your terminal.</li>



<li><strong>gnupg:</strong> GnuPG (or GPG, short for Gnu Privacy Guard) is a tool for encrypting and signing data and communications. It uses public-key cryptography to encrypt, decrypt, sign, and verify files or messages.</li>



<li> <strong>ca-certificates:</strong> A collection of trusted root certificates used to validate HTTPS connections.<br></li>
</ol>



<p>We must check whether these packages are already installed on our system. To do so, run the following commands in the terminal and look at the output. If a command is not found, the package is not installed and we continue with the installation as described below. </p>



<pre class="wp-block-code"><code># check the docker components
docker --version
docker-compose --version

#check the node and npm components
node --version
npm --version

#check if required dependencies are already installed
curl --version
gpg --version
</code></pre>



<p>In case some of these packages are already installed on your system you need to reduce the installation scope of the packages accordingly. </p>



<p>We assume that none of the packages are installed yet and go through the installation process step by step. </p>



<p><strong>Step 1:</strong> Install node.js and npm from the standard Ubuntu resources. </p>



<p><strong>Step 2:</strong> </p>



<ul class="wp-block-list">
<li>Prepare the system for secure downloads from Docker resources and </li>



<li>install ca-certificates, curl and gnupg from standard Ubuntu resources</li>
</ul>



<p><strong>Step 3:</strong> Install Docker from Docker resources.</p>



<p><strong>Step 4:</strong> Install docker-compose standalone from Docker resources. </p>



<h3 class="wp-block-heading">Install Node.js and npm from Ubuntu Resources</h3>



<p>Before we start with the installation we update and upgrade all packages. Node.js is available in Ubuntu’s repositories, so you can install it with the following commands.</p>



<pre class="wp-block-code"><code>sudo apt update
sudo apt upgrade

sudo apt install -y nodejs npm
</code></pre>



<p>Verify the installation:</p>



<pre class="wp-block-code"><code>node -v
npm -v
</code></pre>



<h3 class="wp-block-heading">Prepare the System for secure Downloads from Docker</h3>



<p>To prepare our system we ensure that <em>ca-certificates, curl and gnupg</em> are available on our system.</p>



<p>To be able to install the Docker packages from the <em>external</em> (non-Ubuntu) Docker repository, <em>apt</em> must know where these resources are; otherwise <em>apt</em> would install the packages from the Ubuntu repositories, which is the default but not what we want. Therefore we must add the Docker repository to <em>apt</em>. The complete process can be followed on the <a href="https://docs.docker.com/engine/install/ubuntu/" title="Install Docker Ubuntu">Docker Manual Pages</a>. </p>



<p>When we add the Docker repository, the packages from that repository are digitally signed to ensure that they really come from Docker. <em>gnupg</em> is a tool that allows our system to check these signatures against a trusted Docker GPG key. Therefore <em>gnupg</em> must be available on our system. </p>



<p>To make sure that the GPG key is available we must download the key from the Docker site. For the download we use the <em>curl</em> tool. Therefore <em>curl</em> must be available on our system.</p>



<p>We access the Docker site via HTTPS. Here <em>ca-certificates</em> comes into play. <em>ca-certificates</em> is a collection of trusted root certificates used to validate HTTPS connections. When downloading the Docker GPG key or accessing the Docker <em>apt</em> repository via HTTPS, Ubuntu checks the site’s SSL certificate against the collection of trusted root certificates. Therefore <em> ca-certificates</em> must be available on our system.</p>



<p>To check if <em> ca-certificates</em> is already installed  we run the following command:</p>



<pre class="wp-block-code"><code>dpkg -l | grep ca-certificates
</code></pre>



<p><strong>Note:</strong> The <em>dpkg</em> command stands for Debian Package Manager and is used to manage <em>.deb</em> packages on Debian-based systems like Ubuntu. It works at a lower level than <em>apt</em>, which is a higher-level package management tool that uses <em>dpkg</em> under the hood.</p>



<p>If it’s installed, you’ll see output like this:</p>



<pre class="wp-block-code"><code>ii ca-certificates 20230311ubuntu0.22.04.1 all Common CA..
</code></pre>



<p>In this case you do not need to install <em>ca-certificates</em> as described below, but it is highly recommended to update the collection of trusted root certificates before you continue.</p>



<pre class="wp-block-code"><code>sudo update-ca-certificates
</code></pre>



<p>We assume that we must install <em>ca-certificates, curl and gnupg</em>. First, we update the system package list and upgrade the installed packages in one step to ensure everything is up-to-date. Then we install ca-certificates, curl and gnupg.</p>



<pre class="wp-block-code"><code>sudo apt update &amp;&amp; sudo apt upgrade -y

sudo apt install -y ca-certificates curl gnupg
</code></pre>



<p><strong>Add Docker GPG key:</strong></p>



<p>To install the GPG keys from Docker we run the following command.</p>



<pre class="wp-block-code"><code>sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>



<p>Let&#8217;s break down the full set of commands step by step:</p>



<p><strong>sudo install -m 0755 -d /etc/apt/keyrings ….</strong></p>



<ul class="wp-block-list">
<li><em>sudo</em> runs the command with superuser (root) privileges.  </li>



<li><em>install</em> is used for copying files and setting permissions.</li>
</ul>



<p><strong>Note:</strong> This command is only <code>sudo install</code> and not <code>sudo apt install</code>. <em>apt install</em> installs software packages from Ubuntu&#8217;s package repositories, for example <code>sudo apt install docker-ce</code>; it is used to install applications.</p>



<p>Plain <em>install</em>, by contrast, is a Unix command (part of coreutils) used to create directories, copy files, and set permissions in a single step. Example: <code>sudo install -m 0755 -d /etc/apt/keyrings</code>. It is used to prepare the system, not to install software. So this command creates the <em>/etc/apt/keyrings</em> folder with secure permissions, which is later used to store GPG keyring files (such as Docker’s signing key).</p>



<ul class="wp-block-list">
<li>-m 0755 sets file permissions:<br>
<ul class="wp-block-list">
<li><em>0755</em> means:</li>



<li>1st (0): the leading zero marks the octal notation; no special bits (setuid, setgid, sticky) are set.</li>



<li>2nd (7) is for the Owner (root): <em>read (4) + write (2) + execute (1) = 7</em>.</li>



<li>3rd (5) is for the Group (root): <em>read (4) + execute (1) = 5</em> (no write).</li>



<li>4th (5) is for Others: <em>read (4) + execute (1) = 5</em> (no write).</li>
</ul>
</li>



<li>-d tells install to create a directory</li>



<li><em>/etc/apt/keyrings</em> is the target directory where the Docker GPG key will be stored.</li>
</ul>
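


<p>The effect of <em>-m 0755</em> can be verified directly. The following sketch uses a throwaway directory under <em>/tmp</em> as a stand-in for <em>/etc/apt/keyrings</em>, so no root privileges are needed:</p>



<pre class="wp-block-code"><code>#create a directory with mode 0755 in one step
install -m 0755 -d /tmp/demo-keyrings

#print the octal permissions of the new directory
stat -c '%a' /tmp/demo-keyrings
</code></pre>



<p>The <em>stat</em> call prints <em>755</em>, confirming read/write/execute for the owner and read/execute for group and others.</p>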



<p>What it does:</p>



<ul class="wp-block-list">
<li>Ensures that the /etc/apt/keyrings directory exists.</li>



<li>Sets the correct permissions (readable but not writable by non-root users).</li>



<li>This is a <em>security best practice</em> to keep GPG keys safe from tampering.<br></li>
</ul>



<p><strong> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null …</strong></p>



<ul class="wp-block-list">
<li><em>curl</em> is a command-line tool to fetch files from a URL (we installed it before)</li>



<li>-fsSL flags to control <em>curl</em> behavior:<br>
<ul class="wp-block-list">
<li>-f (fail silently on server http errors like 404 &#8211; site or resource not found).</li>



<li>-s (silent mode, no progress output).</li>



<li>-S (shows error messages if -s is used).</li>



<li>-L (follows redirects if the URL points elsewhere).</li>
</ul>
</li>



<li><em>https://download.docker.com/linux/ubuntu/gpg</em> the URL for Docker’s GPG key file (the file name on the docker site is <em>gpg</em>).</li>



<li>| (pipe) passes the downloaded data (the <em>gpg</em> file) to another command. In this case the data will be passed to the following <em>sudo</em> command.</li>



<li><em>sudo tee /etc/apt/keyrings/docker.asc</em> writes the key into the previously created directory <em>/etc/apt/keyrings</em> as the file <em>docker.asc</em>:<br>
<ul class="wp-block-list">
<li><em>tee</em> writes the output to a file (here it is <em>docker.asc</em>) while also displaying it in the terminal.</li>



<li><em>sudo</em> ensures that the file is written with root permissions.</li>
</ul>
</li>



<li>> /dev/null redirects standard output to <em>/dev/null</em> to suppress unnecessary output. By default <em>tee</em> writes to the file and displays the data at the same time, unless you silence it with > /dev/null.<br></li>
</ul>



<p><strong>Note:</strong> <em>sudo tee…</em> runs with root permissions, so the file can be written even to protected directories such as <em>/etc/apt/keyrings/</em> (we set the permissions to 0755, see above). You can also run the <em>curl</em> command with root permissions (<em>sudo curl …</em>) and write the output directly to a file with the <em>-o</em> option: <em>sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc</em>. This is what the <a href="https://docs.docker.com/engine/install/ubuntu/" title="Docker on Ubuntu">Docker Manual Page</a> suggests, but both ways work.</p>



<p>What it does:</p>



<ul class="wp-block-list">
<li>Downloads Docker&#8217;s official GPG key.</li>



<li>Saves it securely in /etc/apt/keyrings/docker.asc.</li>



<li>Ensures the key isn’t printed to the terminal.<br></li>
</ul>



<p><strong> sudo chmod a+r /etc/apt/keyrings/docker.asc</strong></p>



<ul class="wp-block-list">
<li><em>sudo</em> runs the command (in this case the <em>chmod</em> command) as root.</li>



<li><em>chmod</em> modifies file permissions.<br>
<ul class="wp-block-list">
<li><em>a+r</em> grants <em>read (r) permission</em> to <em>all users (a)</em>.</li>
</ul>
</li>



<li><em>/etc/apt/keyrings/docker.asc</em> the file whose permissions are being modified.</li>
</ul>



<p>What it does:</p>



<ul class="wp-block-list">
<li>Ensures that all users (including apt processes) can read the GPG key.</li>



<li>This is necessary so that <em>apt</em> can verify Docker package signatures when installing updates.<br></li>
</ul>



<p>Previously, GPG files were stored in <em>/etc/apt/trusted.gpg</em>. This has changed.</p>



<p>Why Is This Necessary?</p>



<ol class="wp-block-list">
<li>Security:  </li>
</ol>



<ul class="wp-block-list">
<li>Storing GPG keys in <em>/etc/apt/keyrings/</em> instead of <em>/etc/apt/trusted.gpg</em> is a best practice.</li>



<li>Prevents malicious modifications to package signatures.</li>
</ul>



<ol class="wp-block-list" start="2">
<li>Package Verification:</li>
</ol>



<ul class="wp-block-list">
<li>The GPG key allows Ubuntu’s package manager (apt) to verify that Docker packages are genuine and not tampered with.</li>
</ul>



<ol class="wp-block-list" start="3">
<li>Future-proofing:</li>
</ol>



<ul class="wp-block-list">
<li>Newer versions of Ubuntu prefer keys in <em>/etc/apt/keyrings/</em> instead of the older <em>/etc/apt/trusted.gpg</em>.<br></li>
</ul>



<p>Final Summary:</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Command
			</th><th>
				Purpose
			</th></tr></thead><tbody><tr><td>
				sudo install -m 0755 -d /etc/apt/keyrings
			</td><td>
				Creates a secure directory for storing package keys.
			</td></tr><tr><td>
				curl -fsSL &#8230; | sudo tee /etc/apt/keyrings/docker.asc &gt; /dev/null
			</td><td>
				Downloads and saves Docker&#8217;s GPG key.
			</td></tr><tr><td>
				sudo chmod a+r /etc/apt/keyrings/docker.asc
			</td><td>
				Ensures the key can be read by apt.
			</td></tr></tbody></table><figcaption class="wp-element-caption">Add Docker GPG key</figcaption></figure>



<p><strong>Add the Docker repository:</strong></p>



<p>To add the Docker repository to the <em>apt</em> sources list we run the following command. </p>



<pre class="wp-block-code"><code>echo "deb &#91;arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>



<p>Let&#8217;s break down the command step by step:</p>



<p><strong>echo "deb…"</strong></p>



<p>The <em>echo</em> command outputs the text between the quotes. This is the APT repository entry for Docker. Let&#8217;s analyze its components:</p>



<ul class="wp-block-list">
<li>deb → Indicates that this is a Debian-based software repository.</li>



<li>arch=$(dpkg --print-architecture):<br>
<ul class="wp-block-list">
<li><em>dpkg --print-architecture</em> dynamically retrieves the system architecture from your system (e.g., amd64, arm64).</li>



<li>This ensures that the correct package version for your system&#8217;s architecture is used.</li>
</ul>
</li>



<li><em>signed-by=/etc/apt/keyrings/docker.asc</em> specifies the location of the docker GPG key <em>docker.asc</em> (we installed it before), which is used to verify the authenticity of packages downloaded from the repository.</li>



<li><em>https://download.docker.com/linux/ubuntu</em> the URL of Docker’s official repository.</li>



<li><em>$(lsb_release -cs)</em> dynamically fetches the codename of the Ubuntu version (e.g., jammy for Ubuntu 22.04).<br>
<ul class="wp-block-list">
<li>This ensures that the correct repository for the current Ubuntu version is used.</li>
</ul>
</li>



<li><em>stable</em> specifies that we are using the stable release channel of Docker.</li>
</ul>



<p><strong> | sudo tee /etc/apt/sources.list.d/docker.list</strong></p>



<ul class="wp-block-list">
<li>The | (pipe) takes the output of echo and passes it to the <em>tee</em> command.</li>



<li><em>sudo tee /etc/apt/sources.list.d/docker.list</em> does the following:<br>
<ul class="wp-block-list">
<li>tee writes the output to a file (<em>/etc/apt/sources.list.d/docker.list</em>).</li>



<li>sudo is required because writing to <em>/etc/apt/sources.list.d/</em> requires root privileges.</li>
</ul>
</li>
</ul>



<p><strong>&gt; /dev/null</strong></p>



<ul class="wp-block-list">
<li>The <em>> /dev/null</em> part discards the standard output of the tee command.<br>
<ul class="wp-block-list">
<li>This prevents unnecessary output from being displayed in the terminal.</li>



<li>Without this, tee would both write to the file and display the text on the screen.<br></li>
</ul>
</li>
</ul>
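


<p>You can dry-run the same pipeline against a file under <em>/tmp</em> to see exactly what ends up in <em>docker.list</em>. The architecture and codename are hardcoded here for illustration; on a real system the <em>$(…)</em> substitutions fill them in:</p>



<pre class="wp-block-code"><code>#write an illustrative repository line to a temporary file
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable" | tee /tmp/docker.list &gt; /dev/null

#inspect the result
cat /tmp/docker.list
</code></pre>



<p>The file contains the single <em>deb</em> line, which is exactly what <em>apt</em> reads from <em>/etc/apt/sources.list.d/docker.list</em>.</p>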



<h3 class="wp-block-heading">Install Docker from Docker Resources</h3>



<p>Now, update the package list again and install Docker.</p>



<pre class="wp-block-code"><code>sudo apt update

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin
</code></pre>



<p>This command installs the following Docker key components using the <em>apt</em> package manager on Ubuntu (the <em>-y</em> option automatically answers &#8220;yes&#8221; to all prompts, so the install runs without asking for confirmation).</p>



<p><strong>docker-ce</strong></p>



<ul class="wp-block-list">
<li>Docker Community Edition</li>



<li>This is the core Docker Engine, the daemon that runs containers.</li>



<li>Installs the Docker server that manages images, containers, volumes, and networks.</li>
</ul>



<p><strong>docker-ce-cli</strong></p>



<ul class="wp-block-list">
<li>Docker Command-Line Interface</li>



<li>This is the docker command you use in your terminal (e.g., docker run, docker ps, etc.).</li>



<li>Separates the CLI from the engine so they can be updated independently.</li>
</ul>



<p><strong>containerd.io</strong></p>



<ul class="wp-block-list">
<li>Container runtime</li>



<li>A lightweight, powerful runtime for containers, used internally by Docker.</li>



<li>Handles the actual low-level execution of containers.</li>
</ul>



<p><strong>docker-buildx-plugin</strong></p>



<ul class="wp-block-list">
<li>BuildKit-powered Docker build plugin</li>



<li>Adds docker buildx functionality for advanced builds, multi-arch images, and caching strategies.</li>



<li>Useful when building complex container images.<br></li>
</ul>



<p><strong>Note:</strong> In some documentation the <em>sudo apt install…</em> command also includes the <em>docker-compose-plugin</em>. The docker-compose-plugin is not required here because we are using the standalone docker-compose package (see below). The docker-compose-plugin is integrated into the Docker CLI and can replace the docker-compose standalone binary. We use the standalone version because of the lightweight minimal install, the backward compatibility and the easy, independent manual version control. </p>



<p>It is highly recommended to omit the <em>docker-compose-plugin</em> from your apt install command if you plan to install the standalone Docker Compose binary manually, as we will do later. Having both versions installed can cause confusion, especially if scripts assume one or the other. Also, Docker might prioritize the plugin version in newer setups, which might cause conflicts in our preferred standalone Docker Compose setup. The following table illustrates the problem: the command styles differ only slightly. </p>



<figure class="wp-block-table"><table><thead><tr><th>
				Type
			</th><th>
				Command Style
			</th><th>
				Notes
			</th></tr></thead><tbody><tr><td>
				Plugin version
			</td><td>
				<code>docker compose</code>
			</td><td>
				Comes as <code>docker-compose-plugin</code>, tied to Docker CLI
			</td></tr><tr><td>
				Standalone version
			</td><td>
				<code>docker-compose</code>
			</td><td>
				Installed separately, as an independent binary
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker Compose plugin versus standalone</figcaption></figure>



<p>In case the docker-compose-plugin has been installed on your system you can remove it with the following command:</p>



<pre class="wp-block-code"><code>sudo apt remove docker-compose-plugin
</code></pre>



<p>This removes the plugin version that provides the <em>docker compose</em> command. Once we install the standalone version of Docker Compose, we invoke it with a <em>dash</em> instead of a <em>space</em>: <em>docker-compose</em>. </p>



<p>Verify that Docker is installed correctly:</p>



<pre class="wp-block-code"><code>sudo docker --version
</code></pre>



<p>Enable and start the Docker service:</p>



<pre class="wp-block-code"><code>sudo systemctl enable docker
sudo systemctl start docker
</code></pre>



<p>Test Docker by running the hello-world image.</p>



<pre class="wp-block-code"><code>sudo docker run hello-world
</code></pre>



<p>This command is a quick test to verify that Docker is installed and working correctly. </p>



<p><strong> sudo</strong></p>



<ul class="wp-block-list">
<li>Runs the command with superuser (root) privileges.</li>



<li>Required unless your user is in the docker group.</li>



<li>Docker needs elevated permissions to communicate with the Docker daemon (which runs as root).</li>
</ul>
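


<p>If you do not want to prefix every Docker command with <em>sudo</em>, you can check whether your user is already in the <em>docker</em> group and, if not, add it. This is a sketch; keep in mind that membership in the docker group effectively grants root-equivalent rights:</p>



<pre class="wp-block-code"><code>#check whether the current user is in the docker group
if id -nG | grep -qw docker; then echo "in docker group"; else echo "not in docker group"; fi

#if not, add yourself and log out and back in afterwards:
#  sudo usermod -aG docker $USER
</code></pre>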



<p><strong>docker</strong></p>



<ul class="wp-block-list">
<li>The main Docker CLI (Command-Line Interface) tool.</li>



<li>Used to interact with Docker Engine to manage containers, images, networks, volumes, etc.</li>
</ul>



<p><strong>run</strong></p>



<ul class="wp-block-list">
<li>Tells Docker to create a new container and start it based on the <em> hello-world</em> image you specify.<br>It does the following:<br>
<ul class="wp-block-list">
<li>Pulls the image (if it&#8217;s not already downloaded).</li>



<li>Creates a new container from that image.</li>



<li>Starts and runs the container.</li>



<li>Outputs the result and then exits (for short-lived containers like hello-world).</li>
</ul>
</li>
</ul>



<p><strong> hello-world</strong></p>



<ul class="wp-block-list">
<li>This is the name of the Docker image.</li>



<li>It&#8217;s an official image maintained by Docker, specifically designed to test Docker installations.<br></li>
</ul>



<h3 class="wp-block-heading">Install standalone docker-compose from Docker Resources</h3>



<p>Installing the standalone <em>docker-compose</em> is useful when you:</p>



<ul class="wp-block-list">
<li>Need compatibility with legacy tools or scripts</li>



<li>Want to control the exact version</li>



<li>Prefer a lightweight, portable binary<br></li>
</ul>



<p>The following command downloads the latest standalone <em>docker-compose</em> binary and saves it to a system-wide location.</p>



<pre class="wp-block-code"><code>sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
</code></pre>



<p> Let&#8217;s break down the command step by step:</p>



<p><strong>sudo</strong></p>



<ul class="wp-block-list">
<li>Runs the command with root privileges.<br>
<ul class="wp-block-list">
<li>Required because /usr/local/bin is a protected directory that only root can write to.</li>
</ul>
</li>
</ul>



<p><strong>curl</strong></p>



<ul class="wp-block-list">
<li>A command-line tool used to download files from the internet.</li>
</ul>



<p><strong>-L</strong></p>



<ul class="wp-block-list">
<li>Tells curl to follow redirects.</li>



<li>GitHub uses redirects for release URLs, so this flag ensures the final binary is actually downloaded.</li>
</ul>



<p><strong>"https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)"</strong></p>



<p>This is the dynamic download URL for the latest <em>docker-compose</em> release; the shell substitutions ensure the correct binary for your system is downloaded.</p>



<ul class="wp-block-list">
<li><em>$(uname -s)</em> returns the operating system name (e.g., Linux, Darwin).</li>



<li><em>$(uname -m)</em> returns the architecture (e.g., x86_64, arm64).</li>
</ul>



<p>Example Output: <em>https://github.com/docker/compose/releases/latest/download/docker-compose-Linux-x86_64</em></p>
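


<p>You can preview which file name the substitutions produce on your machine before downloading anything:</p>



<pre class="wp-block-code"><code>#print the OS name and the architecture
uname -s
uname -m

#assemble the file name exactly as in the download URL
echo "docker-compose-$(uname -s)-$(uname -m)"
</code></pre>



<p>On a typical Ubuntu server this prints something like <em>docker-compose-Linux-x86_64</em>.</p>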



<p><strong> -o /usr/local/bin/docker-compose </strong></p>



<ul class="wp-block-list">
<li>Tells curl to write (<em>-o</em> option; output) the downloaded file to <em>/usr/local/bin/docker-compose</em></li>



<li>This is a standard location for user-installed binaries that are globally available in the system PATH.<br></li>
</ul>



<p>After running the command above, make the binary executable:</p>



<pre class="wp-block-code"><code>sudo chmod +x /usr/local/bin/docker-compose
</code></pre>



<p>And then check the version to confirm it worked:</p>



<pre class="wp-block-code"><code>docker-compose --version
</code></pre>



<p><strong>Note:</strong> In some cases it might be necessary to switch to the <em>docker compose plugin</em>. This means you must remove the standalone version from your system and install the <em>docker compose plugin</em> instead. Here is how you would proceed in such a scenario:</p>



<pre class="wp-block-code"><code>#find out where docker-compose has been installed
which docker-compose
/usr/bin/docker-compose

#remove file docker-compose 
sudo rm /usr/bin/docker-compose

#install the docker compose plugin via apt
sudo apt install docker-compose-plugin
</code></pre>



<h3 class="wp-block-heading">Final Verification</h3>



<p>Check if everything is installed correctly:</p>



<pre class="wp-block-code"><code>docker --version
docker-compose --version
node -v
npm -v
</code></pre>



<h3 class="wp-block-heading">Host-Docker-Setup for nodejs app behind nginx</h3>



<p>The code of the nodejs app is explained in detail in the article <em><a href="https://digitaldocblog.com/webdesign/sample-bootstrap-website-running-as-nodes-app/" title="Sample Bootstrap Website running as nodejs app">Bootstrap Website running as nodejs app</a></em>. </p>



<p>We have a simple website <em>funtrails</em> built as a one-page <em>index.html</em> with two sections: one showing pictures of a Paterlini bike and one showing pictures of a Gianni Motta bike. Each section contains an image gallery and text. The images are stored in an <em>images</em> directory. </p>



<pre class="wp-block-code"><code>.
├── images
└── index.html
</code></pre>



<p>Now we want to make this <em>funtrails</em> website run as a nodejs app behind an nginx reverse proxy. Both the <em>funtrails</em> nodejs app and <em>nginx</em> should run in Docker containers composed to work together. Therefore we create the following file structure on the Host machine:</p>



<pre class="wp-block-code"><code>node
├── funtrails
│   └── views
│       ├── images
│       └── index.html
└── nginx
</code></pre>



<p>We copy all our web content into the <em>views</em> directory under <em>funtrails</em>. The nodejs app is built in the main file <em>app.js</em>. All dependencies for the nodejs app are defined in the file <em>package.json</em>.  </p>



<pre class="wp-block-code"><code>node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
</code></pre>



<p>In the <em>Terminal</em> go into the <em>node/funtrails</em> directory. Install the dependencies.</p>



<pre class="wp-block-code"><code>npm install
</code></pre>



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>funtrails
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
</code></pre>



<p>Go to <em>node/funtrails</em>. Run a test with the following command. </p>



<pre class="wp-block-code"><code>node app.js

nodejs funtrails demo app listening on port 8080!
</code></pre>



<p>Switch to your browser and open <em>http://localhost:8080</em> to see if all is working as expected. With <em>Ctrl+C</em> in the terminal you can stop the app.</p>



<p>The <em>nginx</em> server will be started with the configuration of a <em>nginx.conf</em> file. The file <em>nginx.conf</em> will be created under <em>nginx</em>.</p>



<p><strong>Note:</strong> This file <em>nginx.conf</em> is only for testing and will be changed when we add the SSL/TLS certificates. In the configuration below the <em>nginx</em> server listens on port 80 and passes requests received on port 80 to the <em>nodejs</em> app listening on port 8080. Serving traffic only on port 80 is not suitable for production, as these connections are not encrypted. In production we need a port 443 connection with SSL/TLS for encrypted connections.  </p>



<pre class="wp-block-code"><code>#nginx.conf

events {}

http {
	#Service node-app from docker-compose.yml
    upstream node_app {
        server node-app:8080;  
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
</code></pre>



<p><strong>events {}</strong></p>



<ul class="wp-block-list">
<li>This is a required block in NGINX configuration, even if empty.</li>



<li>It handles connection-related events (like concurrent connections), but you don&#8217;t need to configure it unless you have advanced use cases.</li>
</ul>



<p><strong>http {</strong></p>



<ul class="wp-block-list">
<li>Starts the HTTP configuration block — this is where you define web servers, upstreams, headers, etc.</li>
</ul>



<p><strong>upstream</strong></p>



<ul class="wp-block-list">
<li>Defines a group of backend servers (can be one or many).</li>



<li><em>node_app</em> is just a name.</li>



<li>Inside: server <em>node-app:8080;</em> means:<br>
<ul class="wp-block-list">
<li>Forward traffic to the container with hostname <em>node-app</em></li>



<li>Use port 8080 (that&#8217;s where your <em>nodejs</em> app listens)</li>



<li><em>node-app</em> should match the docker-compose service name (<em>node-app</em> must be declared in your docker-compose.yml which will be explained below).</li>



<li>This lets you use <em>proxy_pass http://node_app</em> later, instead of hardcoding an IP or port.</li>
</ul>
</li>
</ul>



<p><strong>server</strong></p>



<ul class="wp-block-list">
<li>Defines a virtual server (a website or domain).</li>



<li>listen 80; tells NGINX to listen for HTTP (port 80) traffic.</li>
</ul>



<p><strong>location</strong></p>



<ul class="wp-block-list">
<li>Defines a rule for requests to / (the root URL of your site).</li>



<li>You could add more location blocks for <em>/api, /images, etc.</em> if needed.</li>



<li>Inside: <br>
<ul class="wp-block-list">
<li>proxy_pass:<br>
<ul class="wp-block-list">
<li><em>proxy_pass http://node_app;</em> tells NGINX to forward requests to the backend defined in upstream <em>node_app</em></li>



<li>So: if you go to <em>http://yourdomain.com/</em> NGINX proxies that to <em>http://node-app:8080</em></li>
</ul>
</li>
</ul>
</li>



<li><em>proxy_set_header</em> (see table)</li>
</ul>



<figure class="wp-block-table"><table><thead><tr><th>
				Header
			</th><th>
				Meaning
			</th></tr></thead><tbody><tr><td>
				<code>Host</code>
			</td><td>
				Preserves the original domain name from the client
			</td></tr><tr><td>
				<code>X-Real-IP</code>
			</td><td>
				The client’s real IP address
			</td></tr><tr><td>
				<code>X-Forwarded-For</code>
			</td><td>
				A list of all proxies the request passed through
			</td></tr><tr><td>
				<code>X-Forwarded-Proto</code>
			</td><td>
				Tells backend whether the request was via HTTP or HTTPS
			</td></tr></tbody></table><figcaption class="wp-element-caption">Nginx Proxy Variables</figcaption></figure>



<p>The <em>$variables</em> in <em>nginx.conf</em> are built-in variables that NGINX provides automatically. They are dynamically set based on the incoming HTTP request. So these variables come from the NGINX core HTTP module and you don’t need to define them or import anything. They are always available in the config. </p>



<p>Here&#8217;s what each one is and where it comes from:</p>



<p><strong>$host</strong></p>



<ul class="wp-block-list">
<li>The value of the Host header in the original HTTP request.</li>



<li>Example: If the user visits http://example.com, then $host is example.com.</li>



<li>Use case: Tells the backend app what domain the client used — useful for apps serving multiple domains.</li>
</ul>



<p><strong>$remote_addr</strong></p>



<ul class="wp-block-list">
<li>The IP address of the client making the request.</li>



<li>Example: If someone from IP 203.0.113.45 visits your site, this variable is set to 203.0.113.45.</li>



<li>Use case: Useful for logging, rate limiting, or geolocation in the backend app.</li>
</ul>



<p><strong>$proxy_add_x_forwarded_for</strong></p>



<ul class="wp-block-list">
<li>A composite header that appends the client&#8217;s IP to the existing X-Forwarded-For header.</li>



<li>Use case: Maintains a full list of proxy hops (useful if your request goes through multiple reverse proxies).</li>



<li>If X-Forwarded-For is already set (by another proxy), it appends $remote_addr to it; otherwise, it sets it to $remote_addr.</li>
</ul>



<p><strong>$scheme</strong></p>



<ul class="wp-block-list">
<li>The protocol used by the client to connect to NGINX — either http or https.</li>



<li>Example: If the user visits https://example.com, then $scheme is https.</li>



<li>Use case: Lets your backend know whether the original request was secure or not.<br></li>
</ul>



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>node
├── funtrails
│   ├── app.js
│   ├── package.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
</code></pre>



<h3 class="wp-block-heading">Create Docker Image and Container for nodejs app </h3>



<p>First we create the Docker image with <em>docker build</em>. Then we run the container from the image with <em>docker run</em> to test if everything is working as expected. If everything goes well we can go ahead with the <em>nginx</em> configuration and then compose everything together with <em>docker-compose</em>. </p>



<p>The <em>dockerization</em> process of the nodejs app in <em>node/funtrails</em> directory is controlled by the <em>Dockerfile</em> which will be created in <em>node/funtrails</em>. The <em>dockerization</em> process has the following steps.</p>



<ol class="wp-block-list">
<li>Image creation</li>



<li>Container creation from the image<br></li>
</ol>



<p>The Container can then be started, stopped and removed using terminal commands.</p>
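


<p>For reference, the basic lifecycle commands look like this. This is a sketch: <em>node-demo</em> is the image name we build below, and the container name <em>node-demo-app</em> is our own choice:</p>



<pre class="wp-block-code"><code>#run a container named node-demo-app from the node-demo image,
#detached (-d) and with host port 8080 mapped to container port 8080
sudo docker run -d -p 8080:8080 --name node-demo-app node-demo

#stop and remove the container
sudo docker stop node-demo-app
sudo docker rm node-demo-app
</code></pre>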



<p>Go into <em>node/funtrails</em>.</p>



<p>First we get an overview and check the Docker status of the system.</p>



<p>List all images. There are no images on the system yet.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID   CREATED   SIZE  
</code></pre>



<p>List all Containers. As expected there are no Containers on the system.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
</code></pre>



<p>Show an overview of Docker images, containers and volumes on the system. As expected there are no Containers and no Images.</p>



<pre class="wp-block-code"><code>sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          0         0         0B        0B
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     16        0         47.39MB   47.39MB
</code></pre>



<p>Create a <em>Dockerfile</em> in <em>node/funtrails/Dockerfile</em>. This file is required to build the image and run the Container.</p>



<pre class="wp-block-code"><code>#Dockerfile

#Image build
#Use nodejs 12.22.9 as the base image for the container
FROM node:12.22.9

#Set the working directory inside the container
WORKDIR /home/node/app

#Change owner and group of the container working directory to node
RUN chown node:node /home/node/app

#Run the container as the user node
USER node

#Copy all files from the host directory into the container working directory
COPY --chown=node:node . .

#(After COPY) Install the dependencies into the image
RUN npm install

#Container start
#Document that the app inside the container listens on port 8080
EXPOSE 8080

#Run this command when the container starts
CMD &#91; "node", "app.js" ]
</code></pre>



<p>Create a <em>.dockerignore</em> file at <em>node/funtrails/.dockerignore</em>. The hidden <em>.dockerignore</em> file excludes files on the host from the build context, so they are not copied into the image by <em>COPY . .</em> </p>



<pre class="wp-block-code"><code>#.dockerignore
node_modules
</code></pre>



<p><strong>Note:</strong> <em>.dockerignore</em> applies to the whole build context, not just to <em>COPY . .</em> Files listed there are never sent to the Docker daemon, so even an explicit <em>COPY &lt;file.1&gt; &lt;file.2&gt;</em> fails if the source file is listed in <em>.dockerignore</em>, because the file is missing from the build context.</p>
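<p>A <em>.dockerignore</em> file can contain more than plain names: comments, glob patterns and negations are supported. A hypothetical, slightly richer example (the file names are made up):</p>

```text
# exclude dependencies, logs and version control from the build context
node_modules
*.log
.git
# re-include a single file despite the pattern above
!keep-this.log
```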



<p>We have the following structure on the Host machine.</p>



<pre class="wp-block-code"><code>funtrails
├── Dockerfile
├── .dockerignore
├── app.js
├── node_modules
├── package.json
├── package-lock.json
└── views
    ├── images
    └── index.html
</code></pre>



<p>Stay in <em>node/funtrails</em>. </p>



<p><strong>Build the Docker image</strong> from the <em>Dockerfile</em> with the image name <em>node-demo</em>. The dot (.) at the end sets the current directory on the host machine as the build context for the <em>docker</em> command. This is also the location where <em>docker</em> looks for the <em>Dockerfile</em> to build the image.</p>



<pre class="wp-block-code"><code>sudo docker build -t node-demo .
</code></pre>



<p>List all Docker images. 1 image was just created.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
node-demo    latest    c353353f045e   22 seconds ago   944MB
</code></pre>



<p><strong>Run the container</strong> from the image <em>node-demo</em> and give the container the name <em>funtrails-solo-demo</em>. The <em>-d</em> flag runs the container in the background and <em>-p 8080:8080</em> maps host port 8080 to container port 8080.</p>



<pre class="wp-block-code"><code>sudo docker run -d -p 8080:8080 --name funtrails-solo-demo node-demo
</code></pre>



<p>List all Docker containers with the option <em>-a</em>. One container with the name <em>funtrails-solo-demo</em> is running from the image <em>node-demo</em>.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
</code></pre>



<p>Access the running app on port 8080.</p>



<pre class="wp-block-code"><code>sudo curl http://localhost:8080
</code></pre>



<p>If everything went well, the terminal shows the HTML code of the page and the test was successful.</p>



<p>Stop the running Docker container with the name <em>funtrails-solo-demo</em>.</p>



<pre class="wp-block-code"><code>sudo docker stop funtrails-solo-demo
</code></pre>



<p>List all containers with the option <em>-a</em>. The container <em>funtrails-solo-demo</em> from the image <em>node-demo</em> is now in state <em>EXITED</em>.</p>



<pre class="wp-block-code"><code>sudo docker ps -a
</code></pre>



<p>List all images. Still 1 image available.</p>



<pre class="wp-block-code"><code>sudo docker image ls

REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
node-demo    latest    c353353f045e   36 hours ago   944MB
</code></pre>



<p>Show an overview of Docker images and containers on the system: 1 active image and 1 inactive container. The status of the container is <em>EXITED</em>, as we have seen above.</p>



<pre class="wp-block-code"><code>sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          1         1         944MB     0B (0%)
Containers      1         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     18        0         47.39MB   47.39MB
</code></pre>



<p>To clean up your system use the following commands.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Target
			</th><th>
				Command
			</th></tr></thead><tbody><tr><td>
				Delete exited containers
			</td><td>
				<code>sudo docker container prune</code>
			</td></tr><tr><td>
				Delete unused images
			</td><td>
				<code>sudo docker image prune</code>
			</td></tr><tr><td>
				Delete unused volumes
			</td><td>
				<code>sudo docker volume prune</code>
			</td></tr><tr><td>
				Complete Housekeeping (attention!)
			</td><td>
				<code>sudo docker system prune -a</code>
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker commands for clean up</figcaption></figure>



<p>The full clean up (be careful).</p>



<pre class="wp-block-code"><code>sudo docker system prune -a

sudo docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          0         0         0B        0B
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B
</code></pre>



<h3 class="wp-block-heading">Configure docker-compose</h3>



<p>Go back to the <em>node</em> directory and create a <em>docker-compose.yml</em> file there.</p>



<p>Then we have the following structure.</p>



<pre class="wp-block-code"><code>node
├── docker-compose.yml
├── funtrails
│   ├── Dockerfile
│   ├── .dockerignore
│   ├── app.js
│   ├── node_modules
│   ├── package.json
│   ├── package-lock.json
│   └── views
│       ├── images
│       └── index.html
└── nginx
    └── nginx.conf
</code></pre>



<p><em>docker-compose</em> is a tool that helps you define and run multi-container Docker applications using a YAML file. Instead of running multiple <em>docker run</em> commands, you describe everything in one file and start it all with the command <em>docker-compose up</em>.</p>



<p>Create <em>docker-compose.yml</em> with the following content.</p>



<pre class="wp-block-code"><code>#docker-compose.yml

services:
  funtrails:
    build: ./funtrails
    container_name: funtrails 
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
</code></pre>



<p><strong>services</strong></p>



<p>This section defines the containers that make up your app.</p>



<ul class="wp-block-list">
<li>funtrails<br>
<ul class="wp-block-list">
<li>build: ./funtrails<br>Builds the image from the Dockerfile inside the ./funtrails directory.</li>



<li>container_name: funtrails<br>Names the container funtrails instead of a random name.</li>



<li>networks: funtrails-network<br>Connects the container to a custom user-defined network. </li>
</ul>
</li>



<li>nginx
<ul class="wp-block-list">
<li>image: nginx:latest<br>Uses the official latest NGINX image.</li>

<li>container_name: nginx-proxy<br>The container will be named nginx-proxy.</li>

<li>ports: "80:80"<br>Maps host port 80 to container port 80.</li>

<li>volumes:<br>Mounts your local nginx.conf into the container, read-only (:ro).</li>

<li>depends_on: funtrails<br>Ensures funtrails is started before nginx.</li>

<li>networks: funtrails-network<br>Both services are in the same network, so they can communicate by name.</li>
</ul>
</li>
</ul>



<p><strong>networks</strong></p>



<ul class="wp-block-list">
<li>Creates a custom bridge network named funtrails-network.</li>



<li>Ensures containers can resolve each other by name (funtrails, nginx).<br></li>
</ul>



<p><strong>Note:</strong> We are using the official NGINX image directly (<em>image: nginx:latest</em>). This image is prebuilt and includes everything NGINX needs to run. </p>



<p>We don&#8217;t need to write a custom Dockerfile because we don’t  want to:</p>



<ul class="wp-block-list">
<li>Add extra modules</li>



<li>Customize the image beyond just the config</li>



<li>Install additional tools</li>



<li>Include SSL certs directly, etc.</li>
</ul>



<p>Instead, we simply mount our own <em>nginx.conf</em> into the container using a volume. This tells Docker <em>Use the official NGINX image, but replace its config file with mine</em>. We would use a Dockerfile in the <em>nginx</em> directory if we need to build a custom NGINX image, for example to copy SSL certs directly into the image.</p>



<p>Example:</p>



<pre class="wp-block-code"><code>FROM nginx:latest
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./certs /etc/nginx/certs
</code></pre>



<p>But for most use cases like reverse proxying a nodejs app, just mounting your own config file is perfectly sufficient and simpler.</p>



<p><strong>Note:</strong> We integrate SSL in the next chapter using free <em>Let&#8217;s Encrypt</em> certificates.</p>



<h3 class="wp-block-heading">Integrate SSL certificates &#8211; free Let&#8217;s Encrypt</h3>



<p>To integrate SSL we need to do the following steps:</p>



<ol class="wp-block-list">
<li>Prepare your Domain</li>



<li>Install <em>certbot</em></li>



<li>Create Let&#8217;s Encrypt SSL certificates</li>



<li>Adapt your <code>node/docker-compose.yml</code></li>



<li>Adapt your <code>node/nginx/nginx.conf</code></li>



<li>Create a cron-Job to renew SSL certificates<br></li>
</ol>



<p><strong>prepare the domain</strong></p>



<p>You must own a domain like <em>example.com</em> and you must have access to your DNS server to adapt the <em>A record</em>. In this example I create a subdomain <em>funtrails.example.com</em> and add an <em>A record</em> on my DNS server for <em>funtrails.example.com</em> that points to the server&#8217;s IP address.</p>
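<p>In a classic DNS zone file such an <em>A record</em> looks like this (domain and IP address are placeholders; with most DNS providers you enter the same values in a web form):</p>

```text
; A record: name, TTL, class, type, IPv4 address
funtrails.example.com.    3600    IN    A    203.0.113.10
```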



<p><strong>install certbot</strong></p>



<p>To obtain our SSL certificates we use a tool called <em>certbot</em>. We install <em>certbot</em> with <code>apt</code> on our Linux machine.</p>



<pre class="wp-block-code"><code>sudo apt update
sudo apt install certbot
</code></pre>



<p><strong>create Let&#8217;s Encrypt SSL certificates</strong></p>



<p>We create the SSL certificates with <em>certbot</em>. The <em>--standalone</em> option starts a temporary web server on port 80 for the domain validation, so no other service may be bound to port 80 while the command runs.</p>



<pre class="wp-block-code"><code>sudo certbot certonly --standalone -d funtrails.example.com

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): &lt;your-email&gt;@funtrails.example.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.5-February-24-2025.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
(Y)es/(N)o: Y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for funtrails.example.com

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/funtrails.example.com/fullchain.pem

Key is saved at:         /etc/letsencrypt/live/funtrails.example.com/privkey.pem

This certificate expires on 2025-07-25.

These files will be updated when the certificate renews.

Certbot has set up a scheduled task to automatically renew this certificate in the background.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
</code></pre>



<p>To use SSL you need </p>



<ul class="wp-block-list">
<li>a server certificate (e.g. certificate.crt)</li>



<li>a private key (e.g. private.key) and </li>



<li>the CA certificate (e.g. ca.crt).</li>
</ul>



<figure class="wp-block-table"><table><thead><tr><th>
				File
			</th><th>
				Description
			</th><th>
				Comment
			</th></tr></thead><tbody><tr><td>
				<strong>private.key</strong>
			</td><td>
				Private secret key &#8211; keep this key strictly secret !
			</td><td>
				Only your server knows this key
			</td></tr><tr><td>
				<strong>certificate.crt</strong>
			</td><td>
				Your server certificate (proves your identity)
			</td><td>
				Issued by the CA (Let&#8217;s Encrypt)
			</td></tr><tr><td>
				<strong>ca.crt / chain.crt</strong>
			</td><td>
				The certificate chain up to the root CA
			</td><td>
				So that clients trust your certificate
			</td></tr></tbody></table><figcaption class="wp-element-caption">SSL standard certificates</figcaption></figure>



<p><em>certbot</em> creates these files (as symlinks) in the following directory on the host server.</p>



<pre class="wp-block-code"><code>/etc/letsencrypt/live/funtrails.example.com
</code></pre>



<pre class="wp-block-code"><code>sudo ls -l /etc/letsencrypt/live/funtrails.example.com

cert.pem -&gt; ../../archive/funtrails.example.com/cert1.pem
chain.pem -&gt; ../../archive/funtrails.example.com/chain1.pem
fullchain.pem -&gt; ../../archive/funtrails.example.com/fullchain1.pem
privkey.pem -&gt; ../../archive/funtrails.example.com/privkey1.pem
</code></pre>



<p>The translation to the standard is as follows.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				File
			</th><th>
				Description
			</th></tr></thead><tbody><tr><td>
				<code>privkey.pem</code>
			</td><td>
				Your private key (= private.key)
			</td></tr><tr><td>
				<code>cert.pem</code>
			</td><td>
				Your server certificate (= certificate.crt)
			</td></tr><tr><td>
				<code>chain.pem</code>
			</td><td>
				The CA certificates (= ca.crt)
			</td></tr><tr><td>
				<code>fullchain.pem</code>
			</td><td>
				Server certificate + CA chain together
			</td></tr></tbody></table><figcaption class="wp-element-caption">Let&#8217;s Encrypt translation to standard SSL certificates</figcaption></figure>



<p><strong>adapt node/docker-compose.yml</strong></p>



<p>The <em>docker-compose.yml</em> will be adapted as follows.</p>



<pre class="wp-block-code"><code>services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network

  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - funtrails
    networks:
      - funtrails-network

networks:
  funtrails-network:
    driver: bridge
</code></pre>



<p>We create a bridge network with the name <em>funtrails-network</em> and both services run in this network. This is important so that the services can reach each other by name. </p>



<p>The <em>funtrails</em> service is rebuilt from the <em>Dockerfile</em> in <em>./funtrails</em>. For the <em>nginx</em> service the official <em>nginx</em> image is pulled in the latest version. Host ports 80 and 443 are mapped to container ports 80 and 443. When the container starts, we mount the host file <em>./nginx/nginx.conf</em> and the SSL certificates under <em>/etc/letsencrypt</em> into the container. Both are mounted read-only.</p>



<pre class="wp-block-code"><code>...
volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
...
</code></pre>



<p>With the <em>depends_on</em> directive we declare that first the <em>funtrails</em> service must be started and then <em>nginx</em>. </p>
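<p>Note that this short form of <em>depends_on</em> only controls the start order, not whether the app is actually ready to serve requests. If that matters, Docker Compose also supports a long form combined with a healthcheck. A sketch (the healthcheck command is an assumption; it presumes the app answers HTTP on port 8080):</p>

```yaml
services:
  funtrails:
    build: ./funtrails
    healthcheck:
      # hypothetical check: succeed once the app answers HTTP on port 8080
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:8080', r => process.exit(r.statusCode < 500 ? 0 : 1)).on('error', () => process.exit(1))"]
      interval: 30s
      timeout: 5s
      retries: 3

  nginx:
    depends_on:
      funtrails:
        condition: service_healthy
```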



<p><strong>adapt node/nginx/nginx.conf</strong></p>



<p>The file will be adapted as follows.</p>



<pre class="wp-block-code"><code>events {
  worker_connections 1024; 
}

http {

    server {
      listen 80;
      server_name funtrails.example.com;

      # Redirect HTTP -&gt; HTTPS
      return 301 https://$host$request_uri;
    }

   server {
    listen 443 ssl;
    server_name funtrails.example.com;

    ssl_certificate /etc/letsencrypt/live/funtrails.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/funtrails.example.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://funtrails:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

</code></pre>



<p>The <em>events{}</em> block is required in an NGINX configuration, even if empty. It handles connection-related settings (like concurrent connections), but you don&#8217;t need to configure it unless you have advanced use cases. Here I allow up to 1024 concurrent connections per worker process. </p>



<p>Within the <em>http</em> block we have 2 virtual server blocks. The first server block defines that the server <em>funtrails.example.com</em> listens on port 80 (HTTP), but all requests to this port are immediately redirected to port 443 (HTTPS). The second server block defines that the server <em>funtrails.example.com</em> also listens on port 443 (HTTPS), followed by the locations of the SSL certificate and SSL key on the local host and the protocol definition. </p>



<p>The location block defines a rule for requests to / (the root URL of your site). You could add more location blocks, e.g. for /api, /images, etc., if needed. In this config we skip the upstream block and write <em>proxy_pass</em> directly. <em>proxy_pass</em> tells NGINX to forward requests to port 8080 of the backend service defined in <em>docker-compose.yml</em>. This backend service is reachable under the name <em>funtrails</em>, which is both its service name and its <em>container_name</em> in <em>docker-compose.yml</em>. </p>



<pre class="wp-block-code"><code>...
services:
  funtrails:
    build: ./funtrails
    container_name: funtrails
    networks:
      - funtrails-network
... 
</code></pre>



<p>Docker Compose creates an internal Docker network <em>funtrails-network</em>, and all services can reach each other by their service name(s) as hostname(s). So nginx can resolve <em>funtrails</em> because it&#8217;s part of the same Docker network (no need for a manual upstream block).</p>
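<p>If you later want to route other paths differently, additional location blocks can be added inside the same server block. A hypothetical example that forwards /api to the same backend (the path and behavior are assumptions, not part of this setup):</p>

```nginx
# hypothetical: forward /api requests to the funtrails backend as well
location /api/ {
    proxy_pass http://funtrails:8080/api/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```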



<p>These other <em>$variables</em> come from NGINX&#8217;s core HTTP module, so we don’t need to define them. They are always available in the config. </p>



<p><strong>$host</strong></p>



<ul class="wp-block-list">
<li>What it is: The value of the Host header in the original HTTP request.</li>



<li>Example: If the user visits http://example.com, then $host is example.com.</li>



<li>Use case: Tells the backend app what domain the client used — useful for apps serving multiple domains.</li>
</ul>



<p><strong>$remote_addr</strong></p>



<ul class="wp-block-list">
<li>What it is: The IP address of the client making the request.</li>



<li>Example: If someone from IP 203.0.113.45 visits your site, this variable is set to 203.0.113.45.</li>



<li>Use case: Useful for logging, rate limiting, or geolocation in the backend app.</li>
</ul>



<p><strong>$proxy_add_x_forwarded_for</strong></p>



<ul class="wp-block-list">
<li>What it is: A composite header that appends the client&#8217;s IP to the existing X-Forwarded-For header.</li>



<li>Use case: Maintains a full list of proxy hops (useful if your request goes through multiple reverse proxies).</li>



<li>How it works: If X-Forwarded-For is already set (by another proxy), it appends $remote_addr to it; otherwise, it sets it to $remote_addr.</li>
</ul>



<p><strong>$scheme</strong></p>



<ul class="wp-block-list">
<li>What it is: The protocol used by the client to connect to NGINX — either http or https.</li>



<li>Example: If the user visits https://example.com, then $scheme is https.</li>



<li>Use case: Lets your backend know whether the original request was secure or not.<br></li>
</ul>
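<p>To illustrate how the backend can use these forwarded headers, here is a small, hypothetical Node helper (the function name and returned field names are made up for this sketch):</p>

```javascript
// Hypothetical helper for a Node backend running behind the NGINX proxy
function clientInfo(req) {
  // X-Forwarded-For can be a comma-separated list; the first entry is the original client
  const forwarded = (req.headers['x-forwarded-for'] || '').split(',')[0].trim();
  return {
    host: req.headers['host'],                            // domain the client used
    clientIp: forwarded || req.socket.remoteAddress,      // real client IP, not the proxy's
    secure: req.headers['x-forwarded-proto'] === 'https'  // was the original request HTTPS?
  };
}

module.exports = { clientInfo };
```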



<p><strong>Create a cron job to renew Let&#8217;s Encrypt SSL certificates</strong></p>



<p>Let&#8217;s Encrypt SSL certificates expire after 90 days. <em>certbot</em> can renew your certificates. To automate the renewal you can create a cronjob on your host machine. </p>



<p><strong>Note:</strong> Sometimes cron doesn&#8217;t know where docker-compose is located (because the environment variables are missing). Therefore, it&#8217;s safer to use the full paths in crontab. You check the relevant paths as follows:</p>



<pre class="wp-block-code"><code>which docker-compose
/usr/bin/docker-compose

which certbot
/usr/bin/certbot
</code></pre>



<p>Then create a cronjob in the crontab of the user root (use sudo):</p>



<pre class="wp-block-code"><code>sudo crontab -e 

0 3 * * *	/usr/bin/certbot renew --quiet &amp;&amp; /usr/bin/docker-compose restart nginx
</code></pre>



<p>With <em>sudo crontab -e </em> you create a crontab for the user root. All commands within the root crontab will be executed with root privileges.  </p>



<p><em>certbot renew</em> checks all certificates for expiration dates and automatically renews them.</p>



<p><em>docker-compose restart nginx</em> ensures that nginx is restarted so that it picks up the new certificates. Otherwise, nginx would keep using the old certificates even though new ones are available. The command has the form <em>docker-compose restart &lt;service-name&gt;</em>; you specify the service name from <em>docker-compose.yml</em>, not the container name. Note that <em>docker-compose</em> must be able to find the <em>docker-compose.yml</em>, so run the command from the directory containing it or pass the file explicitly with <em>-f</em>.</p>
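<p>Alternatively, <em>certbot</em> offers a <em>--deploy-hook</em> option that runs a command only when a certificate was actually renewed, instead of restarting nginx on every run. The crontab line could then look like this (a sketch; <em>/path/to/node</em> is a placeholder for the directory containing your <em>docker-compose.yml</em>):</p>

```text
0 3 * * * /usr/bin/certbot renew --quiet --deploy-hook "cd /path/to/node && /usr/bin/docker-compose restart nginx"
```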



<p><strong>Note:</strong> If you call <em>crontab -e</em> (without <em>sudo</em>) you edit your own user crontab. That crontab runs under your user, not as root, and so do all its tasks. When <em>certbot renew</em> renews SSL certificates, it must write into the directories under <em>/etc/letsencrypt/</em> on your machine. These directories are owned by root, so the job fails when the crontab runs under a normal user. One might think the commands in a normal user&#8217;s crontab could simply be prefixed with <em>sudo</em>. But if such a job runs and <em>sudo</em> is used in the command, <em>sudo</em> attempts to prompt for a password, and there is no terminal in cron where you could enter one. The command therefore fails (an error in the log, nothing happens). It is essential here to edit the crontab of the user root with <em>sudo crontab -e</em>. </p>



<p>Finally you can check your own crontab or the root crontab as follows.</p>



<pre class="wp-block-code"><code>crontab -l
no crontab for patrick

sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command

0 3 * * * /usr/bin/certbot renew --quiet &amp;&amp; /usr/bin/docker-compose restart nginx
 
</code></pre>



<p>You can check the logs for the renewal process using the following command.</p>



<pre class="wp-block-code"><code>sudo cat /var/log/letsencrypt/letsencrypt.log
</code></pre>



<h3 class="wp-block-heading">Start the Containers with docker-compose</h3>



<p>Navigate to the directory with <em>docker-compose.yml</em>. Then use the following commands.</p>



<pre class="wp-block-code"><code>sudo docker-compose build

sudo docker-compose up -d
</code></pre>



<p>The command <em>docker-compose build</em> reads the Dockerfile for each service with a <em>build</em> directive in <em>docker-compose.yml</em> and builds the Docker image accordingly. The command <em>docker-compose up -d</em> runs the containers and the network. This starts all services defined in the <em>docker-compose.yml</em> and links them via the defined Docker network. The -d flag runs the containers in the background (detached mode).</p>



<p>Then you can check the status using the following commands.</p>



<pre class="wp-block-code"><code>sudo docker-compose ps

sudo docker ps
</code></pre>



<p>Here is an overview of the most important commands.</p>



<figure class="wp-block-table"><table><thead><tr><th>
				Command
			</th><th>
				Purpose
			</th></tr></thead><tbody><tr><td>
				<code>docker-compose build</code>
			</td><td>
				Build all images from Dockerfiles
			</td></tr><tr><td>
				<code>docker-compose up -d</code>
			</td><td>
				Start containers in the background
			</td></tr><tr><td>
				<code>docker-compose ps</code>
			</td><td>
				See status of containers
			</td></tr><tr><td>
				<code>docker-compose down</code>
			</td><td>
				Stop and remove all containers
			</td></tr><tr><td>
				<code>docker-compose logs -f</code>
			</td><td>
				Follow logs of all services
			</td></tr></tbody></table><figcaption class="wp-element-caption">Docker-compose commands</figcaption></figure>



<h3 class="wp-block-heading">How to manage Changes made to the application code</h3>



<p>When we make changes to the app code, e.g. in <em>node/funtrails/app.js</em> or in <em>node/funtrails/Dockerfile</em>, we need to rebuild the image for the <em>funtrails</em> service defined in <em>node/docker-compose.yml</em>. In such a change scenario it is not necessary to stop the containers with <em>docker-compose down</em> before you rebuild the image with <em>docker-compose build</em>.</p>



<p>You can rebuild and restart only the <em>funtrails</em> service with the following commands.</p>



<pre class="wp-block-code"><code>docker-compose build funtrails

docker-compose up -d funtrails
</code></pre>



<p>This will:</p>



<ul class="wp-block-list">
<li>Rebuild the <em>funtrails</em> image</li>



<li>Stop the old <em>funtrails</em> container (if running)</li>



<li>Start a new container using the updated image</li>



<li>Without affecting other services like <em>nginx</em><br></li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Create a virtual Hacking Lab on Apple Silicon Mac</title>
		<link>https://digitaldocblog.com/mac/create-a-virtual-hacking-lab-on-apple-silicon-mac/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 12 Apr 2023 05:36:31 +0000</pubDate>
				<category><![CDATA[Mac OS]]></category>
		<category><![CDATA[Homebrew]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=214</guid>

					<description><![CDATA[When you are a cyber security consultant, a pen tester or even a system operator with technical interest then you want to perform attacks on systems to understand exactly how&#8230;]]></description>
										<content:encoded><![CDATA[
<p>If you are a cyber security consultant, a pen tester or a system operator with technical interest, you will want to perform attacks on systems to understand exactly how hackers work in real life and which vulnerabilities are exploited to take control of a system. If you know how hackers attack your system, you can protect it better. </p>



<p>Under no circumstances should you launch attacks on systems on the internet, even if the systems belong to you, because from a legal point of view it is always a risk and you can get into trouble. Instead, build an isolated environment in your local network in which you install attacker and victim systems. It is important that you only use private IP addresses and that the environment does not route any traffic to or from the internet. This environment is your <strong>Hacking Lab</strong>, in which you can safely try out different attack scenarios.</p>



<p>The idea is to install virtualization software on the operating system of the host computer, which in my case is an M2 Apple Mac with Ventura 13.3. The virtualization software I use is <a href="https://mac.getutm.app" title="UTM for Mac">UTM</a>. UTM creates the virtual guest machines in a virtual LAN (VLAN) isolated from the host operating system and the local network. The virtual guest machines can talk to each other but cannot access the local network or the internet via the host. The host can talk to the virtual guest machines. This network mode is called <strong>host only</strong> mode.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img fetchpriority="high" decoding="async" width="3950" height="1141" src="https://digitaldocblog.com/wp-content/uploads/2023/04/01-network.png" alt="Network setup for the Hacking Lab" class="wp-image-211" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/01-network.png 3950w, https://digitaldocblog.com/wp-content/uploads/2023/04/01-network-300x87.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/01-network-1024x296.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/01-network-768x222.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/01-network-1536x444.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/01-network-2048x592.png 2048w" sizes="(max-width: 3950px) 100vw, 3950px" /><figcaption class="wp-element-caption">Network setup for the Hacking Lab</figcaption></figure>
</div>


<p><em>Note: UTM provide various network operating modes. Details can be read in the <a href="https://docs.getutm.app/settings-qemu/devices/network/network/" title="UTM network modes">UTM documentation</a>.</em></p>



<p>The installation of UTM can be done in two ways:</p>



<ol class="wp-block-list">
<li>Paid version in the Apple <a href="https://apps.apple.com/de/app/utm-virtual-machines/id1538878817?mt=12" title="UTM in the App Store">App Store</a></li>



<li>Free download from the <a href="https://mac.getutm.app" title="Download UTM">UTM site</a></li>
</ol>



<p>I recommend installing the paid version from the App Store. You will get all updates whenever updates are available and you support the development of this fantastic tool. At the time of writing this article the paid version of UTM cost 11.99 euros.</p>



<p>Once UTM has been installed, we need to create the attacker and the victim as virtual guest machines in UTM. The attacker is a <a href="https://www.kali.org/get-kali/#kali-installer-images" title="Get Kali Linux Images">Kali Linux image</a>; the victim will be a host prepared with vulnerabilities from <a href="https://www.vulnhub.com" title="Vulnerable Hub. Hosts to be attacked">VulnHub</a>. </p>



<p>On VulnHub you find vulnerable virtual machines. These machines are prepared by cyber security enthusiasts for other security enthusiasts and are created, with vulnerabilities built in, specifically for the purpose of hacking them. </p>



<h3 class="wp-block-heading">Create a vulnerable virtual machine in UTM</h3>



<p>To create a victim in UTM, you search for a machine that best suits your needs and download it from VulnHub, where it is available in the <a href="https://docs.fileformat.com/disc-and-media/ova/" title="OVA File format explained">.ova format</a>. OVA is basically a tar-based archive that contains, among other things, an <a href="https://docs.fileformat.com/disc-and-media/ovf/" title=".ovf File format">.ovf file</a> with the specification of the virtual machine and, in most cases, disk image files in the .vdi or .vmdk format. We don&#8217;t consider the other files contained in the .ova archive.</p>
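<p>To illustrate that an .ova really is just a tar archive, the following sketch builds a mock .ova from dummy files and lists its contents without extracting it (all file names are placeholders, not the real Mr-Robot files):</p>



<pre class="wp-block-code"><code># An .ova is an ordinary tar archive: build a mock one from
# dummy files, then list its contents with tar -tf.
mkdir -p /tmp/ova-demo
cd /tmp/ova-demo
touch demo.ovf demo.mf demo-disk1.vmdk
tar -cf demo.ova demo.ovf demo.mf demo-disk1.vmdk
tar -tf demo.ova
</code></pre>



<p>The same <code>tar -tf</code> call works on a downloaded .ova, so you can check what is inside before unpacking it.</p>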



<p>On VulnHub we <a href="https://www.vulnhub.com/?q=Robot" title="search for Robot">search for Robot</a> and get 3 search results. We look for <a href="https://www.vulnhub.com/entry/mr-robot-1,151/" title="Mr-Robot: 1 vulnerable machine">Mr-Robot: 1</a>, click on it, and find the link to download the .ova file of the vulnerable virtual machine.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1.jpeg" alt="Find Mr-Robot: 1 on VulnHub" class="wp-image-201" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1.jpeg 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1-300x195.jpeg 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1-1024x666.jpeg 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1-768x499.jpeg 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1-1536x999.jpeg 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/02-Vulnerable-Hub-Search-Robot-1-2048x1332.jpeg 2048w" sizes="(max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Find Mr-Robot: 1 on VulnHub</figcaption></figure>
</div>


<p>To install Mr-Robot: 1 as a virtual machine in UTM, we first must unpack the .ova file.</p>



<pre class="wp-block-code"><code>patrick % ls
total 2884016
mrRobot.ova

patrick % tar -xvf mrRobot.ova 
x mrRobot.ovf
x mrRobot.mf
x mrRobot-disk1.vmdk

patrick % ls
total 2884016
mrRobot-disk1.vmdk mrRobot.mf mrRobot.ova mrRobot.ovf

</code></pre>



<p>Then we convert the disk image file mrRobot-disk1.vmdk into the .qcow2 format, which is the format supported by UTM. If you need more detailed information, there is a very good explanation on <a href="https://www.xmodulo.com/convert-ova-to-qcow2-linux.html" title="How to convert ova to qcow2">Xmodulo</a>.</p>



<p>To convert the disk image into .qcow2 we need the utility qemu-img. I install the complete <a href="https://formulae.brew.sh/formula/qemu#default" title="Qemu Emulator on homebrew">qemu emulator</a> from <a href="https://formulae.brew.sh" title="Homebrew">homebrew</a>, as the qemu utilities are not available individually. If you don&#8217;t know homebrew, or in case homebrew is not installed on your Mac, just go to my blog site <a href="https://digitaldocblog.com" title="Digitaldocblog">digitaldocblog</a> and read <a href="https://digitaldocblog.com/mac/how-to-setup-homebrew-on-mac-os/" title="How to setup homebrew on a Mac">the article</a> on how to install and use homebrew on your Mac.</p>



<pre class="wp-block-code"><code>patrick %  brew install qemu

patrick % ls
total 2884016
mrRobot-disk1.vmdk mrRobot.mf mrRobot.ova mrRobot.ovf

patrick % qemu-img convert -O \
qcow2 mrRobot-disk1.vmdk mrRobot.qcow2

patrick % ls
total 2884016
mrRobot-disk1.vmdk mrRobot.mf mrRobot.ova mrRobot.ovf
mrRobot.qcow2

</code></pre>



<p>Then you start UTM from the main menu and click on Create a New Virtual Machine.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine.jpeg" alt="UTM main menu" class="wp-image-209" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine.jpeg 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine-300x195.jpeg 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine-1024x666.jpeg 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine-768x499.jpeg 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine-1536x999.jpeg 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/03-UTM-Create-Virt-Machine-2048x1332.jpeg 2048w" sizes="(max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">UTM main menu</figcaption></figure>
</div>


<p>Then you click on <strong>Emulate</strong>. You click on Emulate because the virtual machine you are going to install is compiled for the x86-64 Intel CPU architecture. This Intel CPU must be emulated by UTM so that the host can run as a guest on your Apple Silicon Mac (M1 or M2). In the next chapter, when we install the attacker Linux, we choose a virtual machine that is compiled for the Apple ARM64 Silicon architecture; there we click Virtualize, because that guest machine is compiled for your host architecture (M1 or M2).</p>



<p>After Emulate you click on <strong>Custom</strong>. Then you <strong>skip ISO boot</strong>, accept all the following default values, and assign a name fitting your needs to your new virtual host. At the end, you complete the configuration with Save.</p>



<p>In the left panel you see the virtual machine you just created. Mark it and click on the <strong>settings menu</strong> at the top right side.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2.png" alt="Virtual Machine Settings" class="wp-image-199" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/04-UTM-virtual-machine-settings-2-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Virtual Machine Settings</figcaption></figure>
</div>


<p>In the left panel of the settings, go to the <strong>Drives section</strong> and delete the IDE drive. Stay in the Drives section and create a new drive: click on Import and select the .qcow2 file on your hard disk (the one you created when you converted the .vmdk file). After this step, go to the <strong>QEMU section</strong> and unselect UEFI Boot. To set the correct network settings, go to the <strong>Network section</strong> and select Host Only. Finally, click on Save to end the configuration.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1.png" alt="Virtual machine network settings" class="wp-image-203" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/07-UTM-network-1-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Virtual machine network settings</figcaption></figure>
</div>


<h3 class="wp-block-heading">Create the attacker virtual machine in UTM</h3>



<p>To launch attacks against a vulnerable virtual host, I recommend installing a standard Kali Linux machine in UTM. Most tools are already pre-installed, and you can start immediately.</p>



<p>We go to the kali.org website and there into the section <a href="https://www.kali.org/get-kali/" title="Get Kali on Kali.org">get Kali</a>. We choose Installer Images and then select Apple Silicon (ARM64) to download the Kali build compiled especially for the Apple Silicon M2 architecture (if you have an M1, this also works fine). Click on the recommended installer and download the iso image to your hard drive.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1.png" alt="Kali Linux Installer for ARM64" class="wp-image-204" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/08-Kali-Installer-Image-1-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Kali Linux Installer for ARM64</figcaption></figure>
</div>


<p>After you have downloaded the iso image, the installation is performed in 2 steps. First you create a new virtual machine in virtualization mode, and then in step 2 you install Kali Linux on it.</p>



<p>Click on the + sign in the top menu and then select Virtualize.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1.png" alt="Install virtual machine in virtualization mode" class="wp-image-206" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/09-Install-Virtual-Machine-Virtualization-Mode-1-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Install virtual machine in virtualization mode</figcaption></figure>
</div>


<p>After that you can select the OS you want to install. Here you click on Linux, and in the Boot ISO Image section you select the iso image you downloaded from Kali.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5.png" alt="Select the iso image in the Boot-Iso-Image section" class="wp-image-196" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/10-Select-ISO-Image-5-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Select the iso image in the Boot-Iso-Image section</figcaption></figure>
</div>


<p>Then you click continue and keep the default value for the RAM but set the CPU cores to 4.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3.png" alt="Set CPU Cores to 4" class="wp-image-198" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/11-CPU-Cores-To-4-3-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Set CPU Cores to 4</figcaption></figure>
</div>


<p>In the following you can keep all default values, e.g. for storage, and always click on Continue until you come to the Summary section. Here you can give your new virtual machine a name according to your needs. Finally, click on Save.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1.png" alt="Give your virtual machine a name" class="wp-image-208" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/12-Summary-Virtual-Host-1-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">Give your virtual machine a name</figcaption></figure>
</div>


<p>Then click on the settings to open the settings menu. Go to the Devices section, click on New, and add a new serial device emulation. This is very important for the installation of Kali Linux on your virtual machine; otherwise the installer will not start.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2208" height="1436" src="https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5.png" alt="New Serial Device Emulation" class="wp-image-197" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5.png 2208w, https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5-300x195.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5-1024x666.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5-768x499.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5-1536x999.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/13-New-Serial-Device-Emulation-5-2048x1332.png 2048w" sizes="auto, (max-width: 2208px) 100vw, 2208px" /><figcaption class="wp-element-caption">New Serial Device Emulation</figcaption></figure>
</div>


<p>Run the virtual machine to start the installation. You will see that 2 windows open. Choose the window that says Terminal 1 and start the installation; don&#8217;t choose the graphical installation. Then follow the installation steps. There is a very good <a href="https://www.youtube.com/watch?v=9zdjQ9w_v_4" title="How To Install Kali Linux 2022 On M1 / M2 Mac Using UTM">video on YouTube</a> that guides you through the standard installation.</p>



<p>Once you see the finish installation screen, use the navigation bar of the Terminal 1 window and shut down the virtual machine. Close the second window that was opened when you started the installation.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="1288" height="924" src="https://digitaldocblog.com/wp-content/uploads/2023/04/14-Installation-Completed-1.png" alt="Installation Completed. Shut down." class="wp-image-205" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/14-Installation-Completed-1.png 1288w, https://digitaldocblog.com/wp-content/uploads/2023/04/14-Installation-Completed-1-300x215.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/14-Installation-Completed-1-1024x735.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/14-Installation-Completed-1-768x551.png 768w" sizes="auto, (max-width: 1288px) 100vw, 1288px" /><figcaption class="wp-element-caption">Installation Completed. Shut down.</figcaption></figure>
</div>


<p>Then you go to the main window, select the virtual machine you just installed, and unmount the iso at the bottom of the page.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2006" height="1480" src="https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1.png" alt="Unmount the iso" class="wp-image-200" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1.png 2006w, https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1-300x221.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1-1024x755.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1-768x567.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1-1536x1133.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/15-Unmount-iso-1-120x90.png 120w" sizes="auto, (max-width: 2006px) 100vw, 2006px" /><figcaption class="wp-element-caption">Unmount the iso</figcaption></figure>
</div>


<p>Open the settings, go to the Devices section, and remove the serial device you created before. To set the correct network settings, go to the <strong>Network section</strong> and select Host Only. Finally, click on Save to end the configuration.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2006" height="1440" src="https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1.png" alt="Remove serial device" class="wp-image-202" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1.png 2006w, https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1-300x215.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1-1024x735.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1-768x551.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/16-remove-serial-device-1-1536x1103.png 1536w" sizes="auto, (max-width: 2006px) 100vw, 2006px" /><figcaption class="wp-element-caption">Remove serial device</figcaption></figure>
</div>


<h3 class="wp-block-heading">Check the network configuration</h3>



<p>Run both virtual machines in UTM: Robot1VulnerableHost and the attacker machine KaliLinux. When you run Robot1VulnerableHost, you see the logon screen after the machine has booted. Of course you don&#8217;t have login credentials, because you want to hack into it instead of logging in. This is different on your KaliLinux: here you created an account during the installation.</p>



<p>Log into KaliLinux and open a terminal. Check the ip address of KaliLinux. The ip address is <strong>192.168.128.5</strong>.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2558" height="1440" src="https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali.png" alt="Check the IP Address of KaliLinux" class="wp-image-210" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali.png 2558w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-300x169.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-1024x576.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-768x432.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-1536x865.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-2048x1153.png 2048w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-1140x641.png 1140w, https://digitaldocblog.com/wp-content/uploads/2023/04/17-check-IP-Kali-540x304.png 540w" sizes="auto, (max-width: 2558px) 100vw, 2558px" /><figcaption class="wp-element-caption">Check the IP Address of KaliLinux</figcaption></figure>
</div>
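<p>A quick way to read the address inside the Kali guest is the <code>ip</code> utility; the interface name on your machine may differ from mine:</p>



<pre class="wp-block-code"><code># Show all IPv4 addresses; look for the 192.168.128.x entry
ip -4 addr show

# Compact view: one line per address with interface name
ip -4 -o addr show | awk '{print $2, $4}'
</code></pre>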


<p>Then discover the network 192.168.128.0/24 to find other hosts in the same subnet. Type the following command (netdiscover needs root privileges, so run it with sudo if you are not logged in as root).</p>



<pre class="wp-block-code"><code>
kaliLinux % netdiscover -r 192.168.128.0/24

</code></pre>



<p>You see 2 hosts in your subnet: <strong>192.168.128.1</strong>, which is the bridge between your local host network and your vlan, and <strong>192.168.128.6</strong>, which is Robot1VulnerableHost on the same subnet.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2558" height="1440" src="https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan.png" alt="Discover the virtual network" class="wp-image-212" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan.png 2558w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-300x169.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-1024x576.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-768x432.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-1536x865.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-2048x1153.png 2048w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-1140x641.png 1140w, https://digitaldocblog.com/wp-content/uploads/2023/04/18-netdiscover-vlan-540x304.png 540w" sizes="auto, (max-width: 2558px) 100vw, 2558px" /><figcaption class="wp-element-caption">Discover the virtual network</figcaption></figure>
</div>


<p>You can ping Robot1VulnerableHost, but you cannot ping the host computer, which is my Mac on subnet 192.168.0.0/24 with the ip address 192.168.0.38. So the bridge does not route traffic from your vlan into the local network, which is exactly what we want: the guests can talk to each other, but they cannot talk to the outside world.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2558" height="1440" src="https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1.png" alt="Guests can talk with each other but not with the host" class="wp-image-207" srcset="https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1.png 2558w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-300x169.png 300w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-1024x576.png 1024w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-768x432.png 768w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-1536x865.png 1536w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-2048x1153.png 2048w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-1140x641.png 1140w, https://digitaldocblog.com/wp-content/uploads/2023/04/19-ping-1-540x304.png 540w" sizes="auto, (max-width: 2558px) 100vw, 2558px" /><figcaption class="wp-element-caption">Guests can talk with each other but not with the host</figcaption></figure>
</div>


<p>Then I ping the virtual guests from my Mac. Here you see that the host can talk to them, so the network setup of the Hacking Lab is correct.</p>



<pre class="wp-block-code"><code>patrick % ping 192.168.128.5
PING 192.168.128.5 (192.168.128.5): 56 data bytes
64 bytes from 192.168.128.5: icmp_seq=0 ttl=64 time=0.979 ms
64 bytes from 192.168.128.5: icmp_seq=1 ttl=64 time=0.568 ms
64 bytes from 192.168.128.5: icmp_seq=2 ttl=64 time=0.811 ms
64 bytes from 192.168.128.5: icmp_seq=3 ttl=64 time=0.718 ms
^C

patrick % ping 192.168.128.6
PING 192.168.128.6 (192.168.128.6): 56 data bytes
64 bytes from 192.168.128.6: icmp_seq=0 ttl=64 time=2.663 ms
64 bytes from 192.168.128.6: icmp_seq=1 ttl=64 time=2.298 ms
64 bytes from 192.168.128.6: icmp_seq=2 ttl=64 time=2.330 ms
64 bytes from 192.168.128.6: icmp_seq=3 ttl=64 time=2.310 ms
64 bytes from 192.168.128.6: icmp_seq=4 ttl=64 time=2.318 ms
^C

patrick %

</code></pre>
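<p>When you rebuild the lab later, a small shell loop saves typing the two pings by hand. The addresses are the ones from this lab; adjust them to your own subnet (note that on macOS the -W timeout is given in milliseconds rather than seconds):</p>



<pre class="wp-block-code"><code># Ping each guest once; ping -c 1 exits 0 when the host answers.
for guest in 192.168.128.5 192.168.128.6; do
  if ping -c 1 -W 2 "$guest" >/dev/null 2>/dev/null; then
    echo "$guest is up"
  else
    echo "$guest is down"
  fi
done
</code></pre>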
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Role Based Access Control using express-session in a node-js app</title>
		<link>https://digitaldocblog.com/webdesign/role-based-access-control-using-express-session-in-a-node-js-app/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 04 May 2021 12:00:00 +0000</pubDate>
				<category><![CDATA[Web-Development]]></category>
		<category><![CDATA[Webdesign]]></category>
		<category><![CDATA[Webserver]]></category>
		<category><![CDATA[Express.js]]></category>
		<category><![CDATA[HTML]]></category>
		<category><![CDATA[Java Script]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[Mongoose]]></category>
		<category><![CDATA[Node.js]]></category>
		<category><![CDATA[NPM Node package manager]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=141</guid>

					<description><![CDATA[In this article I refer to an application I created a couple of months ago. It&#8217;s about a booking system with which players can book ice-hockey trainings in different locations,&#8230;]]></description>
										<content:encoded><![CDATA[
<p>In this article I refer to an application I created a couple of months ago. It&#8217;s a booking system with which players can book ice-hockey training sessions in different locations, a coach can confirm participation in a training session, and a club manager can organize training sessions and bill the players for booked trainings. You can see the code on my <a href="https://github.com/prottlaender/bookingsystem" title="Node-Js Booking-System Code on GitHub Account of Patrick Rottlaender">GitHub Account</a> and read a detailed application description in the style of a user manual on my blog <a href="https://digitaldocblog.com/singleblog?article=9" title="Booking-System Application Description on Digitaldocblog Website owned by Patrick Rottlaender">Digitaldocblog</a>.</p>



<p>In my booking system I give users different roles in my app and depending on their role, the users have different authorizations. An <em>admin</em> for example is able to access more sensitive data and functionalities than a normal <em>player</em> or a <em>coach</em>. So my app must know the role of a user to assign different authorizations to the particular user.</p>



<p>Clients, usually browsers, send requests to the app. The app responds to requests and is solely responsible for ensuring that a client only has access to the data intended for it. This request-and-response game is based on the HTTP protocol. HTTP is a stateless network protocol, so requests cannot be related to each other: each request is isolated and unrelated to previous requests, and the server has no way to recognize clients and therefore does not know their role.</p>



<p>This problem can be solved with sessions and cookies and means that session management must be implemented in the application. The application creates a session and stores session data such as the role of a requestor in this session. The session has a unique ID and the app saves only this ID in a cookie. The cookie is transferred to the browser and stored locally there. From now on, the browser always sends this cookie with the HTTP request and thus identifies itself to the application. The application can check the role of the requestor in the stored session data and control the appropriate access.</p>



<h3 class="wp-block-heading">Basic setup of the server</h3>



<p>First we need a working server OS. I run Ubuntu Linux in production and have written an article about the <a href="https://digitaldocblog.com/singleblog?article=10" title="Basic Setup for Node-Js Apps running on Ubuntu Linux on Digitaldocblog Website owned by Patrick Rottländer ">basic setup of a production Linux server</a> on my blog site <a href="https://digitaldocblog.com/home?currpage=1" title="Digitaldocblog Website owned by Patrick Rottlaender">Digitaldocblog</a>. Since I am going to store the sessions in a MongoDB, MongoDB must be installed on the Linux server. I use the <em>MongoDB Community Edition</em>, but you can also install or upgrade to the <em>MongoDB Enterprise</em> Server version. In the lower part of this article you find instructions on how to install and set up the <em>MongoDB Community Edition</em> on your Linux system. In case you want to read the original documentation, go to the MongoDB site and read how to install the <a href="https://docs.mongodb.com/manual/administration/install-community/" title="MongoDB Community Edition Documentation">MongoDB Community Edition</a> for your OS.</p>



<p>In my express application I use a number of external modules or dependencies that have to be installed in order for the application to run. In the repository of the <a href="https://github.com/prottlaender/bookingsystem" title="Node-Js Booking-System Code on GitHub Account of Patrick Rottlaender">bookingsystem</a> on my <a href="https://github.com/prottlaender" title="GitHub Account of Patrick Rottlaender">GitHub account</a> you find the <a href="https://github.com/prottlaender/bookingsystem/blob/master/package.json" title="package.json file of Booking System on GitHub Account of Patrick Rottlaender">package.json</a> file, which contains all the necessary dependencies. In principle, it is sufficient to put this <em>package.json</em> file in your application main directory and install all dependencies with <code>npm install</code>.</p>



<p>Alternatively, of course, all modules can also be installed individually with </p>



<p><code>npm install &lt;module&gt; --save</code></p>



<h3 class="wp-block-heading">Session Management</h3>



<p>I discuss in the first part different code snippets in my application main file <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file of Booking System on GitHub Account of Patrick Rottlaender">booking.js</a>. The goal here is that you understand how session management is basically implemented.</p>



<pre class="wp-block-code"><code>// booking.js

// Load express module and create app
const express = require('express');
const app = express();
// Trust the first Proxy
app.set('trust proxy', 1);
// Load HTTP response header security module
const helmet = require('helmet');
// use secure HTTP headers using helmet with every request
app.use(
  helmet({
    frameguard: {
      action: "deny",
    },
    referrerPolicy: {
      policy: "no-referrer",
    },
  })
);
// Load envy module to manage environment variables
const envy = require('envy');
const env = envy();

// Set environment variables
const port = env.port
const host = env.host
const mongodbpath = env.mongodbpath
const sessionsecret = env.sessionsecret
const sessioncookiename = env.sessioncookiename
const sessioncookiecollection = env.sessioncookiecollection

// Load server side session and cookie module
const session = require('express-session');
// Load mongodb session storage module
const connectMdbSession = require('connect-mongodb-session');
// Create MongoDB session store object
const MongoDBStore = connectMdbSession(session)
// Create new session store in mongodb
const store = new MongoDBStore({
  uri: mongodbpath,
  collection: sessioncookiecollection
});
// Catch errors in case session store creation fails
store.on('error', function(error) {
  console.log(`error store session in session store: ${error.message}`);
});
// Use session to create session and session cookie
app.use(session({
  secret: sessionsecret,
  name: sessioncookiename,
  store: store,
  resave: false,
  saveUninitialized: false,
  // set cookie to 1 week maxAge
  cookie: {
    maxAge: 1000 * 60 * 60 * 24 * 7,
    sameSite: true
  },
}));

... //further code not taken into account at this point
</code></pre>



<p>I create a server application using the <a href="https://www.npmjs.com/package/express" title="Express-Js Web-Application Framework for node-js">Express-js</a> Web Application Framework. Therefore I load the Express-js module with the <code>require()</code> function and store the <code>express()</code> function in the constant <em>app</em>. Because my app runs behind a reverse proxy server, I set the app to trust the first proxy. Then I load the <a href="https://www.npmjs.com/package/helmet" title="Helmet Package for node-js">helmet module</a> to use secure response headers in my app. I configure frameguard so that browsers deny embedding my pages in iFrames, and I set the referrer policy so that my app sends no referrer in the response header. </p>



<p>I use the <a href="https://www.npmjs.com/package/envy" title="Envy module for node-js">envy module</a> in my application to manage environment variables. Therefore I load the module with <code>require()</code> and store the result of the <code>envy()</code> function in the constant <em>env</em>. With envy you define your environment variables in your <em>.env</em> and <em>.env.example</em> files. These files must be stored in the application main directory as explained in the envy documentation. </p>



<p>Since my booking app is a real web application running on a web server in production, I cannot disclose the real environment variables for security reasons. So let us see how this works with an example <em>.env</em> file. </p>



<pre class="wp-block-code"><code># .env

port=myport
host=myhost
mongodbpath=myexamplemongodbpath
sessionsecret=myexamplesecret
sessioncookiename=booking
sessioncookiecollection=col_sessions

</code></pre>



<p>These variables have different values in my real <em>.env</em> file. In the <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file above I define the constant <em>env</em> for the envy function with <code>env = envy()</code>. Then I have access to the environment variables defined in my <em>.env</em> file via <em>env.&lt;variable&gt;</em>. I define a constant for each variable and assign the value from the .env file with <code>env.&lt;variable&gt;</code>. These constants can then be used as values in the code. </p>



<p>I load the <a href="https://www.npmjs.com/package/express-session" title="Node-Js Express-Session Module">express-session</a> module and the <a href="https://www.npmjs.com/package/connect-mongodb-session" title="Node-Js Connect-MongoDB-Session Module">connect-mongodb-session</a> module with the <code>require()</code> function. The session module, stored in the constant <em>session</em>, takes over the entire control of session and cookie management. </p>



<p>The <em>connect-mongodb-session</em> module, stored in the constant <em>connectMdbSession</em>, is responsible for storing sessions in the database. That is why we pass <em>session</em> as a parameter in the code and assign the result to the constant <em>MongoDBStore</em>.</p>



<p><code>const MongoDBStore = connectMdbSession(session)</code> </p>



<p>With <code>new MongoDBStore</code> I create a new store object. Here I pass the <em>uri</em> of the MongoDB path and the <em>collection</em> where sessions should be stored. </p>



<pre class="wp-block-code"><code>// booking.js
...

const store = new MongoDBStore({
  uri: mongodbpath,
  collection: sessioncookiecollection
});

...

</code></pre>



<p>The store object initialized in this way contains all necessary parameters to successfully store a session object in my MongoDB database.</p>



<p>After we have defined the storage of the session object, we take care of the session object itself. </p>



<p>With <code>app.use(session( {... cookie: {...} }))</code> I create a session object with various options. A session object is created with each request and also contains a cookie object. I pass the values for <code>cookie: {...}</code> and further options such as <code>secret: sessionsecret</code>, the session object name with <code>name: sessioncookiename</code>, and the location where the session object should be stored with <code>store: store</code>. Furthermore the session object has the options <code>saveUninitialized: false</code> and <code>resave: false</code>. </p>
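<p>The <em>maxAge</em> value is given in milliseconds. As a quick sanity check (a standalone sketch, not part of <em>booking.js</em>), the one-week value multiplies out to the <em>originalMaxAge</em> that appears in the console output later in this article:</p>

```javascript
// Standalone sketch: the cookie maxAge used above, in milliseconds.
// 1000 ms * 60 s * 60 min * 24 h * 7 days = one week
const maxAge = 1000 * 60 * 60 * 24 * 7;
console.log(maxAge); // 604800000
```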



<p>When the <em>saveUninitialized</em> option is set to <em>false</em>, the session object is <em>not stored</em> in the store as long as the session is <strong>un-initialized</strong>. The option <code>resave: false</code> ensures that an <strong>initialized</strong> session is <em>not saved back</em> to the store when it was not modified during the request. So we must understand what <em>initialized</em> and <em>un-initialized</em> mean.</p>
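<p>To make the two options concrete, here is a simplified sketch of the decision the session middleware makes at the end of a request. This is my own illustration, not code from the <em>express-session</em> module, which handles more cases (for example rolling sessions):</p>

```javascript
// Simplified sketch (illustration only) of the end-of-request decision:
// should the session object be written to the store?
function shouldSaveToStore(session, options) {
  if (!session.existsInStore) {
    // New session: only save it once it has been modified ("initialized"),
    // unless saveUninitialized forces saving.
    return session.modified || options.saveUninitialized;
  }
  // Known session: only write it back if it changed, unless resave forces it.
  return session.modified || options.resave;
}

const opts = { saveUninitialized: false, resave: false };
console.log(shouldSaveToStore({ existsInStore: false, modified: false }, opts)); // false
console.log(shouldSaveToStore({ existsInStore: false, modified: true }, opts));  // true
console.log(shouldSaveToStore({ existsInStore: true, modified: false }, opts));  // false
```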



<p>A browser sends a request to the app. More precisely, the browser sends the request to a defined endpoint in the app. An endpoint defines a path within the app that reacts to HTTP requests and executes code. Depending on the HTTP method, GET or POST, the endpoint expects that the requestor wants a document back (GET) or that the requestor wants to send data to the app (POST).</p>



<p>In the example below the browser sends a GET request to the GET <em>home</em> endpoint. The endpoint renders the HTML template <em>index</em> and sends the HTML back to the browser. Then the request is finished. This process, which starts with the <em>request</em> and ends with the <em>response</em>, is the <em>runtime of the request</em>. </p>



<p>In the code snippet below you see two GET endpoints in the <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file: one for the <em>home</em> route and another one for the <em>register</em> route.  </p>



<pre class="wp-block-code"><code>// booking.js

... //further code not taken into account at this point

// GET home route only for anonym users. Authenticated users redirected to dashboard
app.get('/', redirectDashboard, (req, res) =&gt; {
	
  console.log(req.url);
  console.log(req.session.id);
  console.log(req.session);

  res.render('index', {
      title: 'User Login Page',
    });

});

// GET register route only for anonym users. Authenticated users redirected to dashboard
app.get('/register', redirectDashboard, (req, res) =&gt; {
  
  console.log(req.url);
  console.log(req.session.id);
  console.log(req.session);

  res.render('register', {
      title: 'User Registration Page',
    });
});

... //further code not taken into account at this point


</code></pre>



<p>The code with <code>app.use (session({ ... }))</code> in my <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file ensures that a session object is generated with each request. As long as a session object is not changed during the runtime of a request, a separate session is created for each request, each with its own session ID. The option <code>saveUninitialized: false</code> ensures that such a session object is not stored in the database. Each session object created in this way is <strong>un-initialized</strong>. </p>



<p>You can see the following output for the <em>home</em> route and the <em>register</em> route when we log the <em>path</em>, the <em>session-ID</em> and the <em>session object</em> on the console for each route.</p>



<pre class="wp-block-code"><code>/
BmbE8RVoTRcPP9nUnBm5JLE1w1mQiNyt
Session {
  cookie: {
    path: '/',
    _expires: 2021-04-24T04:27:04.265Z,
    originalMaxAge: 604800000,
    httpOnly: true,
    sameSite: true
  }
}

/register
awlPO-KpyVM51Gp6UAoeXGGmRWo-QFtP
Session {
  cookie: {
    path: '/',
    _expires: 2021-04-24T05:54:57.439Z,
    originalMaxAge: 604800000,
    httpOnly: true,
    sameSite: true
  }
}
  
</code></pre>



<p>The code of my app changes a session object during the runtime of a request by attaching a data object when a user has successfully logged in. I will explain the code in detail in the next chapter; at the moment it is enough to know this. So let us play through the login of a user.</p>



<p>The browser sends a GET request to the <em>home</em> route as explained above, the <em>index</em> template is rendered and the HTML page with the login form is sent back to the browser. During the runtime of this GET request a session object is created but not changed. We have already seen this above.</p>



<p>Then the user enters <em>email</em> and <em>password</em> in the login form and clicks submit. With this submit the browser sends a POST request to the POST endpoint <em>/loginusers</em>, and again a session object is generated for this request. During the runtime of the POST request, the code checks whether the transferred credentials are correct. If they are, a data object with user data is generated and attached to the session object: the session object is changed during the runtime of the POST request. The session created with the POST request is now <strong>initialized</strong>. Because the session is initialized, the option <code>saveUninitialized: false</code> no longer prevents it from being saved, and the session object is stored in the database store. When we look into the database store using the tool <a href="https://www.mongodb.com/products/compass" title="MongoDB Compass Management Console ">MongoDB Compass</a> we see that the entire session object has been saved in the <em>col_sessions</em> collection, including the data object containing the required data of the user.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2460" height="1126" src="https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object.png" alt="Session object saved in the col_sessions collection" class="wp-image-136" srcset="https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object.png 2460w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-300x137.png 300w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1024x469.png 1024w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-768x352.png 768w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1536x703.png 1536w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-2048x937.png 2048w" sizes="auto, (max-width: 2460px) 100vw, 2460px" /><figcaption>Session object saved in the col_sessions collection</figcaption></figure>
</div>


<p>After the session initialization, the code called by the POST endpoint redirects the request, so the browser sends a new GET request to the <em>/dashboard</em> route. The code with <code>app.use(session({ ... }))</code> is called again, but now an initialized session exists in the store. Because of the option <code>resave: false</code> the unchanged session object is not written back to the store; it is simply carried along with every further request.</p>



<p>You see this in the output on the console when we log the <em>path</em>, the <em>session-ID</em> and the <em>session object</em> for each route. The first output is created when the GET request is sent to the <em>home</em> route. Then, in the second output, after the user clicked submit, the POST route <em>/loginusers</em> is called and a new session object is created; you can see this from the different session IDs. During the runtime of this POST request the data object is added to the session object, which initializes the session. In the third output, the GET route <em>/dashboard</em> is called and we see the same session ID, but the session object now contains the data object with the user data.</p>



<pre class="wp-block-code"><code>/
TEAZITdX7nLWBDc8uOk2HhXIiMZO7W-4
Session {
  cookie: {
    path: '/',
    _expires: 2021-05-02T07:21:13.236Z,
    originalMaxAge: 604800000,
    httpOnly: true,
    sameSite: true
  }
}

/loginusers
gVlKut3bdEMiDHnK455FGjCi6YbPTBuZ
Session {
  cookie: {
    path: '/',
    _expires: 2021-05-02T07:21:35.202Z,
    originalMaxAge: 604800000,
    httpOnly: true,
    sameSite: true
  }
}

/dashboard
gVlKut3bdEMiDHnK455FGjCi6YbPTBuZ
Session {
  cookie: {
    path: '/',
    _expires: 2021-05-02T07:21:35.468Z,
    originalMaxAge: 604800000,
    httpOnly: true,
    secure: null,
    domain: null,
    sameSite: true
  },
  data: {
    userId: 5f716b7439777365c18639f1,
    status: 'active',
    name: 'Oskar David',
    lastname: 'Rottländer',
    email: 'oskar@test.com',
    role: 'player',
    age: 17,
    cat: 'youth'
  }
}

</code></pre>



<p>In summary, session management works as follows: a session object is created with each request, but it is only saved in the database once the user has logged in (<em>saveUninitialized: false</em>). As long as the user stays logged in and the session object is not changed, its data in the database is not updated (<em>resave: false</em>).</p>



<p><strong>But what happens to the cookie?</strong> This will be explained in the next chapter.</p>



<h3 class="wp-block-heading">User login</h3>



<p>When the session has been initialized, the cookie containing the session ID is stored in the browser of the requestor. With every request the browser provides the cookie to authenticate the requestor. To authenticate the requestor, the code <code>app.use(session({...}))</code> is called and compares the session ID sent by the browser with the session IDs stored in the session store. If a session ID matches, the session object including the data object is attached to the request object to give the app access to the data object. Within the app we now have access to any attribute of the data object with <em>req.session.data.&lt;attribute&gt;</em>. We can therefore implement role-based authorization by accessing the role of the requestor with <em>req.session.data.role</em> and using this information in conditions in the code to control access depending on the role of the requestor. </p>
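<p>A role check on top of <em>req.session.data.role</em> could look like the following middleware sketch. This is my own example and not part of <em>booking.js</em>; the factory name <em>requireRole</em> is made up:</p>

```javascript
// Hypothetical middleware factory: restrict a route to a single role.
// Assumes a data object has been attached to req.session at login.
function requireRole(role) {
  return function (req, res, next) {
    if (req.session.data && req.session.data.role === role) {
      // Requestor has the required role: continue to the routingHandler
      next();
    } else {
      res.status(403).redirect('/400badRequest?message=' + encodeURIComponent('Not authorized'));
    }
  };
}

// Usage in an Express app, e.g. a hypothetical admin-only route:
// app.get('/admin', requireRole('admin'), (req, res) => res.render('admin'))
```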



<p>But let&#8217;s start from the beginning with the login of the requestor, or user as I call the requestor from now on. In order for a user to be able to log in, he or she must first call the <strong>login page</strong>, which is displayed by calling up the home endpoint. </p>



<pre class="wp-block-code"><code>// booking.js

... // Code not discussed here

// Redirect GET requests from authenticated users to dashboard
const redirectDashboard = (req, res, next) =&gt; {
  if (req.session.data) {
    res.redirect('/dashboard')

  } else {
    next()

  }
}

... // Code not discussed here

// GET home route only for anonym users. Authenticated users redirected to dashboard

app.get('/', redirectDashboard, (req, res) =&gt; {

  res.render('index', {
      title: 'User Login Page',
    });

});

... // Code not discussed here

</code></pre>



<p>As you see in the code above, I first define the middleware function <em>redirectDashboard</em>. This middleware ensures that only users who are not logged in see the login page. If we look at the code of the middleware function, we see that <em>req.session.data</em> is used in an if-condition to check whether a data object is attached to the current session object. If the <strong>if-condition is true</strong>, the user is logged in and the request is redirected to the dashboard; if the <strong>if-condition is false</strong>, the user is not logged in and the <em>next()</em> function is called. </p>



<p>The GET endpoint has the <em>routingPath</em> of the home route. When a user visits the homepage of my booking application, the GET HTTP request asks for the home <em>routingPath</em>. The middleware function <em>redirectDashboard</em> is put in front of the <em>routingHandler</em> function. If the user is not logged in, the <em>routingHandler</em> function renders the HTML template <a href="https://github.com/prottlaender/bookingsystem/blob/master/views/index.pug" title="index.pug file">index.pug</a> and sends the HTML back to the user, or more precisely to the user&#8217;s browser. </p>



<p>So far so good. Now imagine a user who is not logged in, sees the index page in front of him or her, and wants to log in using his or her <em>email</em> and <em>password</em>.</p>



<p>As described above, the index page is nothing more than a login form for entering an email address and a password. When we look at the <a href="https://github.com/prottlaender/bookingsystem/blob/master/views/index.pug" title="index.pug file">index.pug</a> file, we see that the form action attribute defines that the form data <code>email</code> and <code>password</code> will be sent to the form handler <code>/loginusers</code> using the POST method when the submit button is clicked.</p>



<pre class="wp-block-code"><code>...

form#loginForm.col.s12(
		method='post', 
		action='/loginusers'
		)

		input.validate(
			type='email', 
			name='email', 
			autocomplete='username' 
			required
			)
		...

		input.validate(
			type='password', 
			name='password', 
			autocomplete='current-password' 
			required
			)
		...

button.btn.waves-effect.waves-light(
		type='submit', 
		form='loginForm'
		)
...

</code></pre>



<p><strong>Note</strong>: To understand the <em>autocomplete</em> attributes of the input tags I recommend reading the documentation of the <a href="https://www.chromium.org/developers/design-documents/form-styles-that-chromium-understands" title="The Chromium Project">Chromium Project</a>. Most browsers have password management functionality and automatically fill in the credentials after you provide a master password to unlock your local password store. By using these <em>autocomplete</em> attributes in login forms, but also in user registration forms or change password forms, you help browsers better identify these forms.</p>



<p>When the user has entered his or her <em>email</em> and <em>password</em> in the HTML form and clicked the submit button, the <strong>request body</strong> contains the <em>Form Data</em> attributes <em>email</em> and <em>password</em>. Then a POST HTTP request is sent via HTTPS to the POST endpoint <code>/loginusers</code> defined in my <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file (see above). </p>



<p>In the picture below you can see the output of the network analysis in the developer tools of the Chrome browser. You can see that the <em>Form Data</em> are not encrypted on the <strong>browser side</strong>, but you also see that the POST request URL <code>/loginusers</code> uses HTTPS. This means that when the browser sends the POST request to the server, these data are encrypted with SSL/TLS in transit from the browser to the server.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2002" height="612" src="https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data.png" alt="Form Data not encrypted on browser side" class="wp-image-137" srcset="https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data.png 2002w, https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data-300x92.png 300w, https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data-1024x313.png 1024w, https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data-768x235.png 768w, https://digitaldocblog.com/wp-content/uploads/2022/08/050-POST-route-request-with-form-data-1536x470.png 1536w" sizes="auto, (max-width: 2002px) 100vw, 2002px" /><figcaption>Form Data not encrypted on browser side</figcaption></figure>
</div>


<p>On the <strong>server side</strong> we have the web application behind a proxy server listening for HTTP requests addressed to the POST endpoint <code>/loginusers</code>. This POST endpoint is an anonym POST route, which means that the <em>routingHandler</em> controller function is restricted to not-logged-in users only. This makes sense because a login function must not be used by already logged-in users, so already logged-in users cannot send data to this POST endpoint. This check is performed by the middleware function <em>verifyAnonym</em>, which is put in front of the <em>routingHandler</em>.  </p>



<p>So lets look at the relevant code snippets in <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a>.</p>



<pre class="wp-block-code"><code>// booking.js

...

// Load db controllers and db models
const userController = require('./database/controllers/userC');

...

// Verify POST requests only for anonym users
const verifyAnonym = (req, res, next) =&gt; {

  if (req.session.data) {
    var message = 'You are not authorized to perform this request because you are already logged-in !';
    res.status(400).redirect('/400badRequest?message='+message);

  } else {
    next()

  }
}

...

// Anonym POST Route
// Login user available for anonym only
app.post('/loginusers', verifyAnonym, userController.loginUser)

...

// GET bad request route render 400badRequest
app.get('/400badRequest', (req, res) =&gt; {
 
  res.status(400).render('400badRequest', {
    title: 'Bad Request',
    code: 400,
    status: 'Bad Request',
    message: req.query.message,
  })
})

...

</code></pre>



<p>At the beginning of the code I assign the user controller file <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> to the constant <em>userController</em> using the <code>require()</code> function. In <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> all user functions are defined that control user-related operations.</p>



<p><strong>Note</strong>: When you look into the <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> file you see that we export modules using <code>module.exports = {...}</code>. With this directive we in fact export an object with various attributes, and the values of these attributes are functions. So with <code>module.exports = { loginUser: function(...) ...}</code> we export an object including the attribute <em>loginUser</em>, which contains a function as its value. When we assign the constant <em>userController</em> using the <code>require()</code> function in the <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file, we store the complete exported object with all its attributes in the <em>userController</em> constant. Now we have access to any attribute of the exported object from the <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> file with <em>userController.&lt;attribute&gt;</em>. Because the attributes are in fact functions, we call these functions with this statement.  </p>
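<p>The pattern can be reduced to a small self-contained sketch. The function bodies below are made up for illustration; in the real project the object is exported from <em>userC.js</em> and loaded with <code>require()</code>:</p>

```javascript
// Sketch of the export pattern used in userC.js (illustrative names only).
// In userC.js:    module.exports = userController;
// In booking.js:  const userController = require('./database/controllers/userC');
const userController = {
  loginUser: function (email) {
    return 'logging in ' + email;
  },
  logoutUser: function () {
    return 'logged out';
  },
};

// Each attribute of the exported object is a function, called via the object:
console.log(userController.loginUser('oskar@test.com')); // logging in oskar@test.com
```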



<p>In the <em>verifyAnonym</em> function, <em>req.session.data</em> is used in the if-condition to check whether a data object is attached to the current session object. If the <strong>if-condition is true</strong>, the user is already logged in and is redirected to the Bad Request GET endpoint <code>/400badRequest</code>, which is the standard route in my application to show the user that something went wrong. The user can see what went wrong from a message that has been attached to the request using the query parameter <code>message</code>. If the <strong>if-condition is false</strong>, the user is not logged in and the <em>next()</em> function forwards the request to the <em>routingHandler</em> controller function that calls the <code>loginUser</code> function via <em>userController.loginUser</em>. This function has access to the attributes <em>email</em> and <em>password</em> of the <strong>request body</strong> with <em>req.body.email</em> and <em>req.body.password</em>. </p>
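<p>One detail worth noting: the message travels as a plain query string. If a message ever contains reserved characters (spaces, ampersands, equals signs), it should be URL-encoded. Here is a small sketch of my own; <em>booking.js</em> itself concatenates the raw string:</p>

```javascript
// Sketch: building the redirect URL with the message URL-encoded.
const message = 'You are not authorized to perform this request because you are already logged-in !';
const url = '/400badRequest?message=' + encodeURIComponent(message);
console.log(url); // spaces become %20, so the query string stays intact
```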



<p>So lets look at the relevant code snippets in <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> file.</p>



<pre class="wp-block-code"><code>// database/controllers/userC.js

// load the bcryptjs module
const bcrypt = require('bcryptjs');
// define hash saltrounds for password hashing
const saltRounds = 10;
// load the relevant Prototype Objects (exported from the models)
...

const User = require('../models/userM');

...

loginUser: function (req, res, next) {

    const inputemail = req.body.email
    const email = inputemail.toLowerCase()

    console.log(req.url);
    console.log(req.session.id);
    console.log(req.session);

    try {

      User.findOne({ email: email }, async function(error, user) {
        if (!user) {
          var message = 'User not found. Login not possible';
          res.status(400).redirect('/400badRequest?message='+message);

        } else {
          if (user._status !== 'active') {
            var message = 'Login not possible. Await User to be activated';
            res.status(400).redirect('/400badRequest?message='+message);

          } else {
              if (bcrypt.compareSync(req.body.password, user.password)) {

                var yearInMs = 3.15576e+10;
                var currentDate = new Date ()
                var currentDateMs = currentDate.getTime()
                var birthDateMs = user.birthdate.getTime()
                var age = Math.floor((currentDateMs - birthDateMs) / yearInMs)

                if (age &lt; 18) {
                  var cat = 'youth'
                } else {
                  var cat = 'adult'
                };

                var userData = {
                  userId: user._id,
                  status: user._status,
                  name: user.name,
                  lastname: user.lastname,
                  email: user.email,
                  role: user.role,
                  age: age,
                  cat: cat,
                }

                req.session.data = userData

                res.status(200).redirect('/dashboard')

              } else {
                var message = 'Login not possible. Wrong User password';
                res.status(400).redirect('/400badRequest?message='+message);
              }
          }
        }
      })

    } catch (error) {
      // if user query fail call default error function
      next(error)

    }
  // End Module
  },

...

</code></pre>



<p>In order to authenticate a user, the <em>loginUser</em> function must find a user in the user database with the same email address as the one that was sent by the browser and attached to the request body. If a user is found with the email, the function must check whether the transmitted password matches the password stored in the database for this user. If email and password match, the user is authenticated and the login is successful; if not, the login fails. </p>



<p>Passwords are never saved in plain text. Therefore I use the <a href="https://www.npmjs.com/package/bcryptjs" title="Bcrypt-Js module for node-js">bcryptjs module</a> to hash passwords. The bcryptjs module is loaded into the code with the <code>require()</code> function and assigned to the constant <em>bcrypt</em>. We set the constant <em>saltRounds</em> to the value 10. This is the so-called cost factor of the bcrypt hashing function and controls how much time bcrypt needs to calculate a single bcrypt hash. Increasing the cost factor by 1 doubles the time, and the more time bcrypt needs to hash, the more difficult it is to brute-force stored passwords.</p>
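<p>The doubling follows from the fact that the cost factor is an exponent: bcrypt performs on the order of 2^cost internal rounds. A quick arithmetic sketch in plain JavaScript (no bcryptjs needed):</p>

```javascript
// The bcrypt cost factor is an exponent: work grows as 2^cost.
function rounds(cost) {
  return Math.pow(2, cost);
}

console.log(rounds(10));              // 1024 units of work at saltRounds = 10
console.log(rounds(11) / rounds(10)); // 2: one extra cost level doubles the work
```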



<p>Then I load the user model from <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/models/userM.js" title="userM.js file">userM.js</a> using the <code>require()</code> function and assign it to the constant <em>User</em>. At this point I have to explain the background. To do this, we also need a look at the <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/models/userM.js" title="userM.js file">userM.js</a> file. </p>



<p><strong>Note</strong>: I use MongoDB as the database and Mongoose to model the data. If you look in the file <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/models/userM.js" title="userM.js file">userM.js</a> you see that a user object is created with the function <em>new Schema()</em> and saved in the variable <em>userSchema</em>. This <em>userSchema</em> object describes a user with all its attributes. At the end of the file, the <em>mongoose.model()</em> function is used to reference the <em>userSchema</em> to the collection <em>col_users</em> in my MongoDB. This reference is assigned to the variable <em>User</em> and exported using <em>module.exports</em>. With <em>User</em> I have access to the user model, meaning to all user objects and attributes in my database that are stored in the <em>col_users</em> collection. So that I can use this access in the code of my <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/controllers/userC.js" title="userC.js file">userC.js</a> file, I load <a href="https://github.com/prottlaender/bookingsystem/blob/master/database/models/userM.js" title="userM.js file">userM.js</a> with the <code>require()</code> function and assign the result to the constant <em>User</em>. I can now use Mongoose functions, for example to query user data from my <em>col_users</em> collection. This is exactly what we do with <em>User.findOne()</em> when we try to find a user with a certain email.</p>



<p>The actual <strong>user authentication</strong> now takes place in the <em>User.findOne()</em> call. </p>



<p>When we run <em>User.findOne()</em> we check the criteria that do not lead to a successful authentication. </p>



<ol class="wp-block-list"><li><strong>No user found</strong>: We are looking for a user object that matches the email that has been submitted. If no user object is found with that email or the user found is not active, the request is redirected to the 400badRequest route. If we have found an active user, the submitted password string is hashed with bcrypt and compared with the saved password. </li><li><strong>Wrong password</strong>: If the password comparison is not successful, the submitted password was wrong and the request is also redirected to the 400badRequest route. <br></li></ol>



<p><strong>Note</strong>: <em>User.findOne()</em> has a query object <code>{email: email}</code> and a callback function <code>async function(error, user) {...}</code> as parameters. When a user with that email is found in the database, the async function receives a user object with all the user attributes in the <em>user</em> parameter. Within the scope of the async function I now have access to the user attributes using <em>user.&lt;attribute&gt;</em>.</p>
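<p>The <code>(error, user)</code> callback shape described in the note can be illustrated without a database. This hypothetical in-memory stand-in only mimics the signature; the real <em>User.findOne()</em> queries MongoDB through Mongoose.</p>

```javascript
// In-memory stand-in for User.findOne() with an (error, user) callback.
// The fake collection and user attributes are illustrative only.
const fakeCollection = [
  { email: 'jane@example.com', name: 'Jane', active: true }
];

function findOne(query, callback) {
  // A real driver would query MongoDB; here we just search the array
  const user = fakeCollection.find(u => u.email === query.email) || null;
  callback(null, user); // error is null on success
}

let found;
findOne({ email: 'jane@example.com' }, function (error, user) {
  if (error) throw error;
  // inside the callback we can read user.<attribute>
  found = user ? user.name : null;
});
console.log(found); // 'Jane'
```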



<p>The authentication is successful only if a user with that email is found and the submitted password is correct.</p>



<p>If the user is successfully authenticated, the category of the user is calculated based on the current date and the user&#8217;s birth date. Then a <em>userData</em> object is created in which various user attributes are stored. The data of the <em>userData</em> object is then attached to the session. More precisely, the object <em>data</em> is attached to the session with <em>req.session.data</em> and assigned the value <em>userData</em>. Now the session is initialized and the session object is stored in the <em>col_sessions</em> collection of the MongoDB.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="2460" height="1126" src="https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1.png" alt="Initialized session and session object stored in the col_sessions" class="wp-image-135" srcset="https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1.png 2460w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1-300x137.png 300w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1-1024x469.png 1024w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1-768x352.png 768w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1-1536x703.png 1536w, https://digitaldocblog.com/wp-content/uploads/2022/08/070-DB-session-object-1-2048x937.png 2048w" sizes="auto, (max-width: 2460px) 100vw, 2460px" /><figcaption>Initialized session and session object stored in the col_sessions</figcaption></figure>
</div>
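<p>The category calculation mentioned above could be sketched roughly as follows. The age-18 cut-off and the category names <em>junior</em>/<em>senior</em> are assumptions for illustration; the app&#8217;s actual rules may differ.</p>

```javascript
// Sketch: derive a user category from the birth date and the current
// date. Cut-off age and category names are illustrative assumptions.
function getCategory(birthdate, now = new Date()) {
  let age = now.getFullYear() - birthdate.getFullYear();
  // subtract one year if the birthday has not happened yet this year
  const beforeBirthday =
    now.getMonth() < birthdate.getMonth() ||
    (now.getMonth() === birthdate.getMonth() && now.getDate() < birthdate.getDate());
  if (beforeBirthday) age -= 1;
  return age < 18 ? 'junior' : 'senior';
}

console.log(getCategory(new Date('2010-06-01'), new Date('2025-01-01'))); // 'junior'
console.log(getCategory(new Date('1990-06-01'), new Date('2025-01-01'))); // 'senior'
```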


<p>Then the <strong>response</strong> is sent back to the browser. </p>



<p>In this response, the browser is instructed to issue a GET request to the GET endpoint <code>/dashboard</code>. The response is sent using <code>res.status(200).redirect('/dashboard')</code>. In the response header you see that the cookie with the name <em>booking</em> is set in the user&#8217;s browser using the <code>set-cookie</code> directive. The cookie only contains the session ID, which has been signed and encrypted with the <em>secret</em> we provided in <code>app.use(session( {... cookie: {...} }))</code>. </p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="1759" height="557" src="https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie.png" alt="Response Header with cookie named *booking*" class="wp-image-138" srcset="https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie.png 1759w, https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie-300x95.png 300w, https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie-1024x324.png 1024w, https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie-768x243.png 768w, https://digitaldocblog.com/wp-content/uploads/2022/08/060-POST-route-request-with-form-data-set-cookie-1536x486.png 1536w" sizes="auto, (max-width: 1759px) 100vw, 1759px" /><figcaption>Response Header with cookie named *booking*</figcaption></figure>
</div>
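<p>As a sketch, the session middleware setup behind this cookie might look like the following. The option values, the environment variable names and the connect-mongo store are assumptions for illustration; the actual configuration lives in <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a>.</p>

```javascript
// Hypothetical express-session configuration — values are illustrative
app.use(session({
  name: 'booking',                      // cookie name seen in Set-Cookie
  secret: process.env.SESSION_SECRET,   // signs the session ID in the cookie
  resave: false,
  saveUninitialized: false,
  store: MongoStore.create({ mongoUrl: process.env.MONGO_URL }), // col_sessions
  cookie: {
    httpOnly: true,                     // not readable by client-side scripts
    maxAge: 1000 * 60 * 60 * 2          // session lifetime, e.g. two hours
  }
}));
```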


<p>Then the browser sends the GET request to the endpoint <code>/dashboard</code>. Let&#8217;s have a look into the <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file again.</p>



<pre class="wp-block-code"><code>// booking.js 

...

// Redirect GET requests from not authenticated users to login
const redirectLogin = (req, res, next) =&gt; {
  if (!req.session.data) {
    res.redirect('/')

  } else {
    next()

  }
}

...

// GET dashboard route only for authenticated users. Anonym users redirected to home
app.get('/dashboard', redirectLogin, async (req, res) =&gt; {

  // Check admin authorization and render admin_dashboard
  if (req.session.data.role == 'admin') {

    const user_query = User.find( {} ).sort({lastname: 1, name: 1});
    var users = await user_query.exec();

    const training_query = Training.find( {} ).sort({date: 'desc'});
    var trainings = await training_query.exec();

    const location_query = Location.find( {} ).sort({location: 'desc'});
    var locations = await location_query.exec();

    const booking_query = Booking.find( {} ).sort({_booktrainingdate: 'desc'});
    var bookings = await booking_query.exec();

    const invoice_query = Invoice.find( {} ).sort({invoicedate: 'desc'});
    var invoices = await invoice_query.exec();

    res.status(200).render('admin_dashboard', {
      title: 'Admin Dashboard Page',
      name: req.session.data.name,
      lastname: req.session.data.lastname,
      role: req.session.data.role,
      data_users: users,
      data_trainings: trainings,
      data_locations: locations,
      data_bookings: bookings,
      data_invoices: invoices,

      });

  // Check player authorization and render player_dashboard
  } else if (req.session.data.role == 'player') {

    var currentDate = new Date();
    console.log('current date: ' +currentDate);

    const availabletraining_query = Training.find( { _status: 'active', date: { $gte: currentDate } } ).sort({ date: 'desc' });
    var availabletrainings = await availabletraining_query.exec();

    const booking_query = Booking.find( { _bookuseremail: req.session.data.email, _bookparticipation: { $ne: 'invoice' } } ).sort({ _booktrainingdate: 'desc' });
    var bookings = await booking_query.exec();

    const myuser_query = User.findOne( { email: req.session.data.email } );
    var myuser = await myuser_query.exec();

    const invoice_query = Invoice.find( {invoiceemail: req.session.data.email} ).sort({invoicedate: 'desc'});
    var invoices = await invoice_query .exec();

    res.status(200).render('player_dashboard', {
      title: 'Player Dashboard Page',
      name: req.session.data.name,
      lastname: req.session.data.lastname,
      role: req.session.data.role,
      email: req.session.data.email,
      data_availabletrainings: availabletrainings,
      data_bookings: bookings,
      data_myuser: myuser,
      data_myinvoices: invoices,
      });

  // Check coach authorization and render coach_dashboard
  } else if (req.session.data.role == 'coach') {
   
    var currentDate = new Date().setHours(0, 0, 0);
    console.log('currentDate: ' +currentDate);

    const training_query = Training.find( { _status: 'active', date: { $gte: currentDate } } ).sort({ date: 'asc' });
    var trainings = await training_query.exec();

    res.status(200).render('coach_dashboard', {
      title: 'Coach Dashboard Page',
      name: req.session.data.name,
      lastname: req.session.data.lastname,
      role: req.session.data.role,
      data_trainings: trainings,
      });

  } else {
    // if user not authorized as admin, player or coach end request and send response
    var message = 'You are not authorized. Access prohibited';
    res.status(400).redirect('/400badRequest?message='+message);
  }

});

...

</code></pre>



<p>As you can see above in the code, we have first defined the middleware function <em>redirectLogin</em>. This middleware ensures that only users who are logged in see the dashboard page. In case the <strong>if-condition is true</strong>, the user is not logged in and the request is redirected to the home route; in case the <strong>if-condition is false</strong>, the user is logged in and the <em>next()</em> function is called. </p>



<p>The GET HTTP request asks for the dashboard <em>routingPath</em>. The middleware function <em>redirectLogin</em> is put in front of the <em>routingHandler</em> function. If the user is not logged in, the <em>redirectLogin</em> middleware redirects the request to the home route. If the user is logged in, the <em>routingHandler</em> function is called with the request object and the response object as parameters.</p>



<pre class="wp-block-code"><code>app.get('/dashboard', redirectLogin, async (req, res) =&gt; {...})
</code></pre>



<p>If we look at the request header of this new GET request in the browser, we can see that the cookie is dragged along unchanged with the GET request to the endpoint <code>/dashboard</code>. From now on, this happens with every request until the session expires or the user logs out. </p>


<div class="wp-block-image">
<figure class="aligncenter"><img loading="lazy" decoding="async" width="1751" height="590" src="https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie.png" alt="Cookie dragged along with GET request " class="wp-image-139" srcset="https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie.png 1751w, https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie-300x101.png 300w, https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie-1024x345.png 1024w, https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie-768x259.png 768w, https://digitaldocblog.com/wp-content/uploads/2022/08/080-GET-route-request-dashboard-with-cookie-1536x518.png 1536w" sizes="auto, (max-width: 1751px) 100vw, 1751px" /><figcaption>Cookie dragged along with GET request </figcaption></figure>
</div>


<p>And now, within the <em>routingHandler</em> function, we do the <strong>user authorization</strong> check. The if condition checks the user&#8217;s role using <em>req.session.data.role</em>. Depending on the role of the user, different <em>&lt;role&gt;_dashboard</em> HTML templates are rendered and for each role different HTML is sent back to the user&#8217;s browser. Various queries are executed beforehand because we need role-specific data within each <em>&lt;role&gt;_dashboard</em> HTML template. The queries <code>find()</code> and <code>findOne()</code> are only executed if one of the if conditions is true; in each case the return values of the queries are stored in variables. If all if conditions are false, meaning we cannot find a role like <em>admin</em>, <em>player</em> or <em>coach</em> for the user in the database for some reason, the request is redirected to the Bad Request GET endpoint <code>/400badRequest</code>, with a message as request parameter saying that this user is not authorized.</p>



<p>Within each if condition and for each role <em>admin</em>, <em>player</em> or <em>coach</em>, we create the response object by first setting the HTTP status to the value of 200 and then using the render method to render the respective HTML template.</p>



<pre class="wp-block-code"><code>res.status(200).render('&lt;role&gt;_dashboard', {...})
</code></pre>



<p>Within the render method, we now have the option of passing a data object with different attributes to the HTML template. Later we can access this data in the respective HTML template and use it there. How this works is not part of this article. But of course you can take a closer look at the templates <a href="https://github.com/prottlaender/bookingsystem/blob/master/views/admin_dashboard.pug" title="admin_dashboard.pug file">admin dashboard</a>, <a href="https://github.com/prottlaender/bookingsystem/blob/master/views/player_dashboard.pug" title="player_dashboard.pug file">player dashboard</a> and <a href="https://github.com/prottlaender/bookingsystem/blob/master/views/coach_dashboard.pug" title="coach_dashboard.pug file">coach dashboard</a> on my <a href="https://github.com/prottlaender" title="GitHub Account of Patrick Rottlaender">GitHub repository</a> and you will immediately see how this works.</p>



<h3 class="wp-block-heading">Create Authorizations</h3>



<p>As I have already shown in the upper part, I work with middleware functions to control access to the GET and POST endpoints in my app. These middleware functions are therefore the authorizations, and you can find them in the code of my <a href="https://github.com/prottlaender/bookingsystem/blob/master/booking.js" title="booking.js file">booking.js</a> file.</p>



<pre class="wp-block-code"><code>// booking.js

...

// Authorizations
// Redirect GET requests from not authenticated users to login
const redirectLogin = (req, res, next) =&gt; {
  if (!req.session.data) {
    res.redirect('/')

  } else {
    next()

  }
}

// Redirect GET requests from authenticated users to dashboard
const redirectDashboard = (req, res, next) =&gt; {
  if (req.session.data) {
    res.redirect('/dashboard')

  } else {
    next()

  }
}

// Authorize POST requests only for not authenticated users
const verifyAnonym = (req, res, next) =&gt; {
  if (!req.session.data) {
    next()

  } else {
    var message = 'You are already logged-in. You are not authorized to perform this request !';
    res.status(400).redirect('/400badRequest?message='+message);

  }
}

// Authorize POST requests only for anonym and admin users
const verifyAnonymAndAdmin = (req, res, next) =&gt; {
  if (!req.session.data) {
    next()

  } else {

    if (req.session.data.role == 'admin') {
      next()

    } else {
      var message = 'You are no Admin. You are not authorized to perform this request !';
      res.status(400).redirect('/400badRequest?message='+message);

    }

  }
}

// Authorize POST requests only for admin and player users
const verifyAdminAndPlayer = (req, res, next) =&gt; {
  if (req.session.data) {
    if (req.session.data.role == 'admin') {
      next()

    } else if (req.session.data.role == 'player') {
      next()

    } else {
      var message = 'You are no Admin, no Player. You are not authorized to perform this request !';
      res.status(400).redirect('/400badRequest?message='+message);
    }

  } else {
    var message = 'You are not logged-in. You are not authorized to perform this request !';
    res.status(400).redirect('/400badRequest?message='+message);
  }

}

// Authorize POST requests only for admin users
const verifyAdmin = (req, res, next) =&gt; {
  if (req.session.data) {
    if (req.session.data.role == 'admin') {
      next()

    } else {
      var message = 'You are no Admin. You are not authorized to perform this request !';
      res.status(400).redirect('/400badRequest?message='+message);
    }

  } else {
    var message = 'You are not logged-in. You are not authorized to perform this request !';
    res.status(400).redirect('/400badRequest?message='+message);

  }
}

// Authorize POST requests only for player users
const verifyPlayer = (req, res, next) =&gt; {
  if (req.session.data) {
    if (req.session.data.role == 'player') {
      next()

    } else {
      var message = 'You are no Player. You are not authorized to perform this request !';
      res.status(400).redirect('/400badRequest?message='+message);
    }

  } else {
    var message = 'You are not logged-in. You are not authorized to perform this request !';
    res.status(400).redirect('/400badRequest?message='+message);

  }
}

// Authorize POST requests only for coach users
const verifyCoach = (req, res, next) =&gt; {
  if (req.session.data) {
    if (req.session.data.role == 'coach') {
      next()

    } else {
      var message = 'You are no Coach. You are not authorized to perform this request !';
      res.status(400).redirect('/400badRequest?message='+message);

    }

  } else {
    var message = 'You are not logged-in. You are not authorized to perform this request !';
    res.status(400).redirect('/400badRequest?message='+message);

  }
}

...

</code></pre>



<p>As I have already explained above, I use <strong>redirect functions</strong> as middleware to control access to the <strong>GET endpoints</strong> <em>home</em>, <em>register</em> and <em>dashboard</em>. These middleware functions basically control access based on whether a user is logged in or not. The redirect function <em>redirectDashboard</em> allows only not-logged-in users access to the <em>home</em> and <em>register</em> endpoints, while users who are already logged in have no access and are redirected directly to the <em>dashboard</em> route if they try to access them. The <em>redirectLogin</em> middleware function allows only logged-in users access to the <em>dashboard</em> route, while not-logged-in users are redirected to the login, or better, to the <em>home</em> endpoint.</p>



<p>In addition to <em>redirect functions</em> I use <strong>verify functions</strong> as middleware to control access to the <strong>POST endpoints</strong>. With POST requests, data is sent via POST endpoints to the app. That is why it is particularly important to control who is allowed to send data and who is not. I basically use five types of POST endpoints.</p>



<p><strong>Anonym POST endpoint</strong>. I only have one endpoint here. The <em>loginusers</em> endpoint can only be called by not-logged-in users. Therefore the <em>verifyAnonym</em> middleware is set before the <em>routingHandler</em> function to verify that the user is not logged in. </p>



<pre class="wp-block-code"><code>// booking.js
...

// Anonym POST endpoint
// Login user available for anonym only
app.post('/loginusers', verifyAnonym, userController.loginUser)

...

</code></pre>



<p><strong>Shared POST endpoints</strong>. The <em>createusers</em> endpoint can be called by not-logged-in users and Admin users. The <em>verifyAnonymAndAdmin</em> middleware is set before the <em>routingHandler</em> function to verify that the user is not logged in or that the logged-in user has the role <em>admin</em>. The <em>updateuseremail</em> and <em>setnewuserpassword</em> endpoints can only be called by Admin and Player users. Therefore the <em>verifyAdminAndPlayer</em> middleware is set before the <em>routingHandler</em> function to verify that the user is logged in and that the user&#8217;s role is <em>admin</em> or <em>player</em>. </p>



<pre class="wp-block-code"><code>// booking.js
...

// Shared POST endpoints
// Create Users available for anonym and admin
app.post('/createusers', verifyAnonymAndAdmin, birthdateFormatValidation, userController.createUser)

// Update User-Email available for admin and player
app.post('/updateuseremail', verifyAdminAndPlayer, userController.updateUserEmail)

// Update User-Password available for admin and player
app.post('/setnewuserpassword', verifyAdminAndPlayer, userController.setNewUserPassword)

...

</code></pre>



<p><strong>Admin POST endpoints</strong>. I have 19 endpoints here and each of them can only be called by Admin users. The <em>verifyAdmin</em> middleware is set before the <em>routingHandler</em> function to verify that the user is logged in and that the user&#8217;s role is <em>admin</em>. </p>



<pre class="wp-block-code"><code>// booking.js
...

// Admin POST endpoints
// Admin User Management
app.post('/callupdateusers', verifyAdmin, userController.callUpdateUsers)

app.post('/updateuser', verifyAdmin, birthdateFormatValidation, userController.updateUser)

app.post('/terminateusers', verifyAdmin, userController.terminateUser)

app.post('/activateusers', verifyAdmin, userController.activateUser)

app.post('/removeusers', verifyAdmin, userController.removeUser)

// Admin Update Training
app.post('/callupdatetrainings', verifyAdmin, trainingController.callUpdateTrainings)

app.post('/updatetraining', verifyAdmin, trainingController.updateTraining)

// Admin Location Management
app.post('/createlocations', verifyAdmin, locationController.createLocation)

app.post('/callupdatelocations', verifyAdmin, locationController.callUpdateLocations)

app.post('/updatelocation', verifyAdmin, locationController.updateLocation)

app.post('/callcreatetrainings', verifyAdmin, trainingController.callCreateTrainings)

app.post('/createtraining', verifyAdmin, trainingController.createTraining)

// Admin Invoice Management
app.post('/createinvoice', verifyAdmin, invoiceController.createInvoiceUser)

app.post('/callcancelinvoice', verifyAdmin, invoiceController.callCancelInvoice)

app.post('/cancelinvoice', verifyAdmin, invoiceController.cancelInvoice)

app.post('/callpayinvoice', verifyAdmin, invoiceController.callPayInvoice)

app.post('/payinvoice', verifyAdmin, invoiceController.payInvoice)

app.post('/callrepayinvoice', verifyAdmin, invoiceController.callRePayInvoice)

app.post('/repayinvoice', verifyAdmin, invoiceController.rePayInvoice)

...

</code></pre>



<p><strong>Player POST endpoints</strong>. I have 7 endpoints here and each of them can only be called by Player users. The <em>verifyPlayer</em> middleware is set before the <em>routingHandler</em> function to verify that the user is logged in and that the user&#8217;s role is <em>player</em>. </p>



<pre class="wp-block-code"><code>// booking.js
...

// Player POST endpoints
// Player Booking Management
app.post('/callbooktrainings', verifyPlayer, bookingController.callBookTrainings)

app.post('/booktrainings', verifyPlayer, bookingController.bookTraining)

app.post('/bookingreactivate', verifyPlayer, bookingController.bookingReactivate)

app.post('/callcancelbookings', verifyPlayer, bookingController.callCancelBooking)

app.post('/cancelbookings', verifyPlayer, bookingController.cancelBooking)

// Player User Management
app.post('/callupdatemyuserdata', verifyPlayer, userController.callUpdateMyUserData)

app.post('/updatemyuserdata', verifyPlayer, birthdateFormatValidation, userController.updateMyUserData)

...

</code></pre>



<p><strong>Coach POST endpoints</strong>. I have 2 endpoints here and each of them can only be called by Coach users. The <em>verifyCoach</em> middleware is set before the <em>routingHandler</em> function to verify that the user is logged in and that the user&#8217;s role is <em>coach</em>. </p>



<pre class="wp-block-code"><code>// booking.js
...

// Coach POST endpoints
// Coach Confirmation Management
app.post('/callparticipants', verifyCoach, bookingController.callParticipants)

app.post('/callconfirmpatricipants', verifyCoach, bookingController.callConfirmPatricipants)

...

</code></pre>



<h3 class="wp-block-heading">User logout</h3>



<p>The user initiates a <strong>logout</strong> himself by clicking the logout link in the navigation of the application. This sends a GET request to the <code>/logout</code> endpoint of the application. In this routing definition, the session is first deleted from the database using <code>req.session.destroy()</code>, then the cookie is removed from the browser and the user is redirected to the <em>200success</em> site using <code>res.status(200).clearCookie('booking').redirect()</code>.</p>



<pre class="wp-block-code"><code>// booking.js

...

// GET logout route only for authenticated users. Anonym users redirected to home
app.get('/logout', redirectLogin, (req, res) =&gt; {
  req.session.destroy(function(err) {
    if (err) {
      res.send('An error occurred: ' +err.message);
    } else {
      var message = 'You have been successfully logged out';
      res.status(200).clearCookie('booking').redirect('/200success?message='+message)
    }
  });
})

...

</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Setup Ubuntu Linux ready for Node Apps</title>
		<link>https://digitaldocblog.com/web-development/setup-ubuntu-linux-ready-for-node-apps/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 26 Feb 2021 10:47:00 +0000</pubDate>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[Server]]></category>
		<category><![CDATA[Web-Development]]></category>
		<category><![CDATA[Express.js]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[Mongoose]]></category>
		<category><![CDATA[NginX]]></category>
		<category><![CDATA[Node.js]]></category>
		<category><![CDATA[NPM Node package manager]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=122</guid>

					<description><![CDATA[We have a new server in front of us and are now starting from the beginning with the preparation of the system to operate nodejs and node express applications. SSH&#8230;]]></description>
										<content:encoded><![CDATA[
<p>We have a new server in front of us and are now starting from the beginning with the preparation of the system to operate Node.js and Node Express applications. </p>



<h2 class="wp-block-heading">SSH public key authentication for the root user</h2>



<p>First we create the ssh keys on the local machine. </p>



<pre class="wp-block-code"><code>PatrickMBNeu:~ patrick$ ssh-keygen -t rsa -b 4096 -C "your_email@domain.com"
Enter file in which to save the key (/home/patrick/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
PatrickMBNeu:~ patrick$ ls -l .ssh/id_*
-rw-------@ 1 patrick  staff  3434  4 Feb 14:58 .ssh/id_rsa
-rw-r--r--  1 patrick  staff   750  4 Feb 14:58 .ssh/id_rsa.pub
PatrickMBNeu:~ patrick$ 
</code></pre>



<p>We log in to the server as the <code>root</code> user via SSH and, in the root user&#8217;s home directory, create the <code>.ssh</code> directory and the <code>authorized_keys</code> file. Then we log out from the server and install the public key from the local machine on the server. </p>



<pre class="wp-block-code"><code>PatrickMBNeu:~ patrick$ ssh root@85.134.111.90
root@85.134.111.90's password: 
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
Last login: Fri Feb 19 07:10:43 2021 from 112.11.237.18
root@h2866085:~# ls -al
total 28
drwx------  3 root root 4096 Feb 19 07:10 .
drwxr-xr-x 23 root root 4096 Feb 19 06:41 ..
-rw-------  1 root root   12 Feb 19 07:10 .bash_history
-rw-r--r--  1 root root 3106 Aug 14  2019 .bashrc
drwx------  2 root root 4096 Feb 19 07:09 .cache
-rw-r--r--  1 root root  148 Aug 13  2020 .profile
-rw-r--r--  1 root root   20 Feb 19 06:37 .screenrc
root@h2866085:~# mkdir .ssh
root@h2866085:~# ls -al
total 32
drwx------  4 root root 4096 Feb 19 07:20 .
drwxr-xr-x 23 root root 4096 Feb 19 06:41 ..
-rw-------  1 root root   12 Feb 19 07:10 .bash_history
-rw-r--r--  1 root root 3106 Aug 14  2019 .bashrc
drwx------  2 root root 4096 Feb 19 07:09 .cache
-rw-r--r--  1 root root  148 Aug 13  2020 .profile
-rw-r--r--  1 root root   20 Feb 19 06:37 .screenrc
drwxr-xr-x  2 root root 4096 Feb 19 07:20 .ssh
root@h2866085:~# chmod 700 .ssh
root@h2866085:~# ls -al
total 32
drwx------  4 root root 4096 Feb 19 07:20 .
drwxr-xr-x 23 root root 4096 Feb 19 06:41 ..
-rw-------  1 root root   12 Feb 19 07:10 .bash_history
-rw-r--r--  1 root root 3106 Aug 14  2019 .bashrc
drwx------  2 root root 4096 Feb 19 07:09 .cache
-rw-r--r--  1 root root  148 Aug 13  2020 .profile
-rw-r--r--  1 root root   20 Feb 19 06:37 .screenrc
drwx------  2 root root 4096 Feb 19 07:20 .ssh
root@h2866085:~# cd .ssh
root@h2866085:~/.ssh# touch authorized_keys
root@h2866085:~/.ssh# ls -al
total 8
drwx------ 2 root root 4096 Feb 19 07:21 .
drwx------ 4 root root 4096 Feb 19 07:20 ..
-rw-r--r-- 1 root root    0 Feb 19 07:21 authorized_keys
root@h2866085:~/.ssh# chmod 600 authorized_keys
root@h2866085:~/.ssh# ls -al
total 8
drwx------ 2 root root 4096 Feb 19 07:21 .
drwx------ 4 root root 4096 Feb 19 07:20 ..
-rw------- 1 root root    0 Feb 19 07:21 authorized_keys
root@h2866085:~/.ssh# exit
Logged out
Connection to 85.214.161.41 closed.

PatrickMBNeu:~ patrick$
PatrickMBNeu:~ patrick$ cat ~/.ssh/id_rsa.pub | ssh root@85.214.161.41 "cat &gt;&gt; ~/.ssh/authorized_keys"
PatrickMBNeu:~ patrick$ ssh root@85.134.111.90
root@85.214.161.41's password: 
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
Last login: Fri Feb 19 07:18:57 2021 from 185.17.207.18

root@h2866085:~# cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfV32OLcC90/0CE0nDsSRZwO5XtyRRBWkgKhkd+wlIad09Vi6URGO3XkUhac2bKWe6t9DP3GWSk23ruMj8M6UV9W1Fb7ZfFW3SXhz5+pRB1v0Uy5PDdxLH1foSz0hpbubCQ0AbEWWRNfMqKC6l2tFWrOfl5AXlbmZsHTH1Th9FSoBhqs8ZH33Oovs+lchzbpmObjUNzr0Y/ZWaNjNlAxvFtt8fMHxqEz3tw7ASub2eaVcGSiNioV3GKwlzbho62AF6b+KGbQkH92P5j4+KnDQpY92Ejd55c4kfq7DcG0pXLC2e77Ci/XnpROzllcOlSjmR5fsAIuWMw7dQyePCar2seVx7WBo0/Z/jnvF0exDJprtxPLlCbFRwj1nVMlKpUsqbE8mZs0L5k7Zh2GLkGJQekYR1X7zDthJHPMLeoepKw20onuCoTkquirwYhy4xCndjZ3VYk0033Rgu13ETrCB+eXc7UrbyyJJyTTs77BQZ/deTLZcXARYU96wQoQGzlevYjyWNhn6WEjkoBc2dcIHzV0Fp3enhLhptG6imHGsvAm+1uNkXbg46hYL4WZdJxkGXOoRo+oT/deRNvzMjDgL2SMUOgSzj7U+Krw0bUCY2LpkWp0lNAuT+YsF2O/k/TEVFBfKthrJd9f/PynTR+IFiRHK7jayhBQXIWSsqI9AlJw== p.rottlaender@icloud.com
root@h2866085:~# 
</code></pre>



<h2 class="wp-block-heading">Ubuntu Version check</h2>



<p>Another action we take is to check which OS version and kernel version we are dealing with. This provides important information for when we install software later; often we have to download specific software releases for the Linux version in use. Type any of the following commands to find the OS name and OS version running on your Linux server.</p>



<pre class="wp-block-code"><code>root@h2866085:~$ cat /etc/os-release

NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

root@h2866085:~$ lsb_release -a

No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:    18.04
Codename:    bionic

root@h2866085:~$
</code></pre>



<p>To print the Linux kernel version running on your server type the following command. I am running Linux kernel version 4.15.</p>



<pre class="wp-block-code"><code>root@h2866085:~$ uname -r
4.15.0
root@h2866085:~$
</code></pre>



<h2 class="wp-block-heading">Create a user</h2>



<p>Of course, we don&#8217;t want to log in to the system with the root user all the time. It is therefore advisable to create another user with whom we can then log in and work.</p>



<p>A user can be created in Ubuntu Linux with the command <code>useradd</code> or <code>adduser</code>. I use the <code>adduser</code> command because this Perl script, in addition to creating the user in <code>/etc/passwd</code> and a dedicated user group in <code>/etc/group</code>, also creates the user&#8217;s home directory in <code>/home</code> and copies default files from <code>/etc/skel</code> if necessary.</p>



<p>As root run the command <code>adduser mynewuser</code>.</p>



<pre class="wp-block-code"><code>root@h2866085:~$ adduser mynewuser

Benutzer »mynewuser« wird hinzugefügt …
Neue Gruppe »mynewuser« (1001) wird hinzugefügt …
Neuer Benutzer »mynewuser« (1001) mit Gruppe »mynewuser« wird hinzugefügt …
Persönliche Ordner »/home/mynewuser« wird erstellt …
Dateien werden von »/etc/skel« kopiert …
Geben Sie ein neues UNIX-Passwort ein: 
Geben Sie das neue UNIX-Passwort erneut ein: 
passwd: password updated successfully
Changing the user information for mynewuser
Enter the new value, or press ENTER for the default
    Full Name &#91;]: Tech User
    Room Number &#91;]: 
    Work Phone &#91;]: 
    Home Phone &#91;]: 
    Other &#91;]: This user is only for tech 
Ist diese Information richtig? &#91;J/N] J

root@h2866085:~$
</code></pre>



<p>This creates the user <code>mynewuser</code>. In <code>/etc/passwd</code> you can see the new entry with its user ID and group ID. In <code>/etc/group</code> you find the new group <code>mynewuser</code>, and under <code>/home</code> you find the new home directory <code>/home/mynewuser</code>. </p>



<pre class="wp-block-code"><code>root@h2866085:~$ grep mynewuser /etc/passwd
mynewuser:x:1000:1000::/home/mynewuser:/bin/bash
root@h2866085:~$ cat /etc/group
....
....
mynewuser:x:1000:
...
root@h2866085:~$ ls -l /home
drwxr-xr-x 2 mynewuser mynewuser 4096 Jan 23 05:31 mynewuser
root@h2866085:~$
</code></pre>
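

<p>As a quick cross-check of the values written to <code>/etc/passwd</code> and <code>/etc/group</code>, the <code>id</code> command prints the UID, primary GID and supplementary groups in one line. A minimal sketch (run <code>id mynewuser</code> for the account created above; without an argument it reports the current user):</p>



<pre class="wp-block-code"><code># id &lt;user&gt; prints the UID, primary GID and supplementary groups,
# e.g. id mynewuser after the adduser run above.
# Without an argument it reports the current user:
id
</code></pre>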



<p>The last field of the <code>/etc/passwd</code> entry specifies the user&#8217;s shell; here the newly created user gets the Bash shell. In general, a Unix shell is a command processor running in a command line window: the user types commands, and the shell executes them. For more details you can read the wiki article about <a href="https://en.wikipedia.org/wiki/Unix_shell">Unix Shells</a>.<br></p>



<p>Because several different shells are available, you may need to change a user&#8217;s shell, which effectively changes the set of commands available on your command line. To change the shell of a user, type <code>usermod --shell &lt;/path/toShell&gt; &lt;mynewuser&gt;</code>. To see which shells you can use, please check the <code>/etc/shells</code> file. </p>



<pre class="wp-block-code"><code>root@h2866085:~$ cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/bin/dash
/bin/bash
/bin/rbash
/usr/bin/screen
/bin/tcsh
/usr/bin/tcsh
root@h2866085:~$ usermod --shell /bin/sh mynewuser
root@h2866085:~$ grep mynewuser /etc/passwd
mynewuser:x:1001:1001:Tech User,,,,This user is only for tech:/home/mynewuser:/bin/sh
</code></pre>



<h2 class="wp-block-heading">Sudo configuration</h2>



<p>The newly created user can create and access files in its own home directory. However, it cannot copy files into directories owned by root. To demonstrate this, I log in as the newly created user, create a file in the home directory, and then try to copy it into a directory owned by root. The error message says that I am not authorized: to perform this copy you must be root. </p>



<p>A regular user can still execute individual commands as root; that is what sudo is for. But when I try to run the command with sudo, I get the message that the user is not in the sudoers file.</p>



<pre class="wp-block-code"><code>Patricks-MacBook-Pro:~$ ssh mynewuser@85.214.161.41
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
Last login: Fri Feb  5 08:57:52 2021 from 87.161.105.46
mynewuser@h2866085:~$  touch testfile
mynewuser@h2866085:~$ ls -l
insgesamt 4
-rw-rw-r-- 1 mynewuser mynewuser 24 Jan 23 06:20 testfile
mynewuser@h2866085:~$ cp testfile /etc/nginx/sites-available/testfile
cp: reguläre Datei '/etc/nginx/sites-available/testfile' kann nicht angelegt werden: Keine Berechtigung
mynewuser@h2866085:~$ sudo cp testfile /etc/nginx/sites-available/testfile
&#91;sudo] Passwort für mynewuser: 
mynewuser ist nicht in der sudoers-Datei. Dieser Vorfall wird gemeldet.
mynewuser@h2866085:~$
</code></pre>



<p>Sudo allows the <code>root</code> user of the system to give other users (or groups) the ability to run some or all commands as <code>root</code>. The sudo configuration is kept in the <code>/etc/sudoers</code> file. </p>



<p>In my sudo configuration file it is already preconfigured that all users in the sudo group may execute all commands as <code>root</code>. Note the corresponding entry <code>%sudo ALL=(ALL:ALL) ALL</code> in the <code>/etc/sudoers</code> file below. </p>



<p>I want the user <code>mynewuser</code> to be able to execute all commands as <code>root</code>, so <code>mynewuser</code> must be added to the <code>sudo</code> group. </p>



<p>To make <code>mynewuser</code> a member of the <code>sudo</code> group I use the <code>usermod</code> command. First I switch to <code>root</code> using the <code>su root</code> command, then I run <code>usermod</code> with the <code>-a</code> and <code>-G</code> options, which append the user to the given supplementary group. </p>



<p>Finally I check with <code>cat /etc/group</code> that the sudo group contains <code>mynewuser</code>, and log off from the root account using <code>exit</code>.</p>



<pre class="wp-block-code"><code>mynewuser@h2866085:~$ su root
Passwort: 
root@h2866085:/home/mynewuser# 
root@h2866085:/home/mynewuser# cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults    env_reset
Defaults    mail_badpass
Defaults    secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo    ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d

root@h2866085:/home/mynewuser# usermod -a -G sudo mynewuser
root@h2866085:/home/patrick# cat /etc/group
root:x:0:
daemon:x:1:
bin:x:2:
...
sudo:x:27:mynewuser
....
root@h2866085:/home/patrick# exit
mynewuser@h2866085:~$
</code></pre>
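

<p>One detail worth knowing: the new group membership only takes effect for new login sessions. A short sketch of how to verify it, assuming the <code>mynewuser</code> account from above:</p>



<pre class="wp-block-code"><code># Group changes apply to new login sessions only, so log out and
# back in (or start a fresh login shell with: su - mynewuser).
# groups then lists the memberships of the current session:
groups

# sudo -l additionally shows which commands sudo would allow
</code></pre>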



<h2 class="wp-block-heading">Advanced Package Tool on Ubuntu Linux</h2>



<p>Before we start with software installations, I need to give you some background on Ubuntu and Debian package management. </p>



<p>Whenever you want to install software on your Ubuntu system, you can use the Advanced Package Tool, or APT for short. Users interact with APT via the <code>apt</code> command. APT downloads the software package from a package source and then installs it. </p>



<p>A software package can only be installed if its package source is known to APT. Package sources are therefore listed in the file <code>/etc/apt/sources.list</code> or in further files with the extension <code>.list</code> in the directory <code>/etc/apt/sources.list.d</code>. </p>



<p>When you call <code>apt install &lt;package-name&gt;</code> on your console, APT first checks the package sources listed in your <code>/etc/apt/sources.list</code> file. If the package is found in one of these sources, APT installs the software from there. </p>



<p>If the software is not available from any package source listed in <code>/etc/apt/sources.list</code>, APT checks the package sources listed in the <code>.list</code> files in your <code>/etc/apt/sources.list.d</code> directory. </p>



<p>If APT does not find the package in any package source, it cannot be installed until the package source is entered either in <code>/etc/apt/sources.list</code> or in a separate <code>.list</code> file in the <code>/etc/apt/sources.list.d</code> directory.</p>
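

<p>To see which configured package source would actually supply a given package, and at which version, you can query APT&#8217;s cache. A sketch (nginx is just an example package name):</p>



<pre class="wp-block-code"><code># After refreshing the package lists with: sudo apt-get update
# show the installed and candidate versions of a package and
# which configured sources offer it at which priority:
apt-cache policy nginx
</code></pre>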



<p>The entries in <code>.list</code> files are structured identically, regardless of whether they are in <code>/etc/apt/sources.list</code> or <code>/etc/apt/sources.list.d/*.list</code>. Each entry consists of four fields. </p>



<pre class="wp-block-code"><code>Example:
deb    ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic     main restricted universe
&lt;type&gt; &lt;URI&gt;                                       &lt;archive&gt;  &lt;component&gt;
</code></pre>



<p>The package source listed above is an ftp server containing binary packages for the Ubuntu Bionic distribution (bionic). The components cover packages that meet the Ubuntu licensing requirements and are supported by the Ubuntu team (main), packages that the Ubuntu developers support because of their importance but which are not under Ubuntu licensing (restricted), and a wide range of free software, without licensing restrictions, that is not officially supported by Ubuntu (universe). </p>



<p>Universe software is maintained by the <a href="https://wiki.ubuntu.com/MOTU">Masters of the Universe</a> (MOTU Developers). If a MOTU Developer wants a package to be included in the Ubuntu package sources, the Developer must propose the package for the Universe. MOTUs also maintain Multiverse software; the difference is that Multiverse software is not free software and is subject to licensing restrictions. More details can be found in the <a href="https://ubuntu.com/server/docs/package-management">Ubuntu Package Management Documentation</a> on the Ubuntu website.<br></p>



<p>The <code>.list</code> file can be created manually with an editor such as <code>nano</code>, or with a command like the following, which creates a mongoDB source file.</p>



<pre class="wp-block-code"><code>echo "deb &#91; arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
</code></pre>



<p>To verify a package source&#8217;s authenticity, the GPG key provided by the vendor must also be downloaded and added to the Ubuntu APT keyring using the <code>apt-key add</code> command. Here is the example for adding the key that verifies the mongoDB source. </p>



<pre class="wp-block-code"><code>wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
</code></pre>



<p>With <code>wget</code> I download the public key from the mongodb.org server and pipe it into the <code>apt-key add</code> command, which finally adds the key to the keyring.</p>
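

<p>Note that <code>apt-key</code> is deprecated in newer Debian and Ubuntu releases. A sketch of the newer alternative, which stores the key in its own keyring file and references it from the source entry via <code>signed-by</code> (the mongoDB URLs are the ones from the example above; the keyring file path is my choice):</p>



<pre class="wp-block-code"><code># Store the dearmored vendor key in a dedicated keyring file
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | gpg --dearmor | sudo tee /usr/share/keyrings/mongodb-org-4.2.gpg

# Reference that keyring explicitly in the source entry
echo "deb &#91; signed-by=/usr/share/keyrings/mongodb-org-4.2.gpg arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
</code></pre>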



<p>Alternatively you can use the <code>add-apt-repository</code> utility to automate package source entries in your <code>/etc/apt/sources.list</code> file. Since Debian 8, <code>add-apt-repository</code> has been part of the Debian package <code>software-properties-common</code>, which must be installed if you want to use this utility for your package source management.</p>



<pre class="wp-block-code"><code>apt-get update
apt-get install software-properties-common
</code></pre>



<p>Using <code>add-apt-repository</code>, the package source is appended to the <code>/etc/apt/sources.list</code> file; no separate file is created in the <code>/etc/apt/sources.list.d</code> directory. Here is an example of adding the mongoDB package source. </p>



<pre class="wp-block-code"><code>sudo add-apt-repository 'deb &#91;arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse'
</code></pre>



<p>The utility <code>add-apt-repository</code> can also be used to install software from Personal Package Archives (PPA). PPAs are a special service for MOTU Developers (<a href="https://wiki.ubuntu.com/MOTU">Masters of the Universe</a>) to provide Ubuntu packages that are built and published with <a href="https://launchpad.net">Launchpad</a>. When you use <code>add-apt-repository</code> to add a PPA, a new <code>.list</code> file is created in your <code>/etc/apt/sources.list.d</code> directory. </p>



<pre class="wp-block-code"><code>add-apt-repository ppa:user/ppa-name
</code></pre>



<p>When you use <code>add-apt-repository</code> the PPA&#8217;s key is automatically fetched and added to the keyring. You can read more about PPA packaging on the <a href="https://help.launchpad.net/Packaging/PPA">launchpad help page</a>.</p>



<p>You should not use PPA software in production environments, because PPA packages are often not maintained regularly by the MOTUs on Launchpad. In general, MOTUs publish their first versions on Launchpad and then integrate them into the main Ubuntu Universe or Multiverse sources. So read the documentation and, if possible, install your production packages from official sources, as I do in the following example when installing nodejs and npm. </p>



<h2 class="wp-block-heading">Nodejs and NPM Installation</h2>



<p>To run a node application you definitely need the node platform nodejs and the node package manager npm. </p>



<p>Go to the <a href="https://nodejs.org/en/download/">nodejs.org download page</a> to find out the latest version of node to install. I recommend installing the latest LTS version (<a href="https://nodejs.org/en/about/releases/">Long Term Support</a>), which at the time of writing is node version 14.15.5, including npm version 6.14.11. </p>



<p>When you scroll down on the nodejs.org download page you see the install options. I prefer to install <a href="https://nodejs.org/en/download/package-manager/">nodejs via package manager</a>, so I select Debian and Ubuntu based Linux distributions. The nodejs binaries are available in the <a href="https://github.com/nodesource/distributions/blob/master/README.md">GitHub nodesource repository</a>, and I follow the <a href="https://github.com/nodesource/distributions/blob/master/README.md#debmanual">manual installation</a> for Debian and Ubuntu based distributions (deb).<br></p>



<p>Since I had not installed nodejs via a PPA before, I can skip the first step described in the documentation. So I start by importing the package signing key, which ensures that apt can verify the new node source.</p>



<pre class="wp-block-code"><code>$ wget --quiet -O - https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
</code></pre>



<p>The key ID at the time of writing this document is <code>1655A0AB68576280</code>.</p>



<p>You can check the imported key using the apt-key command.</p>



<pre class="wp-block-code"><code>$ sudo apt-key fingerprint 1655A0AB68576280
</code></pre>



<p>The output should be:</p>



<pre class="wp-block-code"><code>pub   rsa4096 2014-06-13 &#91;SC]
      9FD3 B784 BC1C 6FC3 1A8A  0A1C 1655 A0AB 6857 6280
uid        &#91; unbekannt] NodeSource &lt;gpg@nodesource.com&gt;
sub   rsa4096 2014-06-13 &#91;E]
</code></pre>



<p>Then I create the <code>nodesource.list</code> file in my <code>/etc/apt/sources.list.d</code> directory to make the nodejs package source known to apt. I put two entries in that file: one for the nodejs Debian binaries (deb) and one for the nodejs sources (deb-src).</p>



<pre class="wp-block-code"><code># Replace $VERSION with Node.js Version you want to install: i.e. node_14.x
# $VERSION=node_14.x
# Replace $DISTRO with the output of the command lsb_release -s -c
# $DISTRO=bionic
$ echo "deb https://deb.nodesource.com/$VERSION $DISTRO main" | sudo tee /etc/apt/sources.list.d/nodesource.list
$ echo "deb-src https://deb.nodesource.com/$VERSION $DISTRO main" | sudo tee -a /etc/apt/sources.list.d/nodesource.list
</code></pre>



<p>Then I update the package lists and install.</p>



<pre class="wp-block-code"><code>$ sudo apt-get update
$ sudo apt-get install nodejs
</code></pre>



<p>Here is my manual installation.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/apt$ ls -l sources.list.d
insgesamt 0
patrick@h2866085:/etc/apt$ apt-key list
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg
------------------------------------------------------
pub   rsa4096 2012-05-11 &#91;SC]
      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32
uid        &#91; unbekannt] Ubuntu Archive Automatic Signing Key (2012) &lt;ftpmaster@ubuntu.com&gt;

/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg
------------------------------------------------------
pub   rsa4096 2012-05-11 &#91;SC]
      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
uid        &#91; unbekannt] Ubuntu CD Image Automatic Signing Key (2012) &lt;cdimage@ubuntu.com&gt;

/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg
------------------------------------------------------
pub   rsa4096 2018-09-17 &#91;SC]
      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C
uid        &#91; unbekannt] Ubuntu Archive Automatic Signing Key (2018) &lt;ftpmaster@ubuntu.com&gt;

patrick@h2866085:~$ sudo wget --quiet -O - https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -
&#91;sudo] Passwort für patrick: 
OK
patrick@h2866085:~$ apt-key list
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2014-06-13 &#91;SC]
      9FD3 B784 BC1C 6FC3 1A8A  0A1C 1655 A0AB 6857 6280
uid        &#91; unbekannt] NodeSource &lt;gpg@nodesource.com&gt;
sub   rsa4096 2014-06-13 &#91;E]

/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg
------------------------------------------------------
pub   rsa4096 2012-05-11 &#91;SC]
      790B C727 7767 219C 42C8  6F93 3B4F E6AC C0B2 1F32
uid        &#91; unbekannt] Ubuntu Archive Automatic Signing Key (2012) &lt;ftpmaster@ubuntu.com&gt;

/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg
------------------------------------------------------
pub   rsa4096 2012-05-11 &#91;SC]
      8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
uid        &#91; unbekannt] Ubuntu CD Image Automatic Signing Key (2012) &lt;cdimage@ubuntu.com&gt;

/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg
------------------------------------------------------
pub   rsa4096 2018-09-17 &#91;SC]
      F6EC B376 2474 EDA9 D21B  7022 8719 20D1 991B C93C
uid        &#91; unbekannt] Ubuntu Archive Automatic Signing Key (2018) &lt;ftpmaster@ubuntu.com&gt;

patrick@h2866085:~$ sudo apt-key fingerprint 1655A0AB68576280
pub   rsa4096 2014-06-13 &#91;SC]
      9FD3 B784 BC1C 6FC3 1A8A  0A1C 1655 A0AB 6857 6280
uid        &#91; unbekannt] NodeSource &lt;gpg@nodesource.com&gt;
sub   rsa4096 2014-06-13 &#91;E]

patrick@h2866085:~$ lsb_release -s -c
bionic
patrick@h2866085:~$ sudo echo "deb https://deb.nodesource.com/node_14.x bionic main" | sudo tee /etc/apt/sources.list.d/nodesource.list
deb https://deb.nodesource.com/node_14.x bionic main
patrick@h2866085:~$ sudo echo "deb-src https://deb.nodesource.com/node_14.x bionic main" | sudo tee -a /etc/apt/sources.list.d/nodesource.list
deb-src https://deb.nodesource.com/node_14.x bionic main
patrick@h2866085:~$ ls -l /etc/apt/sources.list.d
insgesamt 4
-rw-r--r-- 1 root root 110 Feb 20 08:14 nodesource.list
patrick@h2866085:~$ sudo cat /etc/apt/sources.list.d/nodesource.list
deb https://deb.nodesource.com/node_14.x bionic main
deb-src https://deb.nodesource.com/node_14.x bionic main
patrick@h2866085:~$ sudo apt-get update
OK:1 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic InRelease
Holen:2 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates InRelease &#91;88,7 kB]
Holen:3 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-security InRelease &#91;88,7 kB]
Holen:4 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic/main Translation-de &#91;454 kB]
Holen:5 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic/restricted Translation-de &#91;2.268 B]
Holen:6 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic/universe Translation-de &#91;2.272 kB]                       
Holen:7 https://deb.nodesource.com/node_14.x bionic InRelease &#91;4.584 B]                                                  
Holen:8 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates/universe Sources &#91;446 kB]
Holen:9 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates/main amd64 Packages &#91;1.885 kB]
Holen:10 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates/universe amd64 Packages &#91;1.718 kB]
Holen:11 https://deb.nodesource.com/node_14.x bionic/main amd64 Packages &#91;764 B]
Holen:12 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates/universe Translation-en &#91;363 kB]
Holen:13 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-security/universe Sources &#91;277 kB]
Holen:14 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-security/universe amd64 Packages &#91;1.109 kB]
Holen:15 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-security/universe Translation-en &#91;248 kB]
Es wurden 8.957 kB in 3 s geholt (3.459 kB/s).                    
Paketlisten werden gelesen... Fertig
patrick@h2866085:~$ sudo apt-get install nodejs
patrick@h2866085:~$ node -v
v14.15.5
patrick@h2866085:~$ npm -v
6.14.11
patrick@h2866085:~$ 
</code></pre>



<p>Basically we installed npm bundled together with nodejs using the <code>apt</code> package manager. But npm is itself a package manager, so we now have two package managers on our system: <code>apt</code> and <code>npm</code>. </p>



<p>We need npm as an additional source of software available on <a href="https://www.npmjs.com/">npmjs.com</a>. We will, for example, install PM2 via npm (see next chapter), but in particular we need npm to add local software packages, or dependencies, to our node application projects. When we develop a node app based on the expressjs framework, we will install express locally in the project directory; that will be part of a separate tutorial. </p>
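

<p>As a minimal sketch of that local workflow (the directory name <code>myapp</code> is just an example):</p>



<pre class="wp-block-code"><code># Create a project directory with a default package.json
mkdir myapp
cd myapp
npm init -y

# Install express locally; it lands in ./node_modules and is
# recorded as a dependency in package.json
npm install express
</code></pre>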



<p>To ask npm for outdated packages, run the <code>npm outdated</code> command (with the <code>-g</code> option to show only global packages). </p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ npm outdated -g --depth=0
Package  Current   Wanted  Latest  Location
npm      6.14.11  6.14.11   7.5.4  global
</code></pre>



<p>As you can see, from npm&#8217;s point of view the latest npm version is 7.5.4. But npm knows that we installed npm bundled with nodejs via <code>apt</code>, and the version wanted by our package source is 6.14.11 (see above). This is a bit confusing, but basically <code>npm outdated</code> tells us that a newer npm version exists (latest), while our system is up to date, because the current version equals the one defined by our package source (wanted). </p>



<p>We can only update npm together with a nodejs update, which is initiated via <code>apt</code>. And here you see that all packages are up to date.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo apt update
OK:1 https://deb.nodesource.com/node_14.x bionic InRelease
OK:2 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic InRelease
OK:3 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-updates InRelease
OK:4 ftp://ftp.stratoserver.net/pub/linux/ubuntu bionic-security InRelease
Paketlisten werden gelesen... Fertig
Abhängigkeitsbaum wird aufgebaut.       
Statusinformationen werden eingelesen.... Fertig
Alle Pakete sind aktuell.
patrick@h2866085:~$  
</code></pre>



<h2 class="wp-block-heading">PM2 Installation with NPM</h2>



<p>Go to <a href="https://www.npmjs.com/">npmjs.com</a> and search for PM2 Process Manager. You find the <a href="https://www.npmjs.com/package/pm2">PM2 package</a> on npmjs.com. Follow the install instructions.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo npm install pm2 -g
patrick@h2866085:~$ pm2 -v
                        -------------

__/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
 _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
  _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
   _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
    _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
     _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
      _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
       _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
        _\///______________\///______________\///__\///////////////__

                          Runtime Edition

        PM2 is a Production Process Manager for Node.js applications
                     with a built-in Load Balancer.

                Start and Daemonize any application:
                $ pm2 start app.js

                Load Balance 4 instances of api.js:
                $ pm2 start api.js -i 4

                Monitor in production:
                $ pm2 monitor

                Make pm2 auto-boot at server restart:
                $ pm2 startup

                To go further checkout:
                http:&#47;&#47;pm2.io/
                        -------------
&#91;PM2] Spawning PM2 daemon with pm2_home=/home/patrick/.pm2
&#91;PM2] PM2 Successfully daemonized
4.5.4
patrick@h2866085:~$ 
</code></pre>
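

<p>A short sketch of day-to-day PM2 usage, assuming a node application entry point <code>app.js</code> (the file and process names are examples):</p>



<pre class="wp-block-code"><code># Start and daemonize the application under a readable name
pm2 start app.js --name my-node-app

# List managed processes and follow the application logs
pm2 list
pm2 logs my-node-app

# Persist the process list and generate a boot-time startup script
pm2 save
pm2 startup
</code></pre>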



<h2 class="wp-block-heading">Nano Installation with apt</h2>



<p>Nano is part of the Ubuntu 18.04 bionic main packages that are provided via the standard ftp package sources of my provider at stratoserver.net. These standard package sources are listed in my <code>/etc/apt/sources.list</code> file, so nano can be installed with <code>apt</code>. </p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo apt install nano
patrick@h2866085:~$
</code></pre>



<h2 class="wp-block-heading">Nginx Installation with apt</h2>



<p>Nginx is also available from the standard Ubuntu main sources: it is part of the Ubuntu 18.04 bionic main packages provided via the standard ftp package sources of my provider at stratoserver.net, and can be installed from there using the <code>apt</code> command. </p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo apt install nginx
patrick@h2866085:~$
</code></pre>



<p>After the installation is complete, the Nginx configuration files are in the <code>/etc/nginx</code> directory.<br></p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ ls -l /etc/nginx
insgesamt 64
drwxr-xr-x 2 root root 4096 Jan 10  2020 conf.d
-rw-r--r-- 1 root root 1077 Apr  6  2018 fastcgi.conf
-rw-r--r-- 1 root root 1007 Apr  6  2018 fastcgi_params
-rw-r--r-- 1 root root 2837 Apr  6  2018 koi-utf
-rw-r--r-- 1 root root 2223 Apr  6  2018 koi-win
-rw-r--r-- 1 root root 3957 Apr  6  2018 mime.types
drwxr-xr-x 2 root root 4096 Jan 10  2020 modules-available
drwxr-xr-x 2 root root 4096 Feb 20 12:46 modules-enabled
-rw-r--r-- 1 root root 1482 Apr  6  2018 nginx.conf
-rw-r--r-- 1 root root  180 Apr  6  2018 proxy_params
-rw-r--r-- 1 root root  636 Apr  6  2018 scgi_params
drwxr-xr-x 2 root root 4096 Feb 20 12:46 sites-available
drwxr-xr-x 2 root root 4096 Feb 20 12:46 sites-enabled
drwxr-xr-x 2 root root 4096 Feb 20 12:46 snippets
-rw-r--r-- 1 root root  664 Apr  6  2018 uwsgi_params
-rw-r--r-- 1 root root 3071 Apr  6  2018 win-utf
patrick@h2866085:~$ 
</code></pre>



<p>Then I edit <code>/etc/nginx/nginx.conf</code> as follows.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx$ sudo cat nginx.conf
##
# nginx.conf
##

user www-data; 
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
worker_connections 768;
}

http {

##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Timeout Settings
##
keepalive_timeout  30s; 
keepalive_requests 30;
send_timeout       30s;
##
# SSL Settings
##

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;

##
# Gzip Settings
##

gzip on;
gzip_vary on;
gzip_comp_level 2;
gzip_min_length  1000;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_disable "MSIE &#91;4-6] \."; 
##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

patrick@h2866085:/etc/nginx$
</code></pre>



<p>In the directory <code>/etc/nginx/sites-available</code> you find the server configuration files, and in <code>/etc/nginx/sites-enabled</code> you find the symlinks to the server configuration files that are enabled on your nginx server. </p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx$ ls -l sites-available
insgesamt 4
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
patrick@h2866085:/etc/nginx$ ls -l sites-enabled
insgesamt 0
lrwxrwxrwx 1 root root 34 Feb 20 12:46 default -&gt; /etc/nginx/sites-available/default
patrick@h2866085:/etc/nginx$ 
</code></pre>



<p>First we deactivate the default site by removing the default symlink in <code>/etc/nginx/sites-enabled</code>. </p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx$ cd sites-enabled
patrick@h2866085:/etc/nginx/sites-enabled$ ls -l
insgesamt 0
lrwxrwxrwx 1 root root 34 Feb 20 12:46 default -&gt; /etc/nginx/sites-available/default
patrick@h2866085:/etc/nginx/sites-enabled$ sudo unlink default
&#91;sudo] Passwort für patrick: 
patrick@h2866085:/etc/nginx/sites-enabled$ ls -l
insgesamt 0
patrick@h2866085:/etc/nginx/sites-enabled$ 
</code></pre>



<p>I want to use my nginx server as a reverse proxy for node application servers running on localhost. Therefore I create a new file <code>prod-reverse-proxy</code> in the directory <code>sites-available</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx/sites-enabled$ cd ..
patrick@h2866085:/etc/nginx$ 
patrick@h2866085:/etc/nginx$ cd sites-available
patrick@h2866085:/etc/nginx/sites-available$ ls -l
insgesamt 4
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
patrick@h2866085:/etc/nginx/sites-available$ sudo touch prod-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ ls -l
insgesamt 4
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
-rw-r--r-- 1 root root    0 Feb 21 08:31 prod-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ 
</code></pre>



<p>Then I put the following content into the file <code>prod-reverse-proxy</code> and link this file into <code>/etc/nginx/sites-enabled</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx/sites-available$ sudo nano prod-reverse-proxy
server {
        listen 80;
        listen &#91;::]:80;
        server_name digitaldocblog.com www.digitaldocblog.com;

        access_log /var/log/nginx/prod-reverse-access.log;
        error_log /var/log/nginx/prod-reverse-error.log;

        location / {

                proxy_set_header HOST $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_pass http://127.0.0.1:3000;
        }
}
patrick@h2866085:/etc/nginx/sites-available$ sudo ln -s /etc/nginx/sites-available/prod-reverse-proxy /etc/nginx/sites-enabled/prod-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ cd ..
patrick@h2866085:/etc/nginx$ ls -l sites-enabled
insgesamt 0
lrwxrwxrwx 1 root root 50 Feb 21 08:37 prod-reverse-proxy -&gt; /etc/nginx/sites-available/prod-reverse-proxy
patrick@h2866085:/etc/nginx$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
patrick@h2866085:/etc/nginx$ 
</code></pre>



<p>The production proxy server answers for the domain names <strong>digitaldocblog.com</strong> and <strong>www.digitaldocblog.com</strong> and listens on <strong>port 80</strong>. It passes all HTTP traffic arriving on port 80 to a server running on localhost port 3000 (127.0.0.1:3000). The configuration was verified successfully with <code>nginx -t</code>. </p>
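

<p>Once a node application is listening on port 3000, the proxy chain can be checked from the shell. A sketch, assuming <code>curl</code> is installed:</p>



<pre class="wp-block-code"><code># Ask nginx on port 80; the response should come from the
# node application behind the proxy
curl -I http://localhost/

# Compare with a direct request to the node app, bypassing nginx
curl -I http://127.0.0.1:3000/
</code></pre>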



<p>The following commands can be used to check, start and stop the nginx server.</p>



<pre class="wp-block-code"><code>sudo systemctl status nginx

sudo systemctl start nginx 

sudo systemctl stop nginx 

sudo systemctl restart nginx
</code></pre>



<p>I restart the nginx server and then check the status. The basic configuration is now complete.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx$ sudo systemctl restart nginx
patrick@h2866085:/etc/nginx$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2021-02-21 08:53:31 CET; 9s ago
     Docs: man:nginx(8)
  Process: 29759 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
  Process: 29761 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 29760 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Main PID: 29762 (nginx)
    Tasks: 5 (limit: 60)
   CGroup: /system.slice/nginx.service
           ├─29762 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─29763 nginx: worker process
           ├─29764 nginx: worker process
           ├─29765 nginx: worker process
           └─29766 nginx: worker process

Feb 21 08:53:31 h2866085.stratoserver.net systemd&#91;1]: Starting A high performance web server and a reverse proxy server...
Feb 21 08:53:31 h2866085.stratoserver.net systemd&#91;1]: Started A high performance web server and a reverse proxy server.
patrick@h2866085:/etc/nginx$ 
</code></pre>



<h2 class="wp-block-heading">Letsencrypt SSL Certificate</h2>



<p>To run your server with HTTPS you must install a certificate from an official Certification Authority (CA). <a href="https://letsencrypt.org/">Letsencrypt</a> is such a CA, and it issues certificates for free. Letsencrypt recommends <a href="https://certbot.eff.org/">certbot</a> for easy creation and management of domain certificates.</p>



<p>On the certbot site you can select your webserver and operating system. I choose nginx and Ubuntu 18.04 LTS bionic and get to a page with the <a href="https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx">install instructions</a>. Since I am not interested in installing certbot with snap, I follow the alternative installation instructions for the <a href="https://certbot.eff.org/docs/install.html#operating-system-packages">operating system packages</a>.</p>



<p>I install certbot and the certbot nginx plugin with <code>apt</code>.</p>



<pre class="wp-block-code"><code>$ sudo apt update
$ sudo apt-get install certbot
$ sudo apt-get install python-certbot-nginx
</code></pre>



<p>Then I run <code>certbot --nginx</code> to request the Letsencrypt certificate for my domain server names.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): p.rottlaender@icloud.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https:&#47;&#47;acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: digitaldocblog.com
2: www.digitaldocblog.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for digitaldocblog.com
http-01 challenge for www.digitaldocblog.com
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/prod-reverse-proxy
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/prod-reverse-proxy

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number &#91;1-2] then &#91;enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/prod-reverse-proxy
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/prod-reverse-proxy

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://digitaldocblog.com and
https://www.digitaldocblog.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=digitaldocblog.com
https://www.ssllabs.com/ssltest/analyze.html?d=www.digitaldocblog.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/digitaldocblog.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/digitaldocblog.com/privkey.pem
   Your cert will expire on 2021-05-24. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

patrick@h2866085:~$ 
</code></pre>



<p>Then I check the new directory and the files in <code>/etc/letsencrypt</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/letsencrypt$ sudo ls -l live
insgesamt 4
drwxr-xr-x 2 root root 4096 Feb 23 06:33 digitaldocblog.com
patrick@h2866085:/etc/letsencrypt$ sudo ls -l live/digitaldocblog.com
insgesamt 4
lrwxrwxrwx 1 root root  42 Feb 23 06:33 cert.pem -&gt; ../../archive/digitaldocblog.com/cert1.pem
lrwxrwxrwx 1 root root  43 Feb 23 06:33 chain.pem -&gt; ../../archive/digitaldocblog.com/chain1.pem
lrwxrwxrwx 1 root root  47 Feb 23 06:33 fullchain.pem -&gt; ../../archive/digitaldocblog.com/fullchain1.pem
lrwxrwxrwx 1 root root  45 Feb 23 06:33 privkey.pem -&gt; ../../archive/digitaldocblog.com/privkey1.pem
-rw-r--r-- 1 root root 682 Feb 23 06:33 README
patrick@h2866085:/etc/letsencrypt$ 
</code></pre>



<p>I also check the file <code>prod-reverse-proxy</code> in the directory <code>/etc/nginx/sites-available</code> and see that it has been updated by certbot. A new server section (first) has been defined; it listens on port 443 with SSL, and the paths to the SSL certificates have been added. The original server section (second) has been changed so that all traffic for the hosts <strong>digitaldocblog.com</strong> and <strong>www.digitaldocblog.com</strong> is redirected to the HTTPS version of the site (<code>return 301 https://$host$request_uri;</code>), while any other requests to port 80 are answered with <strong>404 Not Found</strong>. </p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ cd /etc/nginx/sites-available
patrick@h2866085:/etc/nginx/sites-available$ ls -l
insgesamt 12
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
-rw-r--r-- 1 root root 1089 Feb 23 06:34 prod-reverse-proxy

patrick@h2866085:/etc/nginx/sites-available$ sudo cat prod-reverse-proxy
server {
        server_name digitaldocblog.com www.digitaldocblog.com;

        access_log /var/log/nginx/prod-reverse-access.log;
        error_log /var/log/nginx/prod-reverse-error.log;

        location / {
                proxy_set_header HOST $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_pass http://127.0.0.1:3000;
        }

    listen &#91;::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/digitaldocblog.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/digitaldocblog.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {

    if ($host = www.digitaldocblog.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = digitaldocblog.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80;
        listen &#91;::]:80;
        server_name digitaldocblog.com www.digitaldocblog.com;
        return 404; # managed by Certbot
}

patrick@h2866085:/etc/nginx/sites-available$ 
</code></pre>



<p>To renew all certificates I must run the following command.</p>



<pre class="wp-block-code"><code>$ certbot renew
</code></pre>



<p>To renew all certificates automatically, I append the following line to my system crontab. </p>



<pre class="wp-block-code"><code>40 6 * * * root /usr/bin/certbot renew &gt; certrenew_log
</code></pre>



<p>The <code>certbot renew</code> command in this example runs daily at 6:40 am as <code>root</code> and logs its output to the file <code>certrenew_log</code> in <code>root</code>'s home directory. It checks whether a certificate on the server will expire within the next 30 days and renews it if so.</p>



<p>To do this, edit the <code>/etc/crontab</code> file as follows. </p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc$ sudo nano crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
15 * * * * root cd / &amp;&amp; run-parts --report /etc/cron.hourly
28 0 * * * root test -x /usr/sbin/anacron || ( cd / &amp;&amp; run-parts --report /etc/cron.daily )
9 5 * * 7 root test -x /usr/sbin/anacron || ( cd / &amp;&amp; run-parts --report /etc/cron.weekly )
52 0 20 * * root test -x /usr/sbin/anacron || ( cd / &amp;&amp; run-parts --report /etc/cron.monthly )

40 6 * * * root /usr/bin/certbot renew &gt; certrenew_log
#
patrick@h2866085:/etc$
</code></pre>



<h2 class="wp-block-heading">Separate Letsencrypt SSL Certificate for another proxy server</h2>



<p>I currently have 2 server files in my <code>/etc/nginx/sites-available</code> directory. </p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx/sites-available$ ls -l
insgesamt 8
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
-rw-r--r-- 1 root root 1089 Feb 23 06:34 prod-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ 
</code></pre>



<p>The file <code>default</code> is disabled. </p>



<p>The file <code>prod-reverse-proxy</code> is enabled. It runs as a reverse proxy for the hostnames <strong>digitaldocblog.com</strong> and <strong>www.digitaldocblog.com</strong> and already uses SSL (see above).</p>



<p>For my hostname <code>dev.digitaldocblog.com</code>, which is a valid subdomain of the domain <code>digitaldocblog.com</code>, I create a separate proxy server file <code>dev-reverse-proxy</code> in the directory <code>/etc/nginx/sites-available</code>, link this file into <code>/etc/nginx/sites-enabled</code>, test the nginx configuration and restart nginx.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx/sites-available$ sudo touch dev-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ ls -al
insgesamt 16
drwxr-xr-x 2 root root 4096 Feb 23 13:42 .
drwxr-xr-x 8 root root 4096 Feb 23 06:34 ..
-rw-r--r-- 1 root root 2416 Apr  6  2018 default
-rw-r--r-- 1 root root    0 Feb 23 13:42 dev-reverse-proxy
-rw-r--r-- 1 root root 1089 Feb 23 06:34 prod-reverse-proxy
patrick@h2866085:/etc/nginx/sites-available$ sudo nano dev-reverse-proxy
server {
        listen 80;
        server_name dev.digitaldocblog.com;

        access_log /var/log/nginx/dev-reverse-access.log;
        error_log /var/log/nginx/dev-reverse-error.log;

        location / {
                proxy_set_header HOST $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_pass http://127.0.0.1:3030;
        }
}
patrick@h2866085:/etc/nginx/sites-available$ cd ..
patrick@h2866085:/etc/nginx$ sudo ln -s /etc/nginx/sites-available/dev-reverse-proxy /etc/nginx/sites-enabled
patrick@h2866085:/etc/nginx$ ls -l sites-enabled
insgesamt 0
lrwxrwxrwx 1 root root 44 Feb 23 13:51 dev-reverse-proxy -&gt; /etc/nginx/sites-available/dev-reverse-proxy
lrwxrwxrwx 1 root root 45 Feb 22 19:33 prod-reverse-proxy -&gt; /etc/nginx/sites-available/prod-reverse-proxy
patrick@h2866085:/etc/nginx$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
patrick@h2866085:/etc/nginx$ sudo systemctl restart nginx
patrick@h2866085:/etc/nginx$
</code></pre>



<p>Then I run <code>certbot --nginx</code> to request a separate Letsencrypt SSL certificate only for the subdomain <code>dev.digitaldocblog.com</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/nginx$ sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: digitaldocblog.com
2: dev.digitaldocblog.com
3: www.digitaldocblog.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 2
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for dev.digitaldocblog.com
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/dev-reverse-proxy

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number &#91;1-2] then &#91;enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/dev-reverse-proxy

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://dev.digitaldocblog.com

You should test your configuration at:
https:&#47;&#47;www.ssllabs.com/ssltest/analyze.html?d=dev.digitaldocblog.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/dev.digitaldocblog.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/dev.digitaldocblog.com/privkey.pem
   Your cert will expire on 2021-05-24. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

patrick@h2866085:/etc/nginx$
</code></pre>



<p>Then I check the new directory and the files in <code>/etc/letsencrypt</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc/letsencrypt$ pwd
/etc/letsencrypt
patrick@h2866085:/etc/letsencrypt$ sudo ls -l live
&#91;sudo] Passwort für patrick: 
insgesamt 8
drwxr-xr-x 2 root root 4096 Feb 23 13:54 dev.digitaldocblog.com
drwxr-xr-x 2 root root 4096 Feb 23 06:33 digitaldocblog.com
patrick@h2866085:/etc/letsencrypt$ sudo ls -l live/dev.digitaldocblog.com
insgesamt 4
lrwxrwxrwx 1 root root  46 Feb 23 13:54 cert.pem -&gt; ../../archive/dev.digitaldocblog.com/cert1.pem
lrwxrwxrwx 1 root root  47 Feb 23 13:54 chain.pem -&gt; ../../archive/dev.digitaldocblog.com/chain1.pem
lrwxrwxrwx 1 root root  51 Feb 23 13:54 fullchain.pem -&gt; ../../archive/dev.digitaldocblog.com/fullchain1.pem
lrwxrwxrwx 1 root root  49 Feb 23 13:54 privkey.pem -&gt; ../../archive/dev.digitaldocblog.com/privkey1.pem
-rw-r--r-- 1 root root 682 Feb 23 13:54 README
patrick@h2866085:/etc/letsencrypt$ 
</code></pre>



<h2 class="wp-block-heading">Directory Setup for node applications</h2>



<p>I run my node applications on my server in the <code>/var/www/node</code> directory. The <code>node</code> directory is owned by <code>root</code>.</p>



<p>All files of my <strong>Node Dev Applications</strong> are copied to <code>/var/www/node/dev</code> as the user <code>patrick</code>, who also owns this directory. According to the configuration in <code>dev-reverse-proxy</code>, all HTTP requests coming in for the server name <code>dev.digitaldocblog.com</code> are passed to localhost 127.0.0.1 port 3030, so all dev applications must listen on <code>127.0.0.1</code> port <code>3030</code>.</p>



<p>All files of my <strong>Node Prod Application</strong> are copied to <code>/var/www/node/prod</code>, also as the user <code>patrick</code>, who owns this directory as well. According to the configuration in <code>prod-reverse-proxy</code>, all HTTP requests coming in for the server names <code>digitaldocblog.com</code> and <code>www.digitaldocblog.com</code> are passed to localhost 127.0.0.1 port 3000, so the prod application must listen on <code>127.0.0.1</code> port <code>3000</code>.</p>
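<p>The two paragraphs above can be sketched in code. The following is a minimal, hypothetical prod entry point (not the actual blog application): it binds to <code>127.0.0.1</code> port <code>3000</code>, the address <code>prod-reverse-proxy</code> passes traffic to, and reads the <code>X-Real-IP</code> header that the proxy configuration sets.</p>



<pre class="wp-block-code"><code>// Minimal sketch of a prod app entry point (hypothetical, not the
// real blog application). nginx forwards the original client address
// in the X-Real-IP header configured in prod-reverse-proxy.
const http = require('http');

function handler(req, res) {
  const clientIp = req.headers['x-real-ip'] || 'unknown';
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from prod, client ip: ' + clientIp + '\n');
}

const server = http.createServer(handler);

// Bind to the loopback address only, matching proxy_pass, so the app
// is reachable solely through the local nginx reverse proxy.
if (process.env.NODE_ENV === 'production') {
  server.listen(3000, '127.0.0.1');
}
</code></pre>



<p>A dev application would do the same on port <code>3030</code>, matching <code>dev-reverse-proxy</code>.</p>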



<pre class="wp-block-code"><code>patrick@h2866085:/etc$ cd /var
patrick@h2866085:/var$ ls
backups  cache  lib  local  lock  log  mail  opt  run  spool  tmp  www
patrick@h2866085:/var$ cd www
patrick@h2866085:/var/www$ ls -l
insgesamt 4
drwxr-xr-x 2 root root 4096 Feb 20 12:46 html
patrick@h2866085:/var/www$ sudo mkdir node
patrick@h2866085:/var/www$ ls -l
insgesamt 8
drwxr-xr-x 2 root root 4096 Feb 20 12:46 html
drwxr-xr-x 2 root root 4096 Feb 23 08:53 node
patrick@h2866085:/var/www$ cd node
patrick@h2866085:/var/www/node$ ls -l
insgesamt 0
patrick@h2866085:/var/www/node$ sudo mkdir prod
patrick@h2866085:/var/www/node$ sudo mkdir dev
patrick@h2866085:/var/www/node$ ls -l
insgesamt 8
drwxr-xr-x 2 root root 4096 Feb 23 08:56 dev
drwxr-xr-x 2 root root 4096 Feb 23 08:56 prod
patrick@h2866085:/var/www/node$ sudo chown patrick:patrick dev
patrick@h2866085:/var/www/node$ sudo chown patrick:patrick prod
patrick@h2866085:/var/www/node$ ls -al
insgesamt 16
drwxr-xr-x 4 root    root    4096 Feb 23 08:56 .
drwxr-xr-x 4 root    root    4096 Feb 23 08:53 ..
drwxr-xr-x 2 patrick patrick 4096 Feb 23 08:56 dev
drwxr-xr-x 2 patrick patrick 4096 Feb 23 08:56 prod
patrick@h2866085:/var/www/node$
</code></pre>



<h2 class="wp-block-heading">MongoDB Community Server Installation</h2>



<p>Most node applications also interact with a database. My preferred database is MongoDB. Go to the <a href="https://www.mongodb.com">MongoDB website</a> to read more about the free <a href="https://www.mongodb.com/try/download/community">MongoDB Community Server</a>, and see the <a href="https://docs.mongodb.com/manual/">MongoDB Documentation Site</a> for how to install MongoDB in your environment. </p>



<p>I select the <a href="https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/">installation instructions</a> of the MongoDB Community Server for my Ubuntu Linux 18.04 bionic LTS system.<br></p>



<p>Then I import the public key used by the package management system to verify the package source.</p>



<pre class="wp-block-code"><code>$ wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
</code></pre>



<p>Then I create the list file <code>/etc/apt/sources.list.d/mongodb-org-4.4.list</code> for my version of Ubuntu.</p>



<pre class="wp-block-code"><code>$ echo "deb &#91; arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
</code></pre>



<p>I reload the local package database with <code>apt-get update</code> and install a specific release including all relevant component packages with <code>apt-get install</code> such as</p>



<ul class="wp-block-list"><li>server</li><li>shell</li><li>mongos</li><li>tools</li></ul>



<pre class="wp-block-code"><code>$ sudo apt-get update
$ sudo apt-get install -y mongodb-org=4.4.4 mongodb-org-server=4.4.4 mongodb-org-shell=4.4.4 mongodb-org-mongos=4.4.4 mongodb-org-tools=4.4.4
</code></pre>



<p>MongoDB can be checked, started, stopped and restarted with the following commands.</p>



<pre class="wp-block-code"><code>$ sudo service mongod status

$ sudo service mongod start

$ sudo service mongod stop

$ sudo service mongod restart
</code></pre>



<p>After the installation, the MongoDB server must be started.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ sudo service mongod status
● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: https://docs.mongodb.org/manual

patrick@h2866085:~$ sudo service mongod start
patrick@h2866085:~$ sudo service mongod status
● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-02-23 08:49:14 CET; 3s ago
     Docs: https://docs.mongodb.org/manual
 Main PID: 27830 (mongod)
   CGroup: /system.slice/mongod.service
           └─27830 /usr/bin/mongod --config /etc/mongod.conf

Feb 23 08:49:14 h2866085.stratoserver.net systemd&#91;1]: Started MongoDB Database Server.
patrick@h2866085:~$
</code></pre>



<p>Then I create the admin user <code>myUserAdmin</code> in the <code>admin</code> database.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ mongo
MongoDB shell version v4.4.4
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("8df6776d-4228-43f5-9222-65893d6f146c") }
MongoDB server version: 4.4.4
---
The server generated these startup warnings when booting: 
        2021-02-23T08:49:15.047+01:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
        2021-02-23T08:49:15.718+01:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
        2021-02-23T08:49:15.718+01:00: You are running in OpenVZ which can cause issues on versions of RHEL older than RHEL6
---
---
        Enable MongoDB's free cloud-based monitoring service, which will then receive and display
        metrics about your deployment (disk utilization, CPU, operation statistics, etc).

        The monitoring data will be available on a MongoDB website with a unique URL accessible to you
        and anyone you share the URL with. MongoDB may use this information to make product
        improvements and to suggest MongoDB products and deployment options to you.

        To enable free monitoring, run the following command: db.enableFreeMonitoring()
        To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
&gt; use admin
switched to db admin
&gt; db
admin
&gt; db.createUser({ user: "myUserAdmin", pwd: "yourAdminPassword", roles: &#91;{ role: "userAdminAnyDatabase", db: "admin" }, {"role" : "readWriteAnyDatabase", "db" : "admin"}] })
Successfully added user: {
        "user" : "myUserAdmin",
        "roles" : &#91;
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                },
                {
                        "role" : "readWriteAnyDatabase",
                        "db" : "admin"
                }
        ]
}
&gt; db.auth("myUserAdmin", "yourAdminPassword")
1
&gt; show users
{
        "_id" : "admin.myUserAdmin",
        "userId" : UUID("6b32b520-090b-411b-9569-4ecc70109707"),
        "user" : "myUserAdmin",
        "db" : "admin",
        "roles" : &#91;
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                },
                {
                        "role" : "readWriteAnyDatabase",
                        "db" : "admin"
                }
        ],
        "mechanisms" : &#91;
                "SCRAM-SHA-1",
                "SCRAM-SHA-256"
        ]
}
&gt; exit
bye
patrick@h2866085:~$
</code></pre>



<p>Then I open the <code>/etc/mongod.conf</code> file in my editor and add the following lines, if they are not already present, to enable authorization.</p>



<pre class="wp-block-code"><code>#security:
security:
  authorization: enabled
</code></pre>



<p>Then I restart the mongoDB service.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc$ sudo service mongod restart
</code></pre>



<p>I create a new database <code>dev-bookingsystem</code> and the database user <code>myUserDevBookingsystem</code> as its owner.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc$ mongo
MongoDB shell version v4.4.4
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("48d8e3cb-8076-47cb-8166-8e48cfae3d9a") }
MongoDB server version: 4.4.4
&gt; use admin
switched to db admin
&gt; db.auth("myUserAdmin", "yourAdminPassword")
1
&gt; use dev-bookingsystem
switched to db dev-bookingsystem
&gt; db.createUser({ user: "myUserDevBookingsystem", pwd: "yourDbUserPassword", roles: &#91;{ role: "dbOwner", db: "dev-bookingsystem" }] })
Successfully added user: {
        "user" : "myUserDevBookingsystem",
        "roles" : &#91;
                {
                        "role" : "dbOwner",
                        "db" : "dev-bookingsystem"
                }
        ]
}
&gt; db
dev-bookingsystem
&gt; exit
bye
</code></pre>



<p>Then I create a default collection in <code>dev-bookingsystem</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/etc$ mongo
MongoDB shell version v4.4.4
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d3494272-8651-4c7e-a781-231f48a60ec4") }
MongoDB server version: 4.4.4
&gt; use admin
switched to db admin
&gt; db.auth("myUserAdmin", "yourAdminPassword")
1
&gt; show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
&gt; use dev-bookingsystem
switched to db dev-bookingsystem
&gt; db.runCommand( { create: "col_default" } )
{ "ok" : 1 }
&gt; show collections
col_default
&gt; show dbs
admin              0.000GB
config             0.000GB
dev-bookingsystem  0.000GB
local              0.000GB
&gt; db
dev-bookingsystem
&gt; exit
bye
patrick@h2866085:/etc$
</code></pre>



<p>I do the same thing for my production database <code>prod-digitaldocblog</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:~$ mongo
MongoDB shell version v4.4.4
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("2402f47f-d1dd-4399-931e-b327ea1cf4b4") }
MongoDB server version: 4.4.4
&gt; use admin
switched to db admin
&gt; db.auth("myUserAdmin", "yourAdminPassword")
1
&gt; show dbs
admin              0.000GB
config             0.000GB
dev-bookingsystem  0.000GB
local              0.000GB
&gt; use prod-digitaldocblog
switched to db prod-digitaldocblog
&gt; db.createUser({ user: "myUserProdDigitaldocblog", pwd: "yourDbUserPassword", roles: &#91;{ role: "dbOwner", db: "prod-digitaldocblog" }] })
Successfully added user: {
        "user" : "myUserProdDigitaldocblog",
        "roles" : &#91;
                {
                        "role" : "dbOwner",
                        "db" : "prod-digitaldocblog"
                }
        ]
}
&gt; db
prod-digitaldocblog
&gt; show dbs
admin              0.000GB
config             0.000GB
dev-bookingsystem  0.000GB
local              0.000GB
&gt; db
prod-digitaldocblog
&gt; db.runCommand( { create: "col_default" } )
{ "ok" : 1 }
&gt; show collections
col_default
&gt; show dbs
admin                0.000GB
config               0.000GB
dev-bookingsystem    0.000GB
local                0.000GB
prod-digitaldocblog  0.000GB
&gt; exit
bye
patrick@h2866085:~$
</code></pre>



<p>Now I can connect each app to its database with the following connection string.</p>



<pre class="wp-block-code"><code>mongodb://&lt;youruser&gt;:&lt;yourpassword&gt;@localhost/&lt;yourdatabase&gt;
</code></pre>
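<p>Because the credentials are embedded in the URI, special characters in the password (for example <code>@</code> or <code>/</code>) must be percent-encoded. A small, hypothetical helper that assembles this connection string:</p>



<pre class="wp-block-code"><code>// Hypothetical helper: assembles the connection string shown above.
// encodeURIComponent keeps special characters in the credentials
// (for example '@' or '/') from corrupting the URI.
function buildMongoUri(user, password, database, host = 'localhost') {
  return 'mongodb://' + encodeURIComponent(user) + ':' +
    encodeURIComponent(password) + '@' + host + '/' + database;
}
</code></pre>



<p>For example, <code>buildMongoUri('u', 'p@ss', 'db')</code> yields <code>mongodb://u:p%40ss@localhost/db</code>.</p>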



<h2 class="wp-block-heading">Deploy the production node app</h2>



<p>To deploy my node app into production I first copy the <code>package.json</code> file into the <code>/var/www/node/prod</code> directory and run <code>npm install</code> to install all dependencies. The directory <code>node_modules</code> now exists in my <code>/var/www/node/prod</code> directory.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/var/www/node/prod$ ls -l
insgesamt 116
-rw-r--r--   1 patrick patrick   615 Jan 18 14:36 package.json
patrick@h2866085: npm install
patrick@h2866085:/var/www/node/prod$ ls -l
insgesamt 116
drwxrwxr-x 220 patrick patrick 12288 Feb 23 13:15 node_modules
-rw-r--r--   1 patrick patrick   615 Jan 18 14:36 package.json
-rw-rw-r--   1 patrick patrick 67705 Feb 23 13:15 package-lock.json
patrick@h2866085:
</code></pre>



<p>Then I copy all the application files on the server into the directory <code>/var/www/node/prod</code>.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/var/www/node/prod$ ls -al
insgesamt 132
drwxr-xr-x  10 patrick patrick  4096 Feb 23 13:17 .
drwxr-xr-x   4 root    root     4096 Feb 23 08:56 ..
drwxr-xr-x   3 patrick patrick  4096 Feb 23 13:16 app
drwxr-xr-x   2 patrick patrick  4096 Feb 23 13:16 config
-rw-------   1 patrick patrick   167 Feb 19  2020 .env
-rw-r--r--   1 patrick patrick    33 Feb 19  2020 .env.example
drwxr-xr-x   2 patrick patrick  4096 Feb 23 13:16 middleware
drwxr-xr-x   2 patrick patrick  4096 Feb 23 13:16 modules
drwxrwxr-x 220 patrick patrick 12288 Feb 23 13:15 node_modules
-rw-r--r--   1 patrick patrick   615 Jan 18 14:36 package.json
-rw-rw-r--   1 patrick patrick 67705 Feb 23 13:15 package-lock.json
-rw-r--r--   1 patrick patrick  1281 Mai 17  2020 prod.digitaldocblog.js
drwxr-xr-x   2 patrick patrick  4096 Feb 23 13:17 routes
drwxr-xr-x   7 patrick patrick  4096 Feb 23 13:17 static
drwxr-xr-x   2 patrick patrick  4096 Feb 23 13:17 views
patrick@h2866085:/var/www/node/prod$ nano .env
</code></pre>



<p>I edit my environment variables in the <code>.env</code> file. In particular, I configure the server port and the database connection string here.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/var/www/node/prod$ cat .env
port= 3000
host=127.0.0.1
mongodbpath=mongodb://&lt;youruser&gt;:&lt;yourpassword&gt;@localhost/&lt;yourdatabase&gt;
jwtkey=&lt;yourjwtkey&gt;
patrick@h2866085:/var/www/node/prod$
</code></pre>



<p>Finally I start the production application with PM2.</p>



<pre class="wp-block-code"><code>patrick@h2866085:/var/www/node/prod$ pm2 start prod.digitaldocblog.js
</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mongodb and Mongoose on Ubuntu Linux</title>
		<link>https://digitaldocblog.com/database/mongodb-and-mongoose-on-ubuntu-linux/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 28 Feb 2020 08:00:00 +0000</pubDate>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[MongoDB]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=105</guid>

					<description><![CDATA[In this article I will show you how to install mongodb on a Ubuntu Linux System. For this I use the step by step installation guide from mongodb. Since the&#8230;]]></description>
										<content:encoded><![CDATA[
<p>In this article I will show you how to install MongoDB on an Ubuntu Linux system. For this I use the step-by-step installation guide from <a href="https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/">mongodb</a>.</p>



<p>Since the MongoDB packages are built for specific Ubuntu releases, you first have to find out which Ubuntu version your system is running. This is done by entering the following command in the terminal.</p>



<pre class="wp-block-code"><code>:# lsb_release -dc

Description:	Ubuntu 18.04.3 LTS
Codename:		bionic

</code></pre>



<p>So my system is an Ubuntu 18.04.3 LTS (Long Term Support) with the code name <em>bionic</em>. This information is important when the list file entry is created, as shown in a following section.</p>



<p>The installation itself is carried out with the package manager <code>apt</code>.</p>



<p>There is also a <code>mongodb</code> package maintained by Ubuntu that could be installed from the standard apt repository with <code>apt</code> or <code>apt-get</code>. <code>mongodb</code> is not the desired package and should, if it was installed on the system beforehand, be removed with <code>apt-get remove</code>.</p>



<p>It is therefore recommended to install the <code>mongodb-org</code> package managed by MongoDB Inc.</p>



<p>Since this <code>mongodb-org</code> package comes from a source that the system has not yet verified, it is strongly recommended to import the GPG verification key offered by MongoDB Inc. so that <code>apt</code> can check the authenticity of the package. The corresponding public key must therefore be imported from the MongoDB server.</p>



<p>From a terminal, issue the following command to import the MongoDB public GPG Key used by the Ubuntu <code>apt</code> program.</p>



<pre class="wp-block-code"><code>wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>note</strong>: <code>wget</code> is a program that downloads files from FTP or HTTP servers from a terminal. It is often used to start downloads from within a shell script, but a wget command can also be entered directly in the terminal to fetch data from servers.</p><p>The <code>-q</code> option prevents <code>wget</code> from printing information to the console. The option <code>-O &lt;FILE&gt;</code> writes the content of the downloaded file to a file on the local file system. If you specify <code>-</code> after the <code>-O</code> option, <code>wget</code> writes the downloaded content directly to the standard output (stdout) of the terminal. The latter is useful, for example, for reading GPG key files when these keys are to be added directly to the keyring.</p><p>And that is exactly what we do with the <code>apt-key</code> command. <code>apt-key</code> offers various commands to perform actions on the keyring.</p><p>For example, the <code>apt-key add</code> command adds a new key to the keyring. This key can either be read from a file or piped from a previous command via the terminal&#8217;s standard output (stdout) to the standard input (stdin) of <code>apt-key add</code>. The latter is requested by the <code>-</code> at the end of the command.</p></blockquote>



<p>Since this <code>mongodb-org</code> package is located on external MongoDB HTTP or FTP servers, these external servers must be made known to the system as trusted sources by making a corresponding list file entry in the <code>/etc/apt/sources.list.d</code> directory.</p>



<p>Create the list file <code>/etc/apt/sources.list.d/mongodb-org-4.2.list</code> for your version of Ubuntu.</p>



<pre class="wp-block-code"><code>echo "deb &#91; arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list

</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>note</strong>: With <code>echo</code>, strings and variables can be written line by line to the standard terminal output <code>stdout</code>. The string to be output is specified either with single quotation marks <code>'string'</code> or with double quotation marks <code>"string"</code>. Thus the command <code>echo 'this is an example'</code> prints &#8220;this is an example&#8221; directly in the terminal.</p><p>The pipe operator <code>|</code> belongs to the terminal redirections and forwards the output of a command, instead of to <code>stdout</code>, directly to <code>stdin</code> of another command. The subsequent command can then process the output of the first command. In this case the output of the string is forwarded to the <code>tee</code> command.</p><p><code>tee</code> reads from the standard input <code>stdin</code> and duplicates the read data. The data is then forwarded once to a file and once to the standard output <code>stdout</code>.</p></blockquote>



<p>Now the GPG key has been added to the keyring and apt can use it to verify the mongodb-org package. In addition, the source of the installation packages is known to the system as a trusted external source via the list file.</p>



<p>Installation with <code>apt</code> can now begin.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>note:</strong> APT (Advanced Packaging Tool) is the command line tool for managing installation packages on Debian Linux. Ubuntu and other Linux distributions derived from Debian also use APT. <code>apt</code> is the successor to <code>apt-get</code> and offers a slightly different, wider range of functions. The commands of <code>apt</code> are essentially the same as those of <code>apt-get</code>.</p></blockquote>



<p>Issue the following commands in the terminal.</p>



<pre class="wp-block-code"><code>:# sudo apt-get update

:# sudo apt-get install -y mongodb-org

</code></pre>



<p>The repository index is updated with <code>update</code>. This should always be done before each installation. The <code>-y</code> option ensures that all interactive questions are answered with <em>yes</em> during the installation.</p>



<p>After the installation issue the following commands to check the status, start, stop and restart the <em>mongod</em>.</p>



<pre class="wp-block-code"><code>:# sudo service mongod status

:# sudo service mongod start

:# sudo service mongod stop

:# sudo service mongod restart

</code></pre>



<p>MongoDB comes with three standard databases preinstalled:</p>



<ul class="wp-block-list"><li>admin</li><li>config</li><li>local</li></ul>



<p>If you want to use MongoDB for a project, you set up your own database. Databases in MongoDB contain collections, and collections contain the data. So, for example, you could have the following structure for your project:</p>



<p><code>your-db -&gt; your-db-users-collection -&gt; your-db-users-collection-user-data</code></p>



<p>MongoDB does not enable client authentication by default. So you could now set up your new database, such as <em>your-db</em>, and connect to it without authentication. This would mean that anyone who can reach the instance is able to access your data.</p>



<p>Therefore you should enable client authentication and follow the steps below. Here we will implement a very simple security concept.</p>



<p>The following steps include the setup of an admin user in the standard admin db and the creation of a project database with a separate user. The admin user will be able to create databases and manage users for all databases in the MongoDB instance. Each separate project database gets a separate user, and this user will be the owner and can do everything.</p>



<p>First we create an admin user in the mongo shell. After creating that admin user, we test whether authentication works with <code>db.auth</code> and <code>exit</code> the mongo shell.</p>



<pre class="wp-block-code"><code>:# mongo

&gt; use admin
switched to db admin

&gt; db
admin

&gt; db.createUser({ user: "myUserAdmin", pwd: "adminpassword", roles: &#91;{ role: "userAdminAnyDatabase", db: "admin" }, {"role" : "readWriteAnyDatabase", "db" : "admin"}] })

&gt; db.auth("myUserAdmin", "adminpassword")
1

&gt; show users
{
	"_id" : "admin.myUserAdmin",
	"userId" : UUID("5cbe2fc4-1e54-4c2d-89d1-317340429571"),
	"user" : "myUserAdmin",
	"db" : "admin",
	"roles" : &#91;
		{
			"role" : "userAdminAnyDatabase",
			"db" : "admin"
		},
		{
			"role" : "readWriteAnyDatabase",
			"db" : "admin"
		}
	],
	"mechanisms" : &#91;
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

&gt; exit

</code></pre>



<p>In this case the admin user got the roles <code>userAdminAnyDatabase</code> and <code>readWriteAnyDatabase</code>. With <code>userAdminAnyDatabase</code> the admin user is able to create, update, and delete users on all databases except local and config.</p>



<p><code>readWriteAnyDatabase</code> provides all the privileges to read plus the ability to modify data on all non-system collections.</p>



<p>Then open the <code>/etc/mongod.conf</code> file in your editor and add the following lines if they do not already exist. In particular, please note the <code>security:</code> option (<em>security: authorization: enabled</em>).</p>



<pre class="wp-block-code"><code># mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1


#processManagement:

#security:
security:
  authorization: enabled

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options:

#auditLog:

#snmp:

</code></pre>



<p>Please note the <code>net:</code> option (<em>net: port: 27017 bindIp: 127.0.0.1</em>). For security reasons it is necessary that the MongoDB instance listens only on the loopback interface (127.0.0.1). MongoDB may only accept connections from this interface and never from outside, such as from an interface that is connected to the internet. Therefore <code>bindIp</code> must be configured with <code>127.0.0.1</code> in <em>mongod.conf</em>, as in the configuration above.</p>



<p>Then you should <em>stop</em> and <em>start</em> or simply <em>restart</em> your mongod.</p>



<pre class="wp-block-code"><code>:# sudo service mongod restart

</code></pre>



<p>Authentication is now enabled and we can log on as the admin user to create a new project database with a user that will be its owner.</p>



<pre class="wp-block-code"><code>:# mongo

&gt; use admin
switched to db admin

&gt; db.auth("myUserAdmin", "adminpassword")
1

&gt; use yourdatabase
switched to db yourdatabase

&gt; db.createUser({ user: "youruser", pwd: "yourpassword", roles: &#91;{ role: "dbOwner", db: "yourdatabase" }] })

&gt; db.auth("youruser", "yourpassword")
1

&gt; exit

</code></pre>



<p>The database owner can perform any administrative action on <em>yourdatabase</em> (the database just created by the admin). The owner role is something like a master role and combines the privileges granted by the readWrite, dbAdmin, and userAdmin roles.</p>



<p>Now you can connect to <em>yourdatabase</em> using the following connection string.</p>



<pre class="wp-block-code"><code>mongodb://youruser:yourpassword@localhost/yourdatabase

</code></pre>



<h3 class="wp-block-heading">Install Mongoose local package in a project directory</h3>



<p>Mongoose offers a schema-based solution to apply modeling and structure to application data and then save the data in MongoDB. Mongoose is an open-source Node.js package and is ideal for the development of database-backed Node.js web applications, such as Express apps.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>note:</strong> This article focuses on the installation and initial setup of MongoDB. Mongoose is required for the development of Node.js applications and is only briefly described here. Schema and data modeling as well as CRUD application examples are explained in one of the next articles in the context of an Express application.</p></blockquote>



<p>Suppose we have created a project directory called <code>express</code> on our computer. We now change to this directory and list all currently installed npm packages with the command <code>npm list --depth 0</code>.</p>



<pre class="wp-block-code"><code>:# cd /software/dev/express

:# npm list --depth 0

</code></pre>



<p>In the project directory, the following command is executed in the terminal to install mongoose for your project.</p>



<pre class="wp-block-code"><code>:# npm install mongoose

server@1.0.0 /home/patrick/software/dev/express
`-- mongoose@5.8.9  extraneous

</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><strong>note:</strong> If you have problems installing packages, please create the package.json file with all the dependencies and then run <code>npm install</code>.</p></blockquote>



<pre class="wp-block-code"><code>:# cat package.json

{
  "name": "server",
  "version": "1.0.0",
  "private": true,
  "description": "this is test node application",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;&amp; exit 1",
    "start": "node server.js"
  },
  "author": "Patrick Rottländer",
  "license": "ISC",
  "dependencies": {
    "envy": "^2.0.0",
    "express": "^4.17.1",
    "mongoose": "^5.8.3"
  },
  "devDependencies": {}
}

:# npm install

</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Create a user in Ubuntu 18.04 Linux</title>
		<link>https://digitaldocblog.com/server/create-a-user-in-ubuntu-1804-linux/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 09 Feb 2020 07:00:00 +0000</pubDate>
				<category><![CDATA[Server]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://digitaldocblog.com/?p=102</guid>

					<description><![CDATA[A user can be created in ubuntu Linux with the command useradd or adduser. I use adduser because with this perl script, in addition to creating the user in /etc/passwd&#8230;]]></description>
										<content:encoded><![CDATA[
<p>A user can be created in ubuntu Linux with the command <code>useradd</code> or <code>adduser</code>.</p>



<p>I use <code>adduser</code> because this perl script, in addition to creating the user in <code>/etc/passwd</code> and creating a dedicated user group in <code>/etc/group</code>, also creates the home directory in <code>/home</code> and copies default files from <code>/etc/skel</code> if necessary.</p>



<p>As root run the command <code>adduser mynewuser</code>.</p>



<pre class="wp-block-code"><code>:# adduser mynewuser

Adding user `mynewuser' ...
Adding new group `mynewuser' (1001) ...
Adding new user `mynewuser' (1001) with group `mynewuser' ...
Creating home directory `/home/mynewuser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for mynewuser
Enter the new value, or press ENTER for the default
	Full Name &#91;]: Tech User
	Room Number &#91;]:
	Work Phone &#91;]:
	Home Phone &#91;]:
	Other &#91;]: This user is only for tech
Is the information correct? &#91;Y/n] Y

</code></pre>



<p>This creates the user in <code>/etc/passwd</code>, creates the user group in <code>/etc/group</code>, and creates the home directory in <code>/home</code>.</p>



<pre class="wp-block-code"><code>:# grep mynewuser /etc/passwd

mynewuser:x:1001:1001:Tech User,,,,This user is only for tech:/home/mynewuser:/bin/bash


:# ls -l /home
drwxr-xr-x 2 mynewuser mynewuser 4096 Jan 23 05:31 mynewuser

:# cat /etc/group
....
....
mynewuser:x:1001:

</code></pre>



<p>The last field of the entry in <code>/etc/passwd</code> specifies the shell of the new user. The newly created user gets the bash shell, which is completely fine for me. However, if there is a need to change the user&#8217;s shell, this can be done with the command <code>usermod --shell</code>.</p>



<pre class="wp-block-code"><code>:# cat /etc/shells

# /etc/shells: valid login shells
/bin/sh
/bin/dash
/bin/bash
/bin/rbash
/usr/bin/screen
/bin/tcsh
/usr/bin/tcsh

:# usermod --shell /bin/sh mynewuser

:# grep mynewuser /etc/passwd

mynewuser:x:1001:1001:Tech User,,,,This user is only for tech:/home/mynewuser:/bin/sh

</code></pre>



<p>This newly created user can create files in their home directory and access files. However, they cannot copy files to directories owned by root.</p>



<pre class="wp-block-code"><code>:# nano testfile
:# ls -l
total 4
-rw-rw-r-- 1 mynewuser mynewuser 24 Jan 23 06:20 testfile

:# cp testfile /etc/nginx/sites-available/testfile
cp: cannot create regular file '/etc/nginx/sites-available/testfile': Permission denied

:# sudo cp testfile /etc/nginx/sites-available/testfile
&#91;sudo] password for mynewuser:
mynewuser is not in the sudoers file.  This incident will be reported.

</code></pre>



<p>Sudo allows the admin of the system to give certain other users on the system (or groups of users) the ability to run some (or all) commands as root.</p>



<p>In my sudo configuration it is already preconfigured that all users in the sudo group can execute all commands. Please note the corresponding entry in the sudoers file: <code>%sudo	ALL=(ALL:ALL) ALL</code>.</p>



<pre class="wp-block-code"><code>:# cat /etc/sudoers

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults	env_reset
Defaults	mail_badpass
Defaults	secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root	ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

# Allow members of group sudo to execute any command
%sudo	ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:

#includedir /etc/sudoers.d

</code></pre>



<p>To make the new user a member of the sudo group, the following command must be executed.</p>



<pre class="wp-block-code"><code>:# sudo usermod -a -G sudo mynewuser

</code></pre>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
