
Debian 12 Bookworm

0. Introduction

This guide covers a production installation using the official Debian 12 packages.

Note: Before proceeding with installation, please review the hardware requirements for running a Compute Resource Node, including special requirements for features like confidential computing.

1. Requirements

To run an official Aleph Cloud Compute Resource Node (CRN), you will need the following resources:

  • CPU (2 options):
    • Min. 8 cores / 16 threads, 3.0 GHz+ CPU (gaming CPU for fast boot-up of microVMs)
    • Min. 12 cores / 24 threads, 2.4 GHz+ CPU (datacenter CPU for multiple concurrent loads)
    • For confidential computing, specific AMD EPYC™ processors are required
  • RAM: 64 GB
  • STORAGE: 1 TB (NVMe SSD preferred; a fast datacenter HDD is possible under certain conditions, since you'll want a big and fast cache)
  • NETWORK: Minimum 500 MB/s symmetrical, a dedicated IPv4 address, and a /64 or larger IPv6 subnet.
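Before installing, you can compare your host against these minimums. The script below is a hypothetical pre-flight check, not part of aleph-vm; it only reads standard Linux sources (`nproc`, `/proc/meminfo`, `df`):

```shell
# Hypothetical pre-flight check (not part of aleph-vm): compare this host
# against the CRN minimums listed above.
threads=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "CPU threads: $threads (need >= 16)"
echo "RAM: ${ram_gb} GB (need >= 64)"
echo "Free disk on /: ${disk_gb} GB (need ~1000 on the aleph-vm partition)"
```

Network bandwidth and the IPv6 subnet cannot be checked locally; verify those with your hosting provider.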

You will need a public domain name with access to add TXT and wildcard records.

💡 This documentation uses the placeholder domain name vm.example.org. Replace it with your own domain where needed.

2. Installation

Run the following commands as root:

First install the VM-Connector using Docker:

shell
apt update
apt upgrade -y
apt install -y docker.io apparmor-profiles
docker run -d -p 127.0.0.1:4021:4021/tcp --restart=always --name vm-connector alephim/vm-connector:alpha

Then install the VM-Supervisor using the official Debian 12 package. The procedure is similar for updates.

shell
# Download the latest release
release=$(curl -s https://api.github.com/repos/aleph-im/aleph-vm/releases/latest | awk -F'"' '/"tag_name":/ {print $4}')
wget -P /opt/ https://github.com/aleph-im/aleph-vm/releases/download/${release}/aleph-vm.debian-12.deb
# Install it
apt install -y /opt/aleph-vm.debian-12.deb

Reboot if required (e.g. after a kernel update).
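To know whether a reboot is pending, you can check the flag file that some Debian/Ubuntu package hooks create after an upgrade. This is a sketch and only a hint; the flag requires a package such as needrestart or update-notifier-common to be present:

```shell
# Sketch (assumes Debian/Ubuntu conventions): some package hooks create
# this flag file when an upgrade requires a reboot.
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required by:"
    cat /var/run/reboot-required.pkgs 2>/dev/null
else
    echo "No reboot flag found (a kernel update may still warrant one)"
fi
```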

3. Configuration

Update the configuration in /etc/aleph-vm/supervisor.env using your favourite editor.

The minimum necessary configuration is:

  • Setting the hostname via ALEPH_VM_DOMAIN_NAME.
  • Overriding the Domain Name Servers and the default network interface if they have not been detected properly.

It is also recommended to enable full instance support.

If your node has the required hardware (e.g. for confidential computing), see the detailed instructions on enabling support for those features.

Hostname

You will want to insert your domain name in the form of:

ALEPH_VM_DOMAIN_NAME=vm.example.org

Network configuration

IPv6 address pool

Each virtual machine receives its own IPv6 address; the range of IPv6 addresses usable by the virtual machines must be specified manually.

According to the IPv6 specifications, a system is expected to receive an IPv6 prefix with a /64 mask, and all addresses inside that prefix should simply be routed to the host.

The option takes the form of:

ALEPH_VM_IPV6_ADDRESS_POOL="2a01:4f8:171:787::/64"

Assuming your hosting provider follows the specification, the procedure is the following:

  1. Obtain the IPv6 address of your node.
  2. Remove the trailing number after :: if present, for example 2a01:4f8:171:787::2/64 becomes 2a01:4f8:171:787::/64.
  3. Add the IPv6 range you obtained under the setting ALEPH_VM_IPV6_ADDRESS_POOL in the configuration.
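The steps above can be sketched as a short shell snippet. The example address below is the placeholder used throughout this section; replace it with the global address reported by `ip -6 addr show scope global` on your node:

```shell
# Sketch of steps 1-3: strip the trailing host part after "::" to obtain
# the /64 pool. Replace the example with your node's global IPv6 address.
addr="2a01:4f8:171:787::2/64"
pool=$(echo "$addr" | sed 's|::[0-9a-f]*/64|::/64|')
echo "ALEPH_VM_IPV6_ADDRESS_POOL=\"$pool\""
# -> ALEPH_VM_IPV6_ADDRESS_POOL="2a01:4f8:171:787::/64"
```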

Network Interface

The default network interface is detected automatically from the IP routes. You can configure the default interface manually instead by adding:

ALEPH_VM_NETWORK_INTERFACE=enp0s1

(don't forget to replace enp0s1 with the name of your default network interface).
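To find the name of your default interface, you can read it from the default route, which is the same information the supervisor's auto-detection relies on (a sketch; the exact detection logic in aleph-vm may differ):

```shell
# The interface name is the 5th field of the default route line, e.g.
#   default via 192.0.2.1 dev enp0s1 proto static
iface=$(ip route show default | awk '{print $5; exit}')
echo "ALEPH_VM_NETWORK_INTERFACE=$iface"
```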

Domain Name Servers (optional)

You can configure the DNS resolver manually by using one of the following options:

ALEPH_VM_DNS_RESOLUTION=resolvectl
ALEPH_VM_DNS_RESOLUTION=resolv.conf

💡 You can instead specify the DNS resolvers used by the VMs using ALEPH_VM_DNS_NAMESERVERS=["1.2.3.4", "5.6.7.8"].

Volumes and partitions (optional)

Two directories are used to store data from the network:

  • /var/lib/aleph/vm contains all the execution and persistent data.
  • /var/cache/aleph/vm contains data downloaded from the network.

These two directories must be stored on the same partition. That partition must meet the minimum requirements specified for a CRN.

💡 This is required due to the software using hard links to optimize performance and disk usage.
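Since hard links cannot cross filesystem boundaries, you can verify that both directories live on the same filesystem by comparing their device IDs (a sketch using `stat`, run after the package has created the directories):

```shell
# Hard links only work within one filesystem: both directories must
# report the same device ID.
dev_lib=$(stat -c %d /var/lib/aleph/vm)
dev_cache=$(stat -c %d /var/cache/aleph/vm)
if [ "$dev_lib" = "$dev_cache" ]; then
    echo "OK: same partition"
else
    echo "ERROR: the two directories are on different partitions"
fi
```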

Applying changes

Finally, restart the service:

shell
systemctl restart aleph-vm-supervisor

4. Install a Reverse Proxy

A reverse-proxy is required for production use. It allows:

  • Secure connections to aleph-vm using HTTPS
  • A different domain name for each VM function (if using a wildcard certificate)

HAProxy is required to support the custom IPv4 domain name feature (previously, Caddy was recommended). Certbot needs to be installed alongside HAProxy to generate the SSL certificates.

0. Enable the configuration file distributed with aleph-vm

Rename /etc/haproxy/haproxy-aleph.cfg to /etc/haproxy/haproxy.cfg to activate its configuration:

bash
sudo mv /etc/haproxy/haproxy-aleph.cfg /etc/haproxy/haproxy.cfg
sudo mkdir -p /etc/haproxy/certs/
sudo systemctl restart haproxy

1. Install Required Packages

bash
sudo apt update
sudo apt install certbot

2. Obtain Initial Certificate

You can use either a single-domain certificate (recommended) or a wildcard one.

A wildcard certificate allows the use of a different subdomain for each VM function on your node, but requires a bit more configuration.

Option 1: Obtain a Certificate for a Single Domain

Use certbot with the standalone method:

bash
sudo certbot certonly --standalone -d yourdomain.com --http-01-port=8888

If successful, the certificates are located in:

bash
/etc/letsencrypt/live/yourdomain.com/

Option 2: Obtain a Wildcard Certificate (for Multiple Subdomains)

A wildcard certificate is recommended to allow any subdomain of your domain to work.

Using a different domain name for each VM function is important when running web applications, both for security and usability purposes.

The VM Supervisor supports using domains in the form https://identifier.yourdomain.com, where identifier is the identifier/hash of the message describing the VM function and yourdomain.com represents your domain name.

The following instructions use Let's Encrypt and Certbot to obtain one. Other certificate providers can also be used.

Automated renewal of wildcard certificates is only supported by Certbot on select DNS providers, using plugins.

Please refer to the Certbot documentation for the list of supported providers and how to set them up.

https://eff-certbot.readthedocs.io/en/latest/using.html#dns-plugins

You can generate the certificate via the manual method, but automated renewal will not work.

Using certbot with the --manual plugin for DNS challenge verification:

  1. Use the following command to generate the wildcard certificate:

bash
sudo certbot certonly --manual -d 'yourdomain.com' -d '*.yourdomain.com' --preferred-challenges dns --agree-tos --email your-email@example.com

  2. Certbot will prompt you to create a DNS TXT record in your domain's DNS settings. Follow the instructions provided during execution.

  3. After Certbot verifies the DNS record and the certificate is issued, restart HAProxy:

bash
sudo systemctl restart haproxy

If successful, the certificate files will be located in:

bash
/etc/letsencrypt/live/yourdomain.com/

3. Concatenate Fullchain + Key for HAProxy

HAProxy needs a single .pem file:

bash
sudo mkdir -p /etc/haproxy/certs
sudo cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/letsencrypt/live/yourdomain.com/privkey.pem | sudo tee /etc/haproxy/certs/yourdomain.com.pem > /dev/null

# Secure permissions
sudo chmod 600 /etc/haproxy/certs/yourdomain.com.pem
sudo chown root:root /etc/haproxy/certs/yourdomain.com.pem

4. Configure HAProxy for TLS

Reload HAProxy:

bash
sudo systemctl reload haproxy

If it is not running yet, start it instead with: sudo systemctl start haproxy

5. Set Up Auto-Renewal with Systemd Timer

Ubuntu and Debian use systemd by default, and certbot ships with a renewal timer.

Check that it is active:

bash
systemctl list-timers | grep certbot

You should see:

certbot.timer ...

If not enabled:

bash
sudo systemctl enable certbot.timer
sudo systemctl start certbot.timer

It runs certbot renew twice daily.

6. Automate Concatenation and Reload with a Hook Script

Create a script to be used as a deploy hook:

bash
sudo nano /etc/letsencrypt/renewal-hooks/deploy/haproxy-renew.sh

Paste this into the script:

bash
#!/bin/bash

DOMAIN="yourdomain.com"
CERT_PATH="/etc/letsencrypt/live/$DOMAIN"
OUTPUT_PEM="/etc/haproxy/certs/$DOMAIN.pem"

cat "$CERT_PATH/fullchain.pem" "$CERT_PATH/privkey.pem" > "$OUTPUT_PEM"
chmod 600 "$OUTPUT_PEM"
chown root:root "$OUTPUT_PEM"

/bin/systemctl reload haproxy

Replace yourdomain.com with your own domain name.

Make it executable:

bash
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/haproxy-renew.sh

This script is automatically triggered only if the certificate is actually renewed.

  1. Ensure that aleph-vm is started and working:

bash
systemctl start aleph-vm-supervisor

  2. Then open http://yourdomain.com in your browser.

7. To Manually Test Renewal

Run:

bash
sudo certbot renew --dry-run

If it says it failed to bind on port 80, edit /etc/letsencrypt/renewal/yourdomain.com.conf and add the following under the [renewalparams] section:

ini
http01_port = 8888

You will need to do this if you followed earlier instructions that ran the certbot setup without the --http-01-port=8888 option.

8. Custom domain for program support (not required)

To allow users to host their websites on their own domains, you will still need to run Caddy behind HAProxy to handle the on_demand certificates. This is an advanced setup that is neither required nor recommended for an ordinary node.

To achieve this:

  1. Ignore the instructions above on generating the certificate for HAProxy.
  2. Configure Caddy as per the previous documentation, but make it bind on port 4442 instead of 443.
  3. Edit /etc/haproxy/haproxy.cfg to modify the section bk_default_ssl to point to Caddy (note that the server directive requires a name, here "caddy"):
    haproxy
    backend bk_default_ssl
        mode tcp
        server caddy 127.0.0.1:4442 send-proxy
  4. Restart HAProxy.

5. Test

Open https://[YOUR DOMAIN] in a web browser, wait for the diagnostic to complete, and check that all tests pass.

If you face an issue, check the logs of the different services for errors:

VM-Supervisor:

shell
journalctl -f -u aleph-vm-supervisor.service

Caddy:

shell
journalctl -f -u caddy.service

VM-Connector:

shell
docker logs -f vm-connector

IPv6 connectivity can be checked by opening the path /status/check/ipv6 on the CRN's URL after restarting the service.

https://vm.example.org/status/check/ipv6

Common errors

"Network interface eth0 does not exist"

Did you update the configuration file /etc/aleph-vm/supervisor.env and set ALEPH_VM_NETWORK_INTERFACE to the default network interface of your server?

"Aleph Connector unavailable"

Investigate the installation of the VM-Connector using Docker in step 2.

Advanced Troubleshooting

If you encounter any issues during installation, check the Troubleshooting Guide or reach out to the community for support.