This lab will guide you through deploying and configuring a HashiCorp Consul cluster, using version 1.9.2.

You will perform the following steps:

  • Deploy a Consul cluster with 3 servers and 1 client.
  • Perform the initial configuration of the servers and client.
  • Enable encryption for Consul's gossip protocol.
  • Start the Consul servers and client.
  • Verify the status of the cluster.
  • Generate and deploy TLS certificates to enable secure communication over Consul's consensus, RPC, and HTTP protocols.
  • Enable and configure Consul's ACL system.
  • Configure anonymous access for DNS.

These steps are important for making your Consul cluster secure and highly available.

For this lab I'm going to use Raspberry Pi 4 (aarch64) computers running Ubuntu. Before executing the commands, check that you are downloading the right package for your system's architecture.

Infrastructure

Hostname        Name                  User    IP
raspb-ubuntu-1  Server 1 (Bootstrap)  ubuntu  192.168.1.1
raspb-ubuntu-2  Server 2              ubuntu  192.168.1.2
raspb-ubuntu-3  Server 3              ubuntu  192.168.1.3
raspb-ubuntu-4  Client 1              ubuntu  192.168.1.4

Hands-On

Deploy a Consul Cluster & initial configuration of the servers and client

First we need to download Consul. The following commands are going to be executed on each computer.

Download the consul binary (Servers and client)

In my case the architecture is arm64. You can find the download links at https://www.consul.io/downloads and copy the link for your architecture
curl --silent --remote-name \
https://releases.hashicorp.com/consul/1.9.2/consul_1.9.2_linux_arm64.zip

Download the SHA256 summary files (Servers and client)

curl --silent --remote-name \
https://releases.hashicorp.com/consul/1.9.2/consul_1.9.2_SHA256SUMS
curl --silent --remote-name \
https://releases.hashicorp.com/consul/1.9.2/consul_1.9.2_SHA256SUMS.sig
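
Having the SHA256SUMS file is only useful if you actually check the archive against it. For the real download, run `sha256sum --check --ignore-missing consul_1.9.2_SHA256SUMS` in the directory containing the zip. The sketch below demonstrates the same mechanism on a throwaway file so it can run anywhere:

```shell
# Demo of how sha256sum --check verifies a file; for the real download,
# run --check against consul_1.9.2_SHA256SUMS instead.
echo "hello" > demo.txt
sha256sum demo.txt > demo.sums   # record the checksum
sha256sum --check demo.sums      # prints "demo.txt: OK"
rm demo.txt demo.sums
```

You can additionally verify the SHA256SUMS file itself against the .sig file and HashiCorp's public PGP key with gpg --verify.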

Unzip the downloaded package and move the consul binary to /usr/local/bin/ (Servers and client)

sudo unzip -o -d /usr/local/bin/ consul_1.9.2_linux_arm64.zip
In case you don't have unzip installed, you can install it with sudo apt install -y unzip or sudo yum install -y unzip

Set root as the owner of the consul binary (Servers and client)

sudo chown root:root /usr/local/bin/consul

Check the installed version (Servers and client)

consul --version
#output
Consul v1.9.2
Revision 6530cf370
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

Install the consul CLI autocomplete (Servers and client)

consul -autocomplete-install
complete -C /usr/local/bin/consul consul

Create group and user for consul (Servers and client)

sudo useradd --system --home /etc/consul.d --shell /bin/false consul

Make the configuration directory, create an empty configuration file and set the right permissions (Servers and client)

sudo mkdir --parents /etc/consul.d
sudo touch /etc/consul.d/consul.hcl
sudo chown --recursive consul:consul /etc/consul.d

Create the consul data directory (Servers and client)

sudo mkdir --parents /opt/consul
sudo chown --recursive consul:consul /opt/consul

Remove the files we don't need anymore; change the zip filename to match the file you downloaded (Servers and client)

rm ~/consul_1.9.2_SHA256SUMS ~/consul_1.9.2_SHA256SUMS.sig ~/consul_1.9.2_linux_arm64.zip

Now let's write a systemd service so Consul starts automatically if the computer reboots. Use your preferred editor to create a file at /etc/systemd/system/consul.service

sudo nano /etc/systemd/system/consul.service

Paste this data in the file (Servers and client)

[Unit]
Description="Consul Agent"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/consul.d/consul.hcl

[Service]
Type=notify
User=consul
Group=consul
AmbientCapabilities=CAP_NET_BIND_SERVICE
Capabilities=CAP_NET_BIND_SERVICE+ep
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/
ExecReload=/usr/local/bin/consul reload
KillMode=process
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

You'll see that the Consul agents run as the consul user with the command consul agent -config-dir=/etc/consul.d/. This command tells Consul to load all HCL and JSON configuration files it finds in the /etc/consul.d directory. Additionally, the Consul agents will be restarted if they fail.

AmbientCapabilities=CAP_NET_BIND_SERVICE
Capabilities=CAP_NET_BIND_SERVICE+ep
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

Note these three lines: they give Consul the capability to bind to privileged ports (ports below 1024). We don't use this capability in this lab, but if you ever need Consul to listen on a restricted port, for example 443 for the UI, these lines must be in place. You can remove them if you are going to use the default ports or other unprivileged ports.

Now let's edit the file consul.hcl (Servers and client)

sudo nano /etc/consul.d/consul.hcl

And paste this data (Servers and client)

datacenter = "dc1"
data_dir = "/opt/consul"
log_level = "INFO"
bind_addr = "192.168.1.1"
client_addr = "0.0.0.0"
retry_join = [
  "192.168.1.1:8301",
  "192.168.1.2:8301",
  "192.168.1.3:8301"
]
performance = {
  raft_multiplier = 1
}

The IPs need to be reachable between the servers and the client. Adjust the values on each machine to match your infrastructure; you can use IP addresses or hostnames in retry_join (Servers and client)

  • Assigns the agent to datacenter "dc1" using the datacenter setting.
  • Sets Consul's data directory to "/opt/consul" using the data_dir setting.
  • Sets the log level to "INFO" using the log_level setting.
  • Tells each Consul agent which servers to connect to using the retry_join setting. (This makes the agents keep retrying to join the cluster until successful.)
  • Sets the Raft multiplier performance setting to "1" with the raft_multiplier setting. This is the recommended value for production Consul clusters.

Let's create the server.hcl file to tell the servers to run in server mode (Servers)

sudo nano /etc/consul.d/server.hcl

Paste this data (servers)

server = true
bootstrap_expect = 3
ui_config = {
        enabled = true
}

This file tells Consul to run in server mode, tells it to expect a total of 3 servers via the bootstrap_expect setting, and enables the Consul UI
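
If you prefer JSON over HCL, Consul loads .json files from the configuration directory as well; an equivalent server.json would look like this (a sketch, not used in the rest of the lab):

```json
{
  "server": true,
  "bootstrap_expect": 3,
  "ui_config": {
    "enabled": true
  }
}
```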

Note that the client is not going to have a server.hcl file: if Consul does not find the server setting in a configuration file or the flag -server on the command line, it runs as a client.

Save the file and enable the service (Servers and client)

sudo systemctl daemon-reload
sudo systemctl enable consul.service

Enable encryption for Consul's gossip protocol

Consul uses four communication protocols:

  • The gossip protocol that all Consul agents use to manage the cluster membership
  • The consensus protocol that provides consistency between Consul servers
  • A Remote Procedure Call (RPC) system used for other communications between Consul agents and from the Consul CLI to them
  • HTTP

The gossip protocol is encrypted with encryption keys while the other 3 protocols are secured using end-to-end TLS certificates.

We're going to start our Consul cluster with gossip encryption enabled from the start since adding it to a pre-existing cluster requires several rolling upgrades.

Generate an encryption key by executing this command on any of the servers

consul keygen
#output
PRw0rpDZ2j7wLoovKHynNJF8WHaOCXFNK36/YXJpawA=
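
The generated key is 32 random bytes, base64-encoded (44 characters). If you want to sanity-check a key before distributing it to every agent, you can decode it and count the bytes:

```shell
# A valid gossip key decodes to 32 bytes (older 16-byte keys are
# also accepted); the key below is the sample from this lab.
key="PRw0rpDZ2j7wLoovKHynNJF8WHaOCXFNK36/YXJpawA="
echo -n "$key" | base64 -d | wc -c   # prints 32
```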

Edit the consul.hcl file and add the following line to the end of the file (Servers and client)

encrypt = "PRw0rpDZ2j7wLoovKHynNJF8WHaOCXFNK36/YXJpawA="

sudo nano /etc/consul.d/consul.hcl
Replace the key with the one you generated; the file should now look like this:
datacenter = "dc1"
data_dir = "/opt/consul"
log_level = "INFO"
bind_addr = "192.168.1.1"
client_addr = "0.0.0.0"
retry_join = [
  "192.168.1.1:8301",
  "192.168.1.2:8301",
  "192.168.1.3:8301"
]
performance = {
  raft_multiplier = 1
}
encrypt = "PRw0rpDZ2j7wLoovKHynNJF8WHaOCXFNK36/YXJpawA="
Each server needs its own bind_addr; client_addr can stay 0.0.0.0 on every machine

Export the CONSUL_HTTP_ADDR environment variable so the Consul CLI knows where to reach the local agent

export CONSUL_HTTP_ADDR=http://localhost:8500

Start the Consul servers and client

Now it is time to start the servers and client (Servers and client)

sudo systemctl start consul

You will see no output unless there are errors

Verify the status of the cluster

After you run the command on all the servers and the client, you can run consul members to see the members joining the cluster

consul members
#output
Node            Address           Status  Type    Build  Protocol  DC   Segment
raspb-ubuntu-1  192.168.1.1:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-2  192.168.1.2:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-3  192.168.1.3:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-4  192.168.1.4:8301  alive   client  1.9.2  2         dc1  <default>

You can find the current leader by running the following command on any server

consul operator raft list-peers
#output
Node            ID                                    Address           State     Voter  RaftProtocol
raspb-ubuntu-2  8312e922-ee0b-b58e-d423-7b4580da31bd  192.168.1.2:8300  leader    true   3
raspb-ubuntu-3  9cb5cbeb-17f8-e140-2f66-ccba3a3eda4e  192.168.1.3:8300  follower  true   3
raspb-ubuntu-1  08f89b11-7756-1556-ab01-b8c73fde1c4f  192.168.1.1:8300  follower  true   3
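
With the three voters shown above, the cluster can still elect a leader after losing one server: Raft quorum for n servers is floor(n/2)+1, which a quick shell calculation illustrates:

```shell
# Raft quorum: more than half of the voting servers must be alive.
n=3
echo "quorum: $(( n / 2 + 1 ))"   # prints "quorum: 2" -> tolerates 1 failure
```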

Now you can open the UI using any of the servers' IP addresses

http://<IP>:8500/ui

You can check the nodes with the Nodes button in the UI or by going to this URL

http://<IP>:8500/ui/dc1/nodes

You will see that the leader is marked with a star ⭐. You can also click on Services to see the health of the nodes

In an upcoming post, we are going to secure the consensus, RPC, and HTTP communications with TLS certificates; this allows servers, clients, and applications to verify each other's authenticity.

And we are going to create Access Control Lists (ACLs) to secure the UI, API, CLI, service, and agent communications. ACLs work by grouping rules into policies and then associating one or more policies with a token. ACLs are imperative for all Consul production environments.

References