This is Part 2 of the post Deploy a HashiCorp Consul Cluster. In Part 1 we did the following:

  • Deploy a Consul cluster with 3 servers and 1 client.
  • Do initial configuration of the servers and client.
  • Enable encryption for Consul's gossip protocol.
  • Start the Consul servers and client.
  • Verify the status of the cluster.

In this post we are going to:

  • Generate and deploy TLS certificates to enable secure communication over Consul's consensus, RPC and HTTP protocols.
  • Enable and configure Consul's ACL system.
  • Configure Anonymous Access for DNS

Infrastructure

Hostname        Name                  User    IP
raspb-ubuntu-1  Server 1 (Bootstrap)  ubuntu  192.168.1.1
raspb-ubuntu-2  Server 2              ubuntu  192.168.1.2
raspb-ubuntu-3  Server 3              ubuntu  192.168.1.3
raspb-ubuntu-4  Client 1              ubuntu  192.168.1.4

Hands-On

Generate and deploy TLS certificates to enable secure communications over Consul's consensus, RPC and HTTP protocols

In order to prevent unauthorized datacenter access, Consul requires all certificates to be signed by the same Certificate Authority (CA). This should be a private CA.

The next command can be run on any Consul server; that server will be referred to as the bootstrap server.

consul tls ca create

This will generate a CA certificate, consul-agent-ca.pem, that contains the public key needed to validate Consul certificates and the CA key, consul-agent-ca-key.pem, that will be used to sign certificates for Consul agents and must be kept private. You want to minimize its distribution as much as possible.
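If you want to take a quick look at the CA certificate you just generated, a standard OpenSSL command can print its subject and validity window (optional):

openssl x509 -in consul-agent-ca.pem -noout -subject -issuer -dates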

The CA certificate and key should be distributed to every Consul server. We include the CA key for servers because any server can generate and sign client certificates for clients when auto encryption is enabled.

Create a folder named tls under Consul's configuration directory (Servers and client)

sudo mkdir /etc/consul.d/tls

Copy the CA certificate to the other agents over SSH. Replace <ssh username> and <IP> with the right values and run the command once per target (from the bootstrap server to the remaining servers and the client)

sudo scp -o StrictHostKeyChecking=no consul-agent-ca.pem <ssh username>@<IP>:~/.

Copy the CA key to the other two servers as well; the client does not need it (from the bootstrap server to servers 2 and 3)

sudo scp -o StrictHostKeyChecking=no consul-agent-ca-key.pem ubuntu@192.168.1.2:~/.
sudo scp -o StrictHostKeyChecking=no consul-agent-ca-key.pem ubuntu@192.168.1.3:~/.

Move the consul-agent-ca.pem to /etc/consul.d/tls/ (Servers and client)

sudo mv ~/consul-agent-ca.pem /etc/consul.d/tls/

Move the consul-agent-ca-key.pem to /etc/consul.d/tls/ (Servers)

sudo mv ~/consul-agent-ca-key.pem /etc/consul.d/tls/

The next step is to create distinct certificates and keys for each of the Consul servers. Execute this command three times (Bootstrap server)

cd /etc/consul.d/tls/
sudo consul tls cert create -server
#Output

==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-0.pem
==> Saved dc1-server-consul-0-key.pem

==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-1.pem
==> Saved dc1-server-consul-1-key.pem

==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-2.pem
==> Saved dc1-server-consul-2-key.pem

Note that the number in these certificate and key filenames is incremented each time you run the command. Since we named our servers starting at 1 and want each server's certificate and key number to match its name, run the above command once more. The output will include:

#Output
==> Saved dc1-server-consul-3.pem
==> Saved dc1-server-consul-3-key.pem

Remove dc1-server-consul-0.pem and dc1-server-consul-0-key.pem (Bootstrap server)

sudo rm /etc/consul.d/tls/dc1-server-consul-0.pem /etc/consul.d/tls/dc1-server-consul-0-key.pem

Copy dc1-server-consul-2.pem and dc1-server-consul-2-key.pem to server 2, changing the SSH user and IP if needed (Bootstrap server)

sudo scp -o StrictHostKeyChecking=no /etc/consul.d/tls/dc1-server-consul-2.pem ubuntu@192.168.1.2:~/.
sudo scp -o StrictHostKeyChecking=no /etc/consul.d/tls/dc1-server-consul-2-key.pem ubuntu@192.168.1.2:~/.

Copy dc1-server-consul-3.pem and dc1-server-consul-3-key.pem to server 3, changing the SSH user and IP if needed (Bootstrap server)

sudo scp -o StrictHostKeyChecking=no /etc/consul.d/tls/dc1-server-consul-3.pem ubuntu@192.168.1.3:~/.
sudo scp -o StrictHostKeyChecking=no /etc/consul.d/tls/dc1-server-consul-3-key.pem ubuntu@192.168.1.3:~/.

Remove the server 2 and 3 pem files from the bootstrap server (Bootstrap server)

sudo rm /etc/consul.d/tls/dc1-server-consul-2.pem /etc/consul.d/tls/dc1-server-consul-2-key.pem /etc/consul.d/tls/dc1-server-consul-3.pem /etc/consul.d/tls/dc1-server-consul-3-key.pem

Move the pem files on server 2 to /etc/consul.d/tls/ (Server 2)

sudo mv ~/dc1-server-consul-2.pem /etc/consul.d/tls/
sudo mv ~/dc1-server-consul-2-key.pem /etc/consul.d/tls/

Move the pem files on server 3 to /etc/consul.d/tls/ (Server 3)

sudo mv ~/dc1-server-consul-3.pem /etc/consul.d/tls/
sudo mv ~/dc1-server-consul-3-key.pem /etc/consul.d/tls/

The next step is to modify the server configuration files to use the TLS certs you just generated for them and to set other TLS settings. Since we have a running Consul cluster, we cannot enable all the TLS settings right away. Instead, we'll start with some disabled and do some restarts.

Create a tls.hcl file (Servers)

sudo nano /etc/consul.d/tls.hcl

And paste this data, changing the certificate and key number to match each server (Servers)

verify_incoming = false
verify_outgoing = false
verify_server_hostname = false
auto_encrypt = {
  allow_tls = true
}
ca_file =  "/etc/consul.d/tls/consul-agent-ca.pem"
cert_file = "/etc/consul.d/tls/dc1-server-consul-1.pem"
key_file = "/etc/consul.d/tls/dc1-server-consul-1-key.pem"
ports = {
  http = -1
  https = 8501
}

This file does the following

  • Sets verify_incoming to false so that the server will not yet verify certificates from other Consul agents. (But we will be changing it to true later.)
  • Sets verify_outgoing to false so that servers will not yet provide a certificate for outgoing connections. (But we will be changing it to true later.)
  • Sets verify_server_hostname to false so that servers will not yet verify their authenticity for outgoing connections. (But we will be changing it to true later.)
  • Sets auto_encrypt.allow_tls to true so that the server will accept incoming connections from clients with the Consul-generated CA and distribute client certificates to the clients.
  • Sets ca_file, cert_file, and key_file to the certs that you generated and copied into the /etc/consul.d/tls directory on the servers.
  • Disables the HTTP port and sets the HTTPS port to 8501.

Set the right ownership on the files (Servers)

sudo chown --recursive consul:consul /etc/consul.d

Now restart the servers, one at a time, waiting 10 seconds between each restart (Servers)

sudo systemctl restart consul

The next step is to configure TLS on the client. Create a tls.hcl file (Client)

sudo nano /etc/consul.d/tls.hcl

Paste the data below

verify_incoming = false
verify_outgoing = true
verify_server_hostname = true
auto_encrypt = {
  tls = true
}
ca_file = "/etc/consul.d/tls/consul-agent-ca.pem"
ports = {
  http = -1
  https = 8501
}

The file does the following

  • Sets verify_incoming to false so that the client will not yet verify certificates for incoming connections from other Consul agents or applications.
  • Sets verify_outgoing to true so that the client will provide a certificate for outgoing connections.
  • Sets verify_server_hostname to true so that the client will verify the authenticity of servers it connects to.
  • Sets auto_encrypt.tls to true so that the client will request client certificates from servers that it connects to.
  • Sets ca_file to the CA certificate that you generated and copied into the /etc/consul.d/tls directory on the client.
  • Disables the HTTP port and sets the HTTPS port to 8501.

I'm using the same filename (tls.hcl) for the servers and the client; if you keep these files in a VCS, you may want to use different names for them.

Now restart the client (Client)

sudo systemctl restart consul

Now, if you try to run consul members on any server or client, you will get an error

consul members
#Output
Error retrieving members: Get "http://127.0.0.1:8500/v1/agent/members?segment=_all": dial tcp 127.0.0.1:8500: connect: connection refused

This is because the CONSUL_HTTP_ADDR environment variable is currently set to http://localhost:8500 and restarting the servers and client with the new TLS settings disabled the HTTP port.

You can partially address this by running (Servers and client)

export CONSUL_HTTP_ADDR=https://localhost:8501

If you run consul members again on a server, you will get

consul members
#Output
Error retrieving members: Get "https://localhost:8501/v1/agent/members?segment=_all": x509: certificate signed by unknown authority

This error occurs because the Consul CLI has not yet been told to trust the CA. You can fix it by exporting the CONSUL_CACERT environment variable (Servers)

export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem

Note that if you try to run consul members on the client at this point, you will get a different error message:

consul members
#Output
Error retrieving members: Get https://localhost:8501/v1/agent/members?segment=_all: x509: certificate is not valid for any names, but wanted to match localhost

This is because the client uses an auto-generated certificate, which is not valid for localhost.

Now enable full TLS verification on the Consul cluster. Edit the tls.hcl file (Servers)

sudo nano /etc/consul.d/tls.hcl

And change verify_incoming, verify_outgoing and verify_server_hostname to true, then save the file on all three servers (Servers)

verify_incoming = true 
verify_outgoing = true 
verify_server_hostname = true 
auto_encrypt = {
  allow_tls = true
}
ca_file =  "/etc/consul.d/tls/consul-agent-ca.pem"
cert_file = "/etc/consul.d/tls/dc1-server-consul-1.pem"
key_file = "/etc/consul.d/tls/dc1-server-consul-1-key.pem"
ports = {
  http = -1
  https = 8501
}

Do not blindly copy and paste: make sure the cert_file and key_file entries point to the correct pem files on each server.
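For example, on server 2 those two lines must point at the number 2 files:

cert_file = "/etc/consul.d/tls/dc1-server-consul-2.pem"
key_file = "/etc/consul.d/tls/dc1-server-consul-2-key.pem"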

Then restart the servers one at a time with 10 seconds between by running (Servers)

sudo systemctl restart consul

You'll know the servers were restarted successfully if no errors were returned
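One quick way to confirm each server came back up cleanly is to check the service status and recent logs (Servers):

sudo systemctl status consul --no-pager
sudo journalctl -u consul --since "5 minutes ago" --no-pager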

The next step in enabling TLS on your Consul cluster is to configure the Consul CLI to use TLS certificates.

If you try to run consul members on any server, you will get an error

consul members
#Output
Error retrieving members: Get "https://localhost:8501/v1/agent/members?segment=_all": remote error: tls: bad certificate

The reason you are now getting errors is that the Consul agents are verifying all incoming connections, including those from the Consul CLI.

To fix this, you will need to set two additional environment variables, CONSUL_CLIENT_CERT and CONSUL_CLIENT_KEY.

Edit ~/.bashrc (on Red Hat the file is called .bash_profile) (Servers)

nano ~/.bashrc

And add these four lines to the end of the file (Server 1)

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem
export CONSUL_CLIENT_CERT=/etc/consul.d/tls/dc1-server-consul-1.pem
export CONSUL_CLIENT_KEY=/etc/consul.d/tls/dc1-server-consul-1-key.pem

Server 2

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem
export CONSUL_CLIENT_CERT=/etc/consul.d/tls/dc1-server-consul-2.pem
export CONSUL_CLIENT_KEY=/etc/consul.d/tls/dc1-server-consul-2-key.pem

Server 3

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem
export CONSUL_CLIENT_CERT=/etc/consul.d/tls/dc1-server-consul-3.pem
export CONSUL_CLIENT_KEY=/etc/consul.d/tls/dc1-server-consul-3-key.pem

Note that only the certificate and key numbers differ between servers.

Reboot (Servers)

sudo reboot
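If you prefer not to reboot, reloading the shell profile in your current session should have the same effect (assuming bash):

source ~/.bashrc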

Since we are using an auto-generated TLS certificate for the client, we do not have a specific certificate and key to point to on it. In theory, since we have verify_incoming set to false on the client, exporting the CONSUL_HTTP_ADDR and CONSUL_CACERT environment variables should allow the Consul CLI to communicate securely with the local client. However, that does not seem to work in some environments.
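To check whether it works in your environment, you can try this quick test on the client before changing anything permanently:

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem
consul members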

If you experience this problem in your own Consul deployment, you have two ways of addressing it:

  • Generate a TLS cert for the CLI on the server where you created the CA certificate using the command consul tls cert create -cli, copy the generated cert and key to the client, and then export the same 4 environment variables you set on the servers (a sketch of this follows the list).
  • Disable TLS communication for the CLI on the client by setting CONSUL_HTTP_SSL_VERIFY to false.
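
For reference, a rough sketch of the first approach would look like this; the dc1-cli-consul-0 filenames are what consul tls cert create -cli typically produces for datacenter dc1 (Bootstrap server, then Client):

cd /etc/consul.d/tls/
sudo consul tls cert create -cli
# copy dc1-cli-consul-0.pem and dc1-cli-consul-0-key.pem to /etc/consul.d/tls/ on the client, then on the client:
export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_CACERT=/etc/consul.d/tls/consul-agent-ca.pem
export CONSUL_CLIENT_CERT=/etc/consul.d/tls/dc1-cli-consul-0.pem
export CONSUL_CLIENT_KEY=/etc/consul.d/tls/dc1-cli-consul-0-key.pem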

We're going to use the second approach. Note that this does not weaken security much: the Consul client will still use TLS to talk to the Consul servers and other Consul clients. TLS verification is skipped only when the local CLI talks to the local Consul client, and that can only be done on the client itself.

To apply the second approach, edit ~/.bashrc (Client)

nano ~/.bashrc

and add these lines (Client)

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_HTTP_SSL_VERIFY=false

If everything is working, you can run consul members on any server or the client and you will see all your members alive

consul members
#Output
Node            Address           Status  Type    Build  Protocol  DC   Segment
raspb-ubuntu-1  192.168.1.1:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-2  192.168.1.2:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-3  192.168.1.3:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-4  192.168.1.4:8301  alive   client  1.9.2  2         dc1  <default>

You will notice that you can't access the UI anymore. This is because, after enabling TLS, the UI has to be served from a Consul agent on which verify_incoming is set to false.

In general, it is better to do this on one of your clients rather than on one of your servers. Let's use the one client we have for this.

First, let's disable the UI on the servers, since it is no longer needed there.

Edit the file /etc/consul.d/server.hcl (Servers)

sudo nano /etc/consul.d/server.hcl

and remove these lines (Servers)

ui_config = {
        enabled = true
}

Then create a file named ui.hcl on the client to enable the UI (Client)

sudo touch /etc/consul.d/ui.hcl
sudo bash -c 'echo "ui_config = { enabled = true }" > /etc/consul.d/ui.hcl'

Change the ownership (Client)

sudo chown consul:consul /etc/consul.d/ui.hcl

Restart Consul on the client (Client)

sudo systemctl restart consul

Now you can access the Consul UI at the client's address after accepting the browser's security warning (change the IP to your client's address)

https://192.168.1.4:8501/ui

Normally a domain such as consul.example.com is configured for accessing your Consul UI: you would generate a Consul server certificate with the -additional-dnsname argument set to that domain, and then add the Consul CA to all browsers that need to access the Consul UI. We're going to leave this for another lab.
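As a rough illustration (not needed for this lab; consul.example.com is a placeholder domain), the certificate would be generated like this on the bootstrap server:

cd /etc/consul.d/tls/
sudo consul tls cert create -server -additional-dnsname consul.example.com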

Enable and configure Consul's ACL system.

Consul uses Access Control Lists (ACLs) to secure access to the UI, API, CLI, service communications, and agent communications. When securing your datacenter you should configure the ACLs first. At the core, ACLs operate by grouping rules into policies, then associating one or more policies with a token.

First, let's create a file named acl.hcl on the servers (Servers)

sudo nano /etc/consul.d/acl.hcl

And paste this data

primary_datacenter = "dc1"
acl = {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
}

This file does the following

  • Sets the primary datacenter to dc1.
  • Enables ACLs on the agent.
  • Sets the default ACL policy to deny all access.
  • Enables persistence of ACL tokens used by the Consul API.

Save the file and change the ownership (Servers)

sudo chown consul:consul /etc/consul.d/acl.hcl

Restart the servers (Servers)

sudo systemctl restart consul

Now, you can bootstrap the ACL system by running this command (Bootstrap server)

consul acl bootstrap > ~/bootstrap.txt

Review the information

cat ~/bootstrap.txt
#Output
AccessorID:       55f791b4-5019-930a-d558-b4955283544d
SecretID:         063873e1-36ef-3881-0c21-e052f2df7d14
Description:      Bootstrap Token (Global Management)
Local:            false
Create Time:      2021-01-26 18:25:08.057194251 +0000 UTC
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

The ACL bootstrap token is the SecretID field of this file. This special Consul ACL token is used to configure your cluster and to generate other ACL tokens.

You'll also need to set an environment variable with your ACL token so that you can use the Consul CLI. Run the following command (Servers and client)

export CONSUL_HTTP_TOKEN=<bootstrap_token>

Replace <bootstrap_token> with the SecretID of your bootstrap token.
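If you prefer not to paste the token by hand, you can extract the SecretID directly from the bootstrap file (a convenience one-liner that assumes the output format shown above):

export CONSUL_HTTP_TOKEN=$(grep 'SecretID' ~/bootstrap.txt | awk '{print $2}')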

Also, persist the exported ACL token in your shell's profile by running this command on all 3 servers

echo "export CONSUL_HTTP_TOKEN=$CONSUL_HTTP_TOKEN" >> ~/.bashrc

Next, you need to load an ACL policy on all your servers.

Create a file named agent-policy.hcl (Servers)

sudo nano ~/agent-policy.hcl

And paste this information, changing the node name to match each server (Servers)

node "raspb-ubuntu-1" {
  policy = "write"
}
service_prefix "" {
   policy = "read"
}

Each instance of this policy gives the agent the write permission for Consul APIs related to its own node and the ability to read all services that might be deployed to it. Note that the write permission includes the read permission.

You need to load the policy (Servers)

consul acl policy create -name raspb-ubuntu-1 -rules @/home/<YOUR USER>/agent-policy.hcl

Change the policy name to match your node, and <YOUR USER> to your OS username

Next, you need to create an agent token and add it to each Consul server. These tokens will have the corresponding policies that you just created. (Servers)

consul acl token create -description "raspb-ubuntu-1 agent token" -policy-name raspb-ubuntu-1

Change the description and policy name to match each server, and you will get an output like this (Servers)

AccessorID:       a73d8b95-feac-3092-ebf5-52af57613a95
SecretID:         f0ae1a80-b67a-384f-e4d3-7f96171e9c99
Description:      raspb-ubuntu-1 agent token
Local:            false
Create Time:      2021-01-26 21:50:29.504163163 +0000 UTC
Policies:
   3f3cbd62-4338-c99a-b54f-774fde79a282 - raspb-ubuntu-1

Apply the agent token on the same Consul agent (Servers)

consul acl set-agent-token agent <agent_token>

Where <agent_token> is the token (SecretID) returned by the previous command. This should return: ACL token "agent" set successfully.

Verify that you can run consul members on all 3 servers and that it returns all 3 servers and the client.

consul members
#Output
Node            Address           Status  Type    Build  Protocol  DC   Segment
raspb-ubuntu-1  192.168.1.1:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-2  192.168.1.2:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-3  192.168.1.3:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-4  192.168.1.4:8301  alive   client  1.9.2  2         dc1  <default>

Now that you have finished configuring ACLs on the Consul servers, you'll configure them on the Consul client.

However, because we are using auto-generated TLS certs for the client, we will need to create an ACL policy and token for the client on one of the servers before restarting the client. We'll also have to insert the token into the client's copy of acl.hcl.

We'll create the ACL policy and token on the bootstrap server (Bootstrap server)

nano ~/agent-policy-client.hcl

And paste this data (Bootstrap server)

node "raspb-ubuntu-4" {
  policy = "write"
}
service_prefix "" {
   policy = "read"
}

Then create the policy for the client (Bootstrap server)

consul acl policy create -name raspb-ubuntu-4 -rules @/home/<USER NAME>/agent-policy-client.hcl

Change the policy name to match your client's node name and <USER NAME> to your OS username

The client needs an agent ACL token that uses this policy just like the servers did. Create the token with this command on the bootstrap server

consul acl token create -description "raspb-ubuntu-4 agent token" -policy-name raspb-ubuntu-4

You will get an output with the SecretID, like this

AccessorID:       fd9ac24e-f6a8-f99f-1917-20b95bb302a3
SecretID:         4cd0d981-7bfa-b5e8-fa76-93797860be4c
Description:      raspb-ubuntu-4 agent token
Local:            false
Create Time:      2021-01-26 23:38:46.423309638 +0000 UTC
Policies:
   cbc6aa2b-762a-b863-b6ed-47f6a0264127 - raspb-ubuntu-4

Now, on the client, create the acl.hcl file in the consul.d folder. This file is going to be a little different from the one we created for the servers (Client)

sudo nano /etc/consul.d/acl.hcl

And paste this data

primary_datacenter = "dc1"
acl = {
  enabled = true
  enable_token_persistence = true
  tokens {
    agent = "<CLIENT_TOKEN>"
  }
}

Replace <CLIENT_TOKEN> with the SecretID we created for the client on the bootstrap server

This file is different from the acl.hcl on the servers because

  • It does not include default_policy = "deny" since default policies are only applied on Consul servers.
  • It includes an agent token with a placeholder for the token that you just created.

Save the file and give it the consul ownership (Client)

sudo chown consul:consul /etc/consul.d/acl.hcl

Restart the client (Client)

sudo systemctl restart consul

Verify that you can run consul members (Client)

consul members
#Output
Node            Address           Status  Type    Build  Protocol  DC   Segment
raspb-ubuntu-1  192.168.1.1:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-2  192.168.1.2:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-3  192.168.1.3:8301  alive   server  1.9.2  2         dc1  <all>
raspb-ubuntu-4  192.168.1.4:8301  alive   client  1.9.2  2         dc1  <default>

You might have noticed that none of the nodes or services are showing up in the Consul UI. This is because we set default_policy to deny in the acl.hcl configuration file on the servers, so requests that carry no token, like the UI's, are denied.

In a terminal on the bootstrap server, get the SecretID from the bootstrap file

cat ~/bootstrap.txt
#Output
AccessorID:       55f791b4-5019-930a-d558-b4955283544d
SecretID:         063873e1-36ef-3881-0c21-e052f2df7d14
Description:      Bootstrap Token (Global Management)
Local:            false
Create Time:      2021-01-26 18:25:08.057194251 +0000 UTC
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

Copy the SecretID and paste it in the UI login page; you will see the services again (you may need to refresh after entering the token)

You should now see all 3 Consul servers and the client in the UI.

Configure Anonymous Access for DNS

Consul automatically creates and uses an anonymous token for requests that do not provide one. This includes requests from the Consul UI when no ACL token has been set and requests sent to Consul's DNS Interface. By default the anonymous token has no ACL policies associated with it.

Since DNS requests sent to Consul cannot provide an ACL token, we want to add a policy with some read capabilities to the anonymous token.

On the bootstrap server, create a file named anonymous-policy.hcl (Bootstrap server)

nano ~/anonymous-policy.hcl

And paste this data (Bootstrap server)

node_prefix "" {
  policy = "read"
}
service_prefix "" {
  policy = "read"
}
# only needed if using prepared queries
query_prefix "" {
  policy = "read"
}

Now let's create the policy, replacing <USER NAME> with your OS username (Bootstrap server)

consul acl policy create -name anonymous -rules @/home/<USER NAME>/anonymous-policy.hcl

And update the anonymous ACL token with that policy (Bootstrap server)

consul acl token update -id anonymous -policy-name anonymous
#Output
AccessorID:       00000000-0000-0000-0000-000000000002
SecretID:         anonymous
Description:      Anonymous Token
Local:            false
Create Time:      2021-01-26 18:23:38.231375207 +0000 UTC
Policies:
   155abc0e-61e7-6ef5-3a99-b6f0c5bb4ed4 - anonymous

To test that Consul's DNS interface is working, please run this command (Any)

dig @127.0.0.1 -p 8600 consul.service.consul
#Output

; <<>> DiG 9.16.1-Ubuntu <<>> @127.0.0.1 -p 8600 consul.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8120
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.		IN	A

;; ANSWER SECTION:
consul.service.consul.	0	IN	A	192.168.1.1
consul.service.consul.	0	IN	A	192.168.1.3
consul.service.consul.	0	IN	A	192.168.1.2

;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Wed Jan 27 00:21:41 UTC 2021
;; MSG SIZE  rcvd: 98
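
Since the anonymous policy also grants node_prefix read, node lookups through Consul's DNS interface should resolve as well; for example, using one of the node names from this lab:

dig @127.0.0.1 -p 8600 raspb-ubuntu-1.node.consul +short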

That's all. In a future post we are going to use this Consul cluster as an HA backend for Vault.

Resources