In this post we are going to prepare the operator computer and install RKE

Requirements

  • Read parts 1 and 2

Hands-On

Prepare operator computer

1. We need to prepare the operator computer with the RKE binary. You can find the binary on the releases page

Releases · rancher/rke (https://github.com/rancher/rke/releases)
Rancher Kubernetes Engine (RKE), an extremely simple, lightning fast Kubernetes distribution that runs entirely within containers.

2. Once you find the right binary for your CPU architecture, right-click it and select "Copy link location"

Check that it is not a pre-release; download only the latest stable release

3. Open a terminal on the operator computer and download the binary with wget, pasting the copied link

wget https://github.com/rancher/rke/releases/download/v1.2.1/rke_linux-amd64

4. Give the binary execute permissions

chmod +x rke_linux-amd64

5. Check the PATH environment variable so you can execute the binary from anywhere; use echo $PATH to list the directories it contains

echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

6. Move the RKE binary to one of the PATH directories and rename it. Personally I prefer /usr/local/bin/ because it is usually empty

sudo mv rke_linux-amd64 /usr/local/bin/rke

7. Check the installation

rke --version
rke version v1.2.1

8. Now we are going to install kubectl, the command-line tool used to communicate with a Kubernetes cluster. Let's start by installing the dependencies

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl

9. Download the kubectl binary with this command

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"

10. Give it execute permissions

chmod +x kubectl

11. Move the binary to a $PATH folder

sudo mv kubectl /usr/local/bin/

12. Check the installation of the binary

kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

13. Install Helm; get the binary from the releases page

Releases · helm/helm (https://github.com/helm/helm/releases)
The Kubernetes Package Manager.

14. Find the version for your architecture, right-click it, and select "Copy Link Location"

15. Download the binary with wget, pasting the link

wget https://get.helm.sh/helm-v3.4.0-linux-amd64.tar.gz
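Optionally, verify the download before unpacking it. Helm publishes a checksum file next to each archive; the .sha256sum file name below is based on that convention, and you can confirm the exact name on the releases page:

wget https://get.helm.sh/helm-v3.4.0-linux-amd64.tar.gz.sha256sum
sha256sum -c helm-v3.4.0-linux-amd64.tar.gz.sha256sum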

16. Unpack the file

tar -zxvf helm-v3.4.0-linux-amd64.tar.gz

17. Move the binary to a $PATH folder

sudo mv linux-amd64/helm /usr/local/bin

18. Test the binary

helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

Install RKE

19. Now we are ready to install the RKE cluster. First we need to create the configuration file, which we are going to call "cluster.yml". This file is written in YAML; if you want to know more about YAML you can find it here

nano cluster.yml

20. Add the node configuration

nodes:
  - address: 192.168.1.51
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.1.52
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.1.53
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]

In this section we specify the nodes; each hyphen (-) represents a node and its attributes. Let's explain them in more detail

  • address: external address used to communicate with the cluster (required)
  • internal_address: internal address for communication between the nodes (optional)
  • user: user that will run the Docker commands (required)
  • role: controlplane, worker, etcd (at least one required)
If your nodes have an external IP and you are going to use the RKE cluster over the internet, you need to set both the external and the internal address, as in the example below
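For example, a node entry that uses both addresses could look like this (the IPs below are illustrative placeholders):

  - address: 203.0.113.10        # public IP, used by RKE and kubectl
    internal_address: 10.0.0.10  # private IP, used for traffic between the nodes
    user: ubuntu
    role: [controlplane, worker, etcd]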

In my case I'm going to use all three roles on every node, since this cluster is going to be dedicated to the Rancher server. As a general rule it is recommended that in every cluster you at least separate the worker role, so that a workload cannot interfere with the cluster's operation; in this case that is not a problem, as we are dedicating the cluster to the Rancher server

Now that we know the nodes section, I'm going to add some other configuration options. It is important that you check the configuration against your own requirements; everything is very well explained in the Rancher RKE installation pages, which you can check here and here

I'm going to copy this configuration from the website; at the end your file will look like this one

nodes:
  - address: 192.168.1.51
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.1.52
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.1.53
#    internal_address:
    user: ubuntu
    role: [controlplane, worker, etcd]

# If set to true, RKE will not fail when unsupported Docker versions
# are found
ignore_docker_version: false

# Cluster level SSH private key
# Used if no ssh information is set for the node
ssh_key_path: ~/.ssh/rke

# Enable use of SSH agent to use SSH private keys with passphrase
# This requires the environment variable `SSH_AUTH_SOCK` configured,
# pointing to your SSH agent which has the private key added
ssh_agent_auth: false

# Set the name of the Kubernetes cluster  
cluster_name: rancher

# Specify the network plug-in (canal, calico, flannel, weave, or none)
# We are going to use flannel to work with the arm64 architecture; change the iface to match your nodes
network:
    plugin: flannel
    options:
        flannel_iface: eth0
        flannel_backend_type: vxlan

# Use this network plug-in if you are using amd64 architecture 
# network:
#    plugin: canal
#    options:
#        canal_iface: eth1
#        canal_flannel_backend_type: vxlan


# Etcd snapshots
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h
    
# Currently only nginx ingress provider is supported.
# To disable ingress controller, set `provider: none`
# `node_selector` controls ingress placement and is optional
ingress:
    provider: nginx
    options:
      use-forwarded-headers: "true"

Something I need to point out here is that the default network plug-in is canal. I'm going to use flannel because I have the arm64 architecture and it is the recommended plug-in for it; if you are using amd64, please comment out the flannel plug-in and uncomment the canal plug-in. You can read more about network plug-ins here
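Before deploying, it is worth confirming that RKE can actually reach the nodes: it connects over SSH using the key in ssh_key_path and runs Docker as the configured user. A quick manual check against the first node (adjust the IP, user, and key path to your setup):

ssh -i ~/.ssh/rke ubuntu@192.168.1.51 docker version

If this prints the Docker client and server versions without prompting for a password, rke up should be able to do its work.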

21. Let's deploy the cluster

rke up
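By default, rke up reads a file named cluster.yml from the current directory. If you gave your file a different name (rancher-cluster.yml below is just an example), point to it with the --config flag:

rke up --config rancher-cluster.yml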

22. At the end, you will get something like this

INFO[0304] [addons] CoreDNS deployed successfully       
INFO[0304] [dns] DNS provider coredns deployed successfully 
INFO[0304] [addons] Setting up Metrics Server           
INFO[0304] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0304] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0304] [addons] Executing deploy job rke-metrics-addon 
INFO[0319] [addons] Metrics Server deployed successfully 
INFO[0319] [ingress] Setting up nginx ingress controller 
INFO[0319] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0319] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0319] [addons] Executing deploy job rke-ingress-controller 
INFO[0334] [ingress] ingress controller nginx deployed successfully 
INFO[0334] [addons] Setting up user addons              
INFO[0334] [addons] no user addons defined              
INFO[0334] Finished building Kubernetes cluster successfully 

Check the output for any warnings or errors.

23. When it finishes, you will have these three files in your folder: cluster.rkestate, cluster.yml, and kube_config_cluster.yml. Store these files in a safe place; they contain the state of your cluster and the access credentials for kubectl

Rancher recommends encrypting the cluster.rkestate file.
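A minimal way to do this, assuming gpg is installed on the operator computer, is symmetric encryption with a passphrase:

gpg --symmetric --cipher-algo AES256 cluster.rkestate

This produces cluster.rkestate.gpg; run gpg --decrypt cluster.rkestate.gpg > cluster.rkestate to recover the file before the next rke up or rke remove.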

24. Use the kube_config_cluster.yml file to configure kubectl by exporting KUBECONFIG

export KUBECONFIG=./kube_config_cluster.yml
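Note that the export only lasts for the current shell session. If you want kubectl to find the configuration permanently, one common alternative is to copy the file to the default location (back up any existing config first):

mkdir -p ~/.kube
cp kube_config_cluster.yml ~/.kube/config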

25. Test the kubectl connection to the Kubernetes cluster with get nodes

kubectl get nodes
NAME       STATUS   ROLES                      AGE   VERSION
ubuntu-1   Ready    controlplane,etcd,worker   10m   v1.19.3
ubuntu-2   Ready    controlplane,etcd,worker   10m   v1.19.3
ubuntu-3   Ready    controlplane,etcd,worker   10m   v1.19.3
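You can also verify that all the system pods (CoreDNS, flannel, the ingress controller, and so on) are running:

kubectl get pods --all-namespaces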

In the next post we are going to install the Rancher server on top of the Kubernetes cluster

Clean-Up

To remove the cluster you only need to run the command rke remove in the folder that contains cluster.rkestate and cluster.yml

rke remove
INFO[0000] Running RKE version: v1.2.1                  
Are you sure you want to remove Kubernetes cluster [y/n]:y
...
INFO[0038] Removing local admin Kubeconfig: ./kube_config_cluster.yml 
INFO[0038] Local admin Kubeconfig removed successfully  
INFO[0038] Removing state file: ./cluster.rkestate      
INFO[0038] State file removed successfully              
INFO[0038] Cluster removed successfully        
