Kubernetes: Deploy a production-ready cluster with Ansible.

  • Can be deployed on AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal
  • Highly available cluster
  • Composable (Choice of the network plugin for instance)
  • Supports most popular Linux distributions
  • Continuous integration tests

Hardware requirements

A production-ready Kubernetes deployment requires:

  • 4 GB or more of RAM per machine (any less will leave little room for your apps)
  • 2 CPUs or more (less than this will cause a deployment error)
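A quick way to confirm each node meets these minimums before deploying (standard coreutils commands, so they should work on any of the supported distributions):

free -h | awk '/^Mem:/ {print "RAM: " $2}'   # total memory
nproc                                        # CPU count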

Environment

  • 3 Masters
  • 3 Workers
  • 3 Etcd Hosts
  • External load balancer running HAProxy for the Kubernetes API
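The host names above are reused throughout this guide. If you have no DNS in the lab, one option is to map them in /etc/hosts on the Ansible host (the IPs below are hypothetical placeholders; substitute your own):

cat >> /etc/hosts <<'EOF'
10.10.10.11 master1
10.10.10.12 master2
10.10.10.13 master3
10.10.10.21 worker1
10.10.10.22 worker2
10.10.10.23 worker3
10.10.10.31 etcd1
10.10.10.32 etcd2
10.10.10.33 etcd3
10.10.10.40 loadbalancer
EOF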

Ansible host installation

Before installing the requirements, create an SSH key pair on the Ansible host:

ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/demo/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/demo/.ssh/id_rsa.
Your public key has been saved in /home/demo/.ssh/id_rsa.pub.
The key fingerprint is:
4a:dd:0a:c6:35:4e:3f:ed:27:38:8c:74:44:4d:93:67 demo@a
The key's randomart image is:
+--[ RSA 2048]----+
| .oo. |
| . o.E |
| + . o |
| . = = . |
| = S = . |
| o + = + |
| . o + o . |
| . o |
| |
+-----------------+
Copy the public key to every node in the inventory:

ssh-copy-id root@host

Install the dependencies, clone Kubespray, and install its Python requirements:

yum install -y python3-pip python-netaddr
cd /opt/
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
sudo pip3 install -r requirements.txt
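Kubespray ships a sample inventory in the repository; copying it creates the mycluster layout that the rest of this guide edits (the directory name is my choice, any name works):

cp -rfp inventory/sample inventory/mycluster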
Edit inventory/mycluster/inventory.ini and declare the hosts below. Three variables matter here:

  • ansible_host : set to the node's public IP, which Ansible uses to connect
  • ip : the IP address Kubernetes services bind to (normally the private IP)
  • etcd_member_name : must be set for every member of the etcd cluster
[all]
etcd1 ansible_host=publicip ip=privateip etcd_member_name=etcd1
etcd2 ansible_host=publicip ip=privateip etcd_member_name=etcd2
etcd3 ansible_host=publicip ip=privateip etcd_member_name=etcd3
master1 ansible_host=publicip ip=privateip
master2 ansible_host=publicip ip=privateip
master3 ansible_host=publicip ip=privateip
worker1 ansible_host=publicip ip=privateip
worker2 ansible_host=publicip ip=privateip
worker3 ansible_host=publicip ip=privateip
[kube-master]
master1
master2
master3
[etcd]
etcd1
etcd2
etcd3
[kube-node]
worker1
worker2
worker3
[calico-rr]

[vault]
master1
master2
master3
Each host entry follows the same pattern; for a node where the public and private IPs are the same it looks like this:

[all]
host ansible_host=10.10.10.100 ip=10.10.10.100
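With the inventory in place, it is worth checking that Ansible can reach every node before launching anything (assuming the inventory path used above):

ansible -i inventory/mycluster/inventory.ini all -m ping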

HAProxy setup

If you prefer a manual installation, the HAProxy documentation can help; below the setup is automated with a short Ansible playbook.

mkdir /opt/playbooks

Create /opt/playbooks/haproxy.yml with the following playbook:

- hosts: loadbalancer,masters
  gather_facts: yes
  become: true
  tasks:
    - name: Install Haproxy packages
      yum:
        name: haproxy
        state: present
      when: "'loadbalancer' in group_names"
    - name: Haproxy conf template
      template:
        src: ./haproxy.conf.j2
        dest: /etc/haproxy/haproxy.cfg
        mode: 0644
      when: "'loadbalancer' in group_names"
    - name: Semanage allows http 6443 port
      seport:
        ports: "{{ item }}"
        proto: tcp
        setype: http_port_t
        state: present
      when: "'loadbalancer' in group_names"
      loop:
        - 6443
        - 9000
    - name: Start Haproxy
      service:
        name: haproxy
        enabled: yes
        state: restarted
      when: "'loadbalancer' in group_names"
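Before running it, a quick syntax check catches YAML indentation mistakes early:

ansible-playbook --syntax-check /opt/playbooks/haproxy.yml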
Create the template ./haproxy.conf.j2 next to the playbook:

global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option httplog
    option dontlognull
    option http-server-close
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen stats :9000
    stats enable
    stats realm Haproxy\ Statistics
    stats uri /haproxy_stats
    stats auth admin:password
    stats refresh 30
    mode http

frontend main *:{{ balancer_listen_port|default('6443') }}
    # TLS passthrough to the apiserver must stay in TCP mode
    mode tcp
    option tcplog
    default_backend {{ balancer_name | default('mgmt6443') }}

backend {{ balancer_name | default('mgmt6443') }}
    balance source
    mode tcp
    # MASTERS 6443
{% for host in groups.masters %}
    server {{ host }} {{ hostvars[host].ansible_default_ipv4.address }}:6443 check
{% endfor %}
Add the load balancer and masters groups to the Ansible hosts file:

vi /etc/ansible/hosts

[loadbalancer]
loadbalancer

[masters]
master1
master2
master3
From /opt/playbooks, run the playbook:

ansible-playbook haproxy.yml
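A few quick checks on the load balancer confirm the configuration is valid, the ports are listening, and the stats page answers (the credentials come from the stats auth line in the template above):

haproxy -c -f /etc/haproxy/haproxy.cfg
ss -tlnp | grep -E '6443|9000'
curl -u admin:password http://localhost:9000/haproxy_stats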

Kubespray setup

Go to the Kubespray directory and tune the following deployment variables.

  • dashboard_enabled : Consider disabling the default Kubernetes dashboard for security reasons and deploying a hardened configuration after this deployment.
  • local_volume_provisioner_enabled and cert_manager_enabled : Enable the local volume provisioner so persistent volumes can be used, and enable cert-manager to later provision SSL certificates automatically through Let's Encrypt.
vi inventory/mycluster/group_vars/k8s-cluster/addons.yml

...
dashboard_enabled: false
local_volume_provisioner_enabled: true
cert_manager_enabled: true
  • kube_network_plugin : Choose the SDN plugin; I normally use Calico.
  • kubeconfig_localhost : Makes Kubespray generate a kubeconfig file on the machine used to run Kubespray.
vi inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

...
kube_network_plugin: calico
kubeconfig_localhost: true
vi inventory/mycluster/group_vars/all/all.yml

## External LB example config
apiserver_loadbalancer_domain_name: "loadbalancer.local.lab"
loadbalancer_apiserver:
  address: 192.168.15.198
  port: 6443

#Upstream dns servers
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
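The apiserver_loadbalancer_domain_name must resolve to the HAProxy host from every node. Without DNS, one option is an /etc/hosts entry on each machine (the address matches the example config above):

echo "192.168.15.198 loadbalancer.local.lab" >> /etc/hosts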
Deploy the cluster:

ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml

If the run fails due to a docker-ce-cli version conflict on the nodes, downgrade the package and rerun the playbook, this time passing the private key and asking for the sudo password:

yum downgrade docker-ce-cli-19.03.7-3.el7.x86_64 -y
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b -v --private-key=~/.ssh/id_rsa -K

To wipe the cluster and start over, run the reset playbook:

ansible-playbook -i inventory/mycluster/inventory.ini reset.yml

Deployment tests

Install the Kubernetes client and point it at the admin kubeconfig that Kubespray generated:

yum install -y kubernetes-client
export KUBECONFIG=/opt/kubespray/inventory/mycluster/artifacts/admin.conf
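With the kubeconfig exported, a quick sanity check is that every node has registered and reports Ready:

kubectl get nodes -o wide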
[root@host  ~]# kubectl cluster-info
Kubernetes master is running at https://loadbalancer.local.lab:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@host  ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://loadbalancer.local.lab:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Deploy a sample nginx application and expose it through a NodePort service:

kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl expose deployment/nginx-deployment --type="NodePort" --port 80
[root@master1 ~]# kubectl get service nginx-deployment
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   NodePort   10.233.9.255   <none>        80:31660/TCP   4m7s
[root@master1 ~]# curl http://10.233.9.255
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
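The service is also reachable from outside the cluster through the NodePort shown above (31660 in this run), using any node's address:

curl http://worker1:31660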

Launching the Dashboard

If you forgot to disable it during the deployment, just run this command to delete it:

kubectl delete deployments kubernetes-dashboard -n kube-system

Then deploy the recommended manifest, create a service account, grant it cluster-admin, and retrieve its login token:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc2/aio/deploy/recommended.yaml
kubectl create serviceaccount fajlinux -n default
kubectl create clusterrolebinding fajlinux-admin --clusterrole=cluster-admin --serviceaccount=default:fajlinux
kubectl get secret $(kubectl get serviceaccount fajlinux -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
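To reach the dashboard itself, one common approach is kubectl proxy, which exposes it on localhost (the URL below is the standard path for the v2 recommended deployment, which installs into the kubernetes-dashboard namespace):

kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the token from the previous command on the login screen.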
