Setting up a production Kubernetes cluster with Kubespray
Introduction
In the previous guide, Creating and configuring a Kubernetes cluster (kubeadm), we built a Kubernetes cluster on on-premises servers with kubeadm and got it ready for use. Today we will set up a production-ready Kubernetes cluster with Kubespray.
Kubespray is an open-source project built on Ansible: a set of Ansible playbooks that automate deploying and managing Kubernetes clusters.
Getting started
In this hands-on guide we will set up and run a production Kubernetes cluster with Kubespray, using Kubespray, Helm, NGINX Ingress, Flannel, Prometheus/Grafana, Longhorn, Cert-Manager, and a few other tools.
The exercise requires a minimum of 4 servers: one Ansible server and at least 3 servers for the Kubernetes cluster.
The servers needed for this exercise:
Minimum server requirements
OS | RAM | CPU | Storage | Static IP | Server name
---|---|---|---|---|---
Ubuntu 20.04 | 4GB | 2 vCPU, 1 core | 50GB | Required | ansible-server
Ubuntu 20.04 | 8GB | 8 vCPU, 4 cores | 50GB | Required | k8s-master
Ubuntu 20.04 | 4GB | 4 vCPU, 2 cores | 50GB | Not required | k8s-node1
Ubuntu 20.04 | 4GB | 4 vCPU, 2 cores | 50GB | Not required | k8s-node2
In total we need 4 servers: one Ansible server and 3 servers for the k8s cluster (k8s-master, k8s-node1, k8s-node2).
Configuring the network
Before setting up the Kubernetes cluster, we first need to configure the network for it. Whether the environment is cloud or on-premises, a subnet must be set up, and all of the servers listed above must sit in the same subnet, because they communicate with each other over the local network using their internal IPs.
In this exercise we used Google Cloud servers in the 172.23.0.0/24 subnet. Note the servers' internal IPs: they are all in the 172.23.0.0/24 subnet and can reach each other over the local network. It does not matter whether you use a cloud provider or on-premises hardware; the important point is that the subnet acts as the local network.
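Before going further, it is worth confirming that the servers really can reach each other over their internal IPs. A quick sketch from the ansible-server, using the example addresses from this guide:

# Reachability check over the internal network (IPs are the example addresses used in this guide)
for ip in 172.23.0.16 172.23.0.17 172.23.0.18; do
  ping -c 2 "$ip"
done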
If you use Microsoft Azure or Google Cloud, you can check our Azure and GCP network setup guides on the platform:
-> Setting up networking on Azure
-> Setting up networking on GCP
Setting up the Ansible server
Once the network for the Kubernetes cluster is ready, we need to set up the Ansible server that will run Kubespray.
1-> Update the Ansible server and install the required utilities.
sudo apt update && sudo apt upgrade -y
sudo apt-get install software-properties-common ca-certificates curl gnupg zip unzip -y
2-> Install Python.
Add the custom APT repository:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
Install Python 3.10 (venv, dev):
sudo apt install python3.10 python3.10-venv python3.10-dev
python3 --version
Install pip, the Python package manager:
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
python3 -m pip --version
3-> Install Ansible.
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible -y
With Ansible and Python installed on the Ansible server, we can start setting up Kubespray.
Go to Releases in the Kubespray GitHub repository and download the latest version as a .zip archive.
sudo wget https://github.com/kubernetes-sigs/kubespray/archive/refs/tags/v2.24.1.zip
Extract the archive and rename the directory to kubespray.
sudo unzip v2.24.1.zip
mv kubespray-2.24.1/ ./kubespray
To run the Kubespray Ansible playbook, the ansible-server must be able to SSH into the other servers as root over their internal IPs. So we generate an SSH key on the ansible-server, add its public key to the root user's authorized_keys file on the other servers, and then verify the SSH connection from the ansible-server.
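If password authentication for the root user happens to be enabled on the target servers, ssh-copy-id can do the same thing in one step; this is just a sketch, and the manual method below works in all cases:

# Run on ansible-server after generating the key (IPs are the example addresses used in this guide)
for ip in 172.23.0.16 172.23.0.17 172.23.0.18; do
  ssh-copy-id root@"$ip"
done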
Switch to the root user on the ansible-server and generate an SSH key:
sudo su
ssh-keygen
Go into the .ssh directory, print the id_rsa.pub public key with cat, copy it, and append it to authorized_keys as root on each of the other servers.
On the Ansible server:
cd ~/.ssh/
ls
cat id_rsa.pub
On the other servers:
sudo su
cat >> ~/.ssh/authorized_keys <<EOF
paste the id_rsa.pub public key copied from the ansible server here
EOF
After adding the ansible server's public SSH key to authorized_keys on the other servers, test the SSH connection from the ansible-server to each server as root over their internal IPs.
From the Ansible server:
ssh root@172.23.0.16
ssh root@172.23.0.17
ssh root@172.23.0.18
Once we can SSH into all of them as root over their internal IPs, we can start configuring Kubespray.
Before running Kubespray, create a Python venv for it and install the required Python packages.
On the Ansible server:
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install --upgrade pip
pip install -U -r requirements.txt
After installing the packages Kubespray needs, we create our own cluster configuration from the inventory/sample directory: here we copy the sample files into a directory named mycluster.
cp -rfp inventory/sample inventory/mycluster
The following commands use the inventory.py script to dynamically write the given IP addresses into hosts.yaml.
declare -a IPS=(172.23.0.16 172.23.0.17 172.23.0.18)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
To verify, the file should contain the IPs of the servers we entered.
cat inventory/mycluster/hosts.yaml
Kubespray is almost ready to run; now let's review a few configuration options. By default Kubespray uses Calico for networking; you can change this to suit your requirements. In this exercise we will use Flannel instead of Calico.
To switch from Calico to Flannel, edit the k8s-cluster.yml file.
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
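As a sketch, assuming the sample file still carries its default values, the relevant variable can also be inspected and switched non-interactively:

# Show the current CNI setting
grep -n "^kube_network_plugin:" inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# Switch Calico to Flannel (assumes the default value is "calico")
sed -i 's/^kube_network_plugin: calico/kube_network_plugin: flannel/' \
  inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml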
Next, enable the Helm installation in the addons.yml configuration file. In addons.yml you can also enable any other Kubernetes addons you need.
nano inventory/mycluster/group_vars/k8s_cluster/addons.yml
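A sketch of the same change done non-interactively, assuming the sample file's default value of false:

# Enable the Helm addon
grep -n "^helm_enabled:" inventory/mycluster/group_vars/k8s_cluster/addons.yml
sed -i 's/^helm_enabled: false/helm_enabled: true/' \
  inventory/mycluster/group_vars/k8s_cluster/addons.yml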
Okay, everything is ready; we can now run Kubespray.
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
This command runs the cluster.yml playbook, which creates and configures a Kubernetes cluster on the given servers. Depending on the servers' internet speed and resources, it can take 15 minutes or more.
Once Kubespray has finished, we can log in to the node1 server (k8s-master) and check the cluster status.
To list the Kubernetes cluster nodes:
alias kubectl='sudo kubectl'
kubectl get nodes
To list the pods:
kubectl get pods -A
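Instead of aliasing kubectl to sudo, you can also copy the admin kubeconfig into your user's home directory. This is a sketch; kubeadm-based clusters such as those built by Kubespray keep the file at /etc/kubernetes/admin.conf on control-plane nodes:

# Run on a control-plane node (e.g. k8s-master)
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes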
Okay, we have successfully created our Kubernetes cluster with Kubespray; now we can move on to the next stage, configuring the Kubernetes network.
Configuring the Kubernetes network
After creating the Kubernetes cluster with Kubespray, we need to configure its networking.
MetalLB
MetalLB is a load balancer for bare-metal Kubernetes clusters. In on-premises environments, especially bare-metal ones, services usually need some way to be exposed externally. Unlike cloud environments, where load balancers are available as managed services, on-premises setups often lack that infrastructure.
When a Kubernetes cluster is installed on bare metal, services of type LoadBalancer do not work because there is no external load balancer. This is where MetalLB comes in: it provides a software-based load balancer that works well in on-premises environments and distributes external traffic across the cluster nodes.
3-> Before installing MetalLB, we need to adjust the kube-proxy ConfigMap.
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system
Modifying the kube-proxy ConfigMap: kubectl get configmap kube-proxy fetches the current kube-proxy ConfigMap, and sed then changes the strictARP value from false to true. This modification is required for MetalLB to work correctly. The kubectl diff command above only previews the change; the kubectl apply -f - command below actually applies the modified ConfigMap.
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
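To confirm that the change landed, you can grep the ConfigMap again:

# strictARP should now be reported as true
kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP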
4-> Once the preparation for MetalLB is complete, install MetalLB with Helm. Helm was already installed when Kubespray created the cluster, so there is no need to install it separately.
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
These commands add the metallb Helm repository and update the repositories, then install and start MetalLB in a namespace called metallb-system. Creating a separate namespace for each application or tool is good practice in Kubernetes.
Check MetalLB's status to make sure it is running:
# list all namespaces
kubectl get ns
# list the pods in the metallb-system namespace
kubectl get pods -n metallb-system
5-> After installing MetalLB, we need to configure a MetalLB address pool.
Create a directory named metallb, create a file called address-pool.yaml inside it, and write the YAML configuration below.
mkdir metallb
cd metallb
nano address-pool.yaml
Our MetalLB address pool configuration is as follows (see the official MetalLB documentation):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.23.0.20-172.23.0.40
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
This configuration consists of two parts: IPAddressPool and L2Advertisement.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.23.0.20-172.23.0.40
- apiVersion and kind -> These fields specify the API version and the resource type; in this case it is an IPAddressPool from the metallb.io/v1beta1 API group.
- metadata -> Contains metadata about the resource, including its name and namespace.
- spec -> This section defines the desired state of the resource.
- addresses -> Specifies the range of IP addresses MetalLB may assign to services of type LoadBalancer. In this case it is the range from 172.23.0.20 to 172.23.0.40. When a LoadBalancer service is created, MetalLB assigns it an IP from this range.
If you look at the screenshot, the internal IPs of the k8s cluster servers are shown, and they are all in the 172.23.0.0/24 subnet.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
ipAddressPools -> Links this L2Advertisement resource to the previously defined first-pool IPAddressPool. This association tells MetalLB to use the specified IP address pool for Layer 2 (L2) advertisement.
In short, the configuration declares a pool of IP addresses (IPAddressPool) from which MetalLB can allocate addresses to LoadBalancer-type services. It also sets up a Layer 2 advertisement (L2Advertisement) to announce to network devices that the specified pool of IP addresses is in use. This is especially useful in environments where ARP (Address Resolution Protocol) is used for IP address resolution.
This configuration tells MetalLB to use the specified IP range when allocating external IPs to LoadBalancer-type services, and to announce that information via Layer 2 advertisement so the network resolves those addresses correctly. This is essential for external traffic to reach the right pods in your Kubernetes cluster.
6-> Once the MetalLB configuration file is written, apply it:
kubectl apply -f address-pool.yaml
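To see the pool in action, here is a quick throwaway test; the lb-test name is only for illustration:

# Create a test deployment and expose it as a LoadBalancer service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
# EXTERNAL-IP should come from the 172.23.0.20-172.23.0.40 pool
kubectl get svc lb-test
# Clean up
kubectl delete service lb-test && kubectl delete deployment lb-test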
NGINX Ingress Controller
The NGINX Ingress Controller is a Kubernetes resource and controller that manages external access to services in a Kubernetes cluster, acting as an intelligent manager for HTTP and HTTPS traffic. It handles routing, load balancing, SSL termination, and more.
1-> Install the NGINX Ingress Controller with Helm.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
To check its status:
kubectl get svc -n ingress-nginx
kubectl get pods -n ingress-nginx
As its EXTERNAL-IP it received 172.23.0.20 from the address pool we configured in MetalLB.
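From any machine in the 172.23.0.0/24 subnet you can hit that IP directly; with no Ingress rules defined yet, the controller's default backend typically answers with HTTP 404, which is enough to confirm it is reachable:

curl -s -o /dev/null -w "%{http_code}\n" http://172.23.0.20/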
HAProxy
We use HAProxy to forward incoming external traffic over TCP to the NGINX Ingress inside the Kubernetes cluster.
1-> Install HAProxy 2.8 LTS on the k8s-master server (for Ubuntu 20.04).
sudo apt update && sudo apt upgrade -y
sudo apt-get install --no-install-recommends software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-2.8
sudo apt-get install haproxy=2.8.*
2-> After installing HAProxy, open /etc/haproxy/haproxy.cfg. By default the haproxy.cfg configuration file looks like this:
global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log global
        mode http
        option httplog
        option dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
3-> Configure HAProxy as follows to connect it to our NGINX Ingress Controller.
sudo nano /etc/haproxy/haproxy.cfg
global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log global
        mode tcp
        # option httplog
        option dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend ingress
        mode tcp
        bind :80
        default_backend ingress_servers

backend ingress_servers
        mode tcp
        server s1 172.23.0.20:80 check

frontend ingress_tls
        mode tcp
        bind :443
        default_backend ingress_servers_tls

backend ingress_servers_tls
        mode tcp
        server s2 172.23.0.20:443 check
In short, external traffic enters through HAProxy in TCP mode and reaches the NGINX Ingress Controller in our Kubernetes cluster. HAProxy is pointed at the NGINX Ingress Controller's EXTERNAL-IP shown above, i.e. 172.23.0.20.
4-> Validate the haproxy.cfg configuration, then restart HAProxy and check that it is running.
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
sudo systemctl status haproxy
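As a rough end-to-end check you can send a request through HAProxy from outside the cluster. This is only a sketch: replace <k8s-master-ip> with the k8s-master server's static/public IP, make sure ports 80/443 are open in its firewall, and note that the Host header value here is just a placeholder:

curl -s -o /dev/null -w "%{http_code}\n" -H "Host: test.example.com" http://<k8s-master-ip>/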
This architecture works as shown in the diagram: the NGINX Ingress received an internal IP inside the cluster on the k8s-master server and works with it, while for external access we installed HAProxy configured in TCP mode, so all external traffic in and out passes through HAProxy.
Installing and configuring Cert-Manager
There is a guide on the devops-journey platform for installing Cert-Manager into your Kubernetes cluster: follow Installing and configuring Cert-Manager on Kubernetes to install and configure it.
Installing and configuring Longhorn
Kubernetes Longhorn is a distributed block storage system built specifically for Kubernetes environments. It serves as a reliable and scalable solution for persistent storage needs in Kubernetes clusters. Longhorn integrates seamlessly with Kubernetes and provides persistent storage for the stateful applications running on the platform.
At its core, Longhorn offers features that are essential for managing storage in Kubernetes, such as replication, snapshots, backups, and volume expansion. These features ensure data reliability, availability, and fault tolerance. With Longhorn, admins and DevOps engineers can easily provision and manage storage volumes directly through the Kubernetes API, which simplifies storage management tasks.
1-> Add the Longhorn repository with Helm and install it into the longhorn-system namespace.
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
2-> After installing Longhorn, expose it through NGINX Ingress with a domain.
Once Longhorn is installed, let's look at its services:
kubectl get svc -n longhorn-system
Of these services, we will expose longhorn-frontend behind a domain.
Create a directory named longhorn for Longhorn and, inside it, a configuration file called longhorn-ingress.yaml.
mkdir longhorn
cd longhorn
nano longhorn-ingress.yaml
Our longhorn-ingress.yaml configuration is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: "nginx"
  rules:
  - host: longhorn.xilol.uz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
  tls:
  - hosts:
    - longhorn.xilol.uz
    secretName: longhorn-tls
Point the domain at the k8s-master static IP in your DNS provider, then apply the configuration:
kubectl apply -f longhorn-ingress.yaml
After applying the configuration, open https://longhorn.xilol.uz in a browser (Firefox tends to pick it up faster) and the Longhorn dashboard should appear.
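To verify that Longhorn can actually provision volumes, here is a small hedged test: the test-pvc name is just for illustration, and it assumes the default longhorn StorageClass installed by the chart.

# The longhorn StorageClass should be listed
kubectl get storageclass
# Create a 1Gi test PVC backed by Longhorn
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
# The PVC should reach the Bound state
kubectl get pvc test-pvc
# Clean up when done
kubectl delete pvc test-pvc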
Kubernetes Dashboard
Our Kubernetes cluster is now ready for work. As an exercise, let's install the Kubernetes Dashboard, expose it through the NGINX Ingress Controller, and obtain an SSL/TLS certificate for it from Cert-Manager.
1-> Install the Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml --namespace=kubernetes-dashboard
This command downloads the .yaml configuration from the given URL, creates a namespace called kubernetes-dashboard, and applies the configuration files into that namespace.
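You can check that the dashboard components came up before continuing:

kubectl get pods -n kubernetes-dashboard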
2-> Create a service account.
mkdir dashboard
cd dashboard
nano service-account.yaml
Our service-account.yaml configuration is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f service-account.yaml
This YAML file creates a service account named admin-user in the kubernetes-dashboard namespace.
3-> Create a ClusterRoleBinding for admin-user.
cd dashboard
nano clusterrolebinding.yaml
Our clusterrolebinding.yaml configuration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
This configuration creates a ClusterRoleBinding that binds the admin-user service account to the cluster-admin role.
kubectl apply -f clusterrolebinding.yaml
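To check that the binding actually grants cluster-admin rights, kubectl can evaluate permissions as the service account; the command should answer "yes":

kubectl auth can-i '*' '*' --as=system:serviceaccount:kubernetes-dashboard:admin-user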
4-> Expose the Kubernetes Dashboard with a domain via NGINX Ingress.
nano dashboard-ingress.yaml
Our dashboard-ingress.yaml configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.org/ssl-services: "kubernetes-dashboard"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: "nginx"
  rules:
  - host: dashboard.xilol.uz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.xilol.uz
    secretName: k8s-dashboard-tls
A short explanation of the key lines: metadata.name, k8s-dashboard-ingress, is the name of the Ingress, created in the kubernetes-dashboard namespace given on the line below it. The cert-manager.io/cluster-issuer annotation references letsencrypt-prod, the ClusterIssuer we configured when setting up Cert-Manager. ingressClassName is nginx; under host we put our domain, and all requests arriving at it are routed to the kubernetes-dashboard service in the kubernetes-dashboard namespace on port 443. The tls section at the bottom creates and configures the TLS certificate.
The screenshot shows the services and ports in the kubernetes-dashboard namespace; the kubernetes-dashboard service is running on port 443.
kubectl get svc -n kubernetes-dashboard
This configuration creates an Ingress resource with NGINX annotations to securely expose the Kubernetes Dashboard over HTTPS. It defines the host, the TLS settings, and the SSL redirection.
In your DNS provider, point the domain at the k8s-master static IP. In this example I own the xilol.uz domain and am adding a subdomain called dashboard. The screenshot shows a sample A record; replace 0.0.0.0 with your k8s-master server's static IP.
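Before applying the Ingress, you can confirm the record resolves to the right address (dashboard.xilol.uz is just the example domain used in this guide):

dig +short dashboard.xilol.uz
# or: nslookup dashboard.xilol.uz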
kubectl apply -f dashboard-ingress.yaml
After applying the configuration, opening dashboard.xilol.uz in a browser should redirect to HTTPS and show the Kubernetes Dashboard login page.
6-> Get a token to log in to the Kubernetes Dashboard
kubectl -n kubernetes-dashboard create token admin-user
Once we have the token, we can log in to the Kubernetes Dashboard.
Kubernetes Monitoring
To monitor your Kubernetes cluster, see our Kubernetes Monitoring guide on the platform.
Finally, we have a Kubernetes cluster ready for work, and it was fairly easy and painless. In this exercise we created a Kubernetes cluster on on-premises servers with Kubespray and configured its networking: Flannel as the network solution, MetalLB for load balancing, and NGINX Ingress Controller together with HAProxy for external traffic. We installed and configured Cert-Manager to automatically obtain free SSL/TLS certificates for our applications, and used the Prometheus/Grafana stack to monitor the cluster. Once the cluster was ready, as an exercise we installed the Kubernetes Dashboard, connected a domain, and secured it with an SSL certificate.
Additional resources
- Introduction to Kubernetes
- Kubernetes Architecture
- Kubernetes Objects
- Creating and configuring a Kubernetes cluster (kubeadm)
- Installing and configuring Cert-Manager on Kubernetes
- Kubernetes Monitoring
- Installing and configuring Argo CD on Kubernetes
- Kubernetes CI/CD | Github Actions + Argo CD | GitOps
Date: 2024.04.11 (April 11, 2024)
Last updated: 2024.04.11 (April 11, 2024)
Muallif: Otabek Ismoilov
Telegram | Github | LinkedIn