Setting Up a Kubernetes Cluster

Creating and configuring a Kubernetes cluster (kubeadm)

A complete guide to creating and configuring a Kubernetes cluster on on-premise servers using kubeadm.

In the dynamic landscape of modern IT infrastructure, orchestrating and managing containerized applications has become a key aspect of ensuring scalability, reliability, and ease of deployment. Kubernetes, the open-source container orchestration platform, has emerged as the industry standard, allowing organizations to simplify the management and orchestration of containers.

This guide provides step-by-step instructions for creating and configuring a Kubernetes cluster on-premise with kubeadm. Kubeadm is a tool designed specifically to simplify the process of bootstrapping a Kubernetes cluster, making it convenient for both beginners and experienced practitioners.

Whether you are a system administrator, a DevOps engineer, or simply someone who wants to explore the world of container orchestration, this guide will walk you through setting up a Kubernetes cluster on your on-premise infrastructure. By the end of this guide, you will have a working Kubernetes cluster ready to deploy and manage containerized applications with ease.

Getting started

In this exercise, we will create a Kubernetes cluster with kubeadm on on-premise servers and configure it to be ready for use. We will use HAProxy, MetalLB, NGINX Ingress, Flannel, Helm, Longhorn, and other technologies.

The servers required for this exercise are listed below.

Minimum server requirements

OS             RAM    CPU             Storage   Static IP      Server name
Ubuntu 20.04   8GB    8 vCPU 4 core   50GB      required       k8s-master
Ubuntu 20.04   4GB    4 vCPU 2 core   50GB      not required   k8s-node1
Ubuntu 20.04   4GB    4 vCPU 2 core   50GB      not required   k8s-node2
Ubuntu 20.04   4GB    4 vCPU 2 core   50GB      not required   k8s-node3

In total we need 4 servers: one k8s-master server and at least three node servers (k8s-node1, k8s-node2, k8s-node3).

The k8s-master server must have a static IP. The remaining k8s nodes do not need one as long as they are on the same network and subnet; if they are on different network subnets, they will also need static IPs.

Below is the list of servers I have prepared for the Kubernetes cluster. If you look closely, their internal IP addresses are in the 10.128.0.0/24 subnet, which means these servers can communicate with each other over their internal IPs. That is why only my k8s-master server has a static IP. The k8s-master needs a static IP so that it can communicate with other servers (for example, a load balancer) and so that applications can be exposed with a domain through the NGINX Ingress.

Creating a k8s cluster with kubeadm

Once the servers for our environment are prepared, we can start the exercise. We will use kubeadm to create the Kubernetes cluster. To make the work much easier, I previously wrote a bash script that creates a Kubernetes cluster with kubeadm. In this exercise we will use that bash script to create the k8s cluster. SSH into all of the servers and run the script as shown below.

The k8s installer script is split into two parts: one for the k8s-master and one for the k8s nodes. The K8s installer bash script is open source and published on GitHub.

1-> Log in to the k8s-master server and run the script with the following command.

curl -sSL https://raw.githubusercontent.com/ismoilovdevml/devops-tools/master/Kubernetes/k8s-installer/k8s-master.sh | bash

The script should start running as shown below.

When the bash script finishes its work successfully, it should look like this.

2-> When the bash script on k8s-master finishes successfully, it generates a kubeadm join command for attaching the k8s-node servers. Copy this command carefully; with it we can join the Kubernetes node servers to k8s-master.

The command is shown in the image.

kubeadm join 10.128.0.45:6443 --token snydd0.hdwia5qbrc455aro \
         --discovery-token-ca-cert-hash sha256:ac02bc09d36903d97ceff3e5437b5386dcc5bf25ed90c8ce03dee9ae80061672
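
If you did not copy the join command in time, or the token has expired (kubeadm tokens are valid for 24 hours by default), you do not need to re-run the script: kubeadm can print a fresh join command on the master. A minimal sketch; the token and hash in its output will differ from the ones shown above.

# run on k8s-master: prints a new "kubeadm join ..." command with a fresh token
sudo kubeadm token create --print-join-command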

3-> On the k8s nodes, run the separate bash script written for the k8s nodes.

curl -sSL https://raw.githubusercontent.com/ismoilovdevml/devops-tools/master/Kubernetes/k8s-installer/k8s-worker.sh | bash

The bash script should start running as shown below.

When the bash script finishes its work successfully, it should look like this.

4-> After the bash script has run successfully on k8s-master and on all the nodes, we need to join the k8s nodes to k8s-master using the kubeadm join command printed on k8s-master, i.e. run the command we were given below on the node servers with sudo.

sudo kubeadm join 10.128.0.45:6443 --token snydd0.hdwia5qbrc455aro --discovery-token-ca-cert-hash sha256:ac02bc09d36903d97ceff3e5437b5386dcc5bf25ed90c8ce03dee9ae80061672

After this command has completed successfully on all of our nodes, let's verify it.

5-> To see that the k8s node servers have joined k8s-master, log in to the k8s-master server and list the k8s nodes with the following command.

kubectl get nodes

At this point the joined nodes are listed, but their STATUS is NotReady.
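
For illustration, the output at this stage looks roughly like the following (node names, ages, and the Kubernetes version will differ in your cluster); note the NotReady status:

NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   8m    v1.28.x
k8s-node1    NotReady   <none>          3m    v1.28.x
k8s-node2    NotReady   <none>          2m    v1.28.x
k8s-node3    NotReady   <none>          2m    v1.28.x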

Configuring the network

When you create a Kubernetes cluster with kubeadm, the cluster is initially installed with the core components, but it does not come with a networking solution configured by default. A networking solution is essential for communication between the different nodes and for the pods running on those nodes to talk to each other. At this stage, when you run the kubectl get nodes command, the nodes are not ready because no network solution is in place. The NotReady status means the nodes are not yet fully operational.

In this case we will use Flannel.

1-> Install Flannel into our Kubernetes cluster as follows.

NOTE: From this point on, we will work only on the k8s-master server.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Once Flannel has been deployed successfully, checking with the kubectl get nodes command shows that the STATUS of all our nodes is now Ready.
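
If you want to double-check Flannel itself, you can also list its pods. Recent Flannel releases install into a kube-flannel namespace (older manifests used kube-system), so adjust the namespace if needed:

# one kube-flannel-ds pod should be Running on every node
kubectl get pods -n kube-flannel -o wide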

2-> To see how many pods are running in our Kubernetes cluster, use the following command.

kubectl get pods -A

MetalLB

MetalLB is a load balancer for bare-metal Kubernetes clusters. In on-premise environments, especially those on bare-metal infrastructure, services often need a way to be reachable from outside. Unlike cloud environments, where load balancers are available as managed services, on-premise setups may not have such infrastructure.

When kubeadm is used to install a Kubernetes cluster on bare metal, services of type LoadBalancer do not work because there is no external load balancer. This is where MetalLB comes in. MetalLB provides a software-based load balancer that works well in on-premise environments. It distributes external traffic across the nodes in the cluster.

3-> Before installing MetalLB, we need to adjust the kube-proxy ConfigMap.

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

Modifying the kube-proxy ConfigMap: the kubectl get configmap kube-proxy command fetches the current kube-proxy ConfigMap, and sed changes the strictARP value from false to true. This modification is required for MetalLB to work correctly. The first pipeline only previews the change with kubectl diff; the kubectl apply -f - command is then used to apply the modified ConfigMap.

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
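
To confirm the change landed, you can grep the ConfigMap again; it should now show strictARP: true:

kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP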

4-> Once the preparation for MetalLB is complete, install MetalLB with Helm. Helm was already installed when we created the Kubernetes cluster with the bash script, so there is no need to install it separately.

helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace

helm adds the MetalLB repository and updates the repositories, then installs and starts MetalLB in the metallb-system namespace. Using a separate namespace for each application or tool is good practice in Kubernetes.

To confirm that MetalLB is running, check its status.

# list all namespaces
kubectl get ns
# view the pods in the metallb-system namespace
kubectl get pods -n metallb-system

5-> After installing MetalLB, we need to configure a MetalLB address pool.

Create a directory named metallb, create an address-pool.yaml file inside it, and write the YAML configuration.

mkdir metallb
cd metallb
nano address-pool.yaml

Our MetalLB address pool configuration is shown below. MetalLB official documentation

metallb/address-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.128.0.200-10.128.0.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

This configuration consists of two parts: IPAddressPool and L2Advertisement.

metallb/address-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.128.0.200-10.128.0.220
  • apiVersion and kind -> These fields specify the API version and the resource type. In this case, it is an IPAddressPool in the metallb.io/v1beta1 API group.

  • metadata -> Contains metadata about the resource, including its name and namespace.

  • spec -> This section defines the desired state of the resource.

  • addresses -> Defines the range of IP addresses that MetalLB may allocate to services of type LoadBalancer. In this case it is the range from 10.128.0.200 to 10.128.0.220. When a service of type LoadBalancer is created, MetalLB assigns it an IP from this range.

If you look at the image, the internal IPs of the k8s cluster servers are shown, and they are in the 10.128.0.0/24 subnet.

metallb/address-pool.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

ipAddressPools -> Binds this L2Advertisement resource to the previously defined first-pool IPAddressPool. This binding tells MetalLB that the specified IP address pool should be used for Layer 2 (L2) advertisement.

The configuration essentially declares a pool of IP addresses (IPAddressPool) from which MetalLB can allocate addresses for services of type LoadBalancer. It also sets up a Layer 2 advertisement (L2Advertisement) to announce to the network devices that the specified IP address pool is in use. This is especially useful in environments where ARP (Address Resolution Protocol) is used for IP address resolution.

This configuration instructs MetalLB to use the specified IP address range when allocating external IP addresses to services of type LoadBalancer, and announces that information via Layer 2 advertisement so the network resolves it correctly. This is essential for external traffic to reach the appropriate pods in your Kubernetes cluster.

6-> After writing the MetalLB configuration file, apply it.

kubectl apply -f address-pool.yaml
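
To make sure MetalLB really hands out addresses from this pool, you can create a throwaway LoadBalancer service and then delete it. This is just a sanity check; the nginx-test name is arbitrary and not part of the guide's setup.

# create a test deployment and expose it as a LoadBalancer service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
# EXTERNAL-IP should be an address from 10.128.0.200-10.128.0.220
kubectl get svc nginx-test
# clean up
kubectl delete service,deployment nginx-test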

NGINX Ingress Controller

The NGINX Ingress Controller is a Kubernetes resource and controller that manages external access to services in a Kubernetes cluster. It acts as an intelligent HTTP and HTTPS traffic manager, handling routing, load balancing, SSL termination, and more.

1-> Install the NGINX Ingress Controller with Helm.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

To check its status:

kubectl get svc -n ingress-nginx
kubectl get pods -n ingress-nginx

As its EXTERNAL-IP it received 10.128.0.200 from the address pool we configured through MetalLB.
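
A quick way to verify the controller is reachable on that address is to curl it from one of the cluster servers; with no Ingress rules defined yet, ingress-nginx typically answers with a 404 from its default backend, which is expected:

curl -I http://10.128.0.200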

HAProxy

We use HAProxy to forward external incoming traffic over TCP to the ingress inside the Kubernetes cluster.

1-> Install HAProxy 2.8 LTS on the k8s-master server (for Ubuntu 20.04).

sudo apt update && sudo apt upgrade -y
sudo apt-get install --no-install-recommends software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-2.8
sudo apt-get install haproxy=2.8.\*

2-> After installing HAProxy, open /etc/haproxy/haproxy.cfg. Initially, the haproxy.cfg configuration file looks like this by default.

/etc/haproxy/haproxy.cfg
global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon
 
	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private
 
	# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
 
defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

3-> Configure HAProxy as follows to connect it to our NGINX Ingress Controller.

sudo nano /etc/haproxy/haproxy.cfg
/etc/haproxy/haproxy.cfg
global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon
 
    # Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private
 
	# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
 
defaults
	log	global
	mode	tcp
#	option	httplog
	option	dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http
 
 
frontend ingress
   mode tcp
   bind :80
   default_backend ingress_servers
 
backend ingress_servers
   mode tcp
   server s1 10.128.0.200:80 check
 
frontend ingress_tls
   mode tcp
   bind :443
   default_backend ingress_servers_tls
 
backend ingress_servers_tls
   mode tcp
   server s2 10.128.0.200:443 check

In short, external requests enter HAProxy and are forwarded in TCP mode to the NGINX Ingress Controller in our Kubernetes cluster. HAProxy is given the NGINX Ingress Controller EXTERNAL-IP shown above, i.e. 10.128.0.200.

4-> Validate the haproxy.cfg configuration, then restart HAProxy to apply it.

haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
sudo systemctl status haproxy

This architecture works as shown in the image below. NGINX Ingress received an internal IP inside the k8s-master server and works with it; for external access we installed HAProxy and configured it in TCP mode, so all external traffic passes through HAProxy.
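
To check the whole chain from outside, you can send a request to the k8s-master static IP (HAProxy listens on ports 80 and 443 there). The address below is a placeholder for your own static IP; again, a 404 from the ingress default backend is a normal answer while no applications are exposed yet:

# <k8s-master-static-ip> is a placeholder for your master's static IP
curl -I http://<k8s-master-static-ip>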

Installing and configuring Cert-Manager

There is a guide on the devops-journey platform for installing Cert-Manager into our Kubernetes cluster; you can install and configure it by following the Installing and configuring Cert-Manager on Kubernetes guide.

Installing and configuring Longhorn

Kubernetes Longhorn is a distributed block storage system designed specifically for Kubernetes environments. It serves as a reliable and scalable solution for persistent storage needs in Kubernetes clusters. Longhorn integrates seamlessly with Kubernetes, providing persistent storage for stateful applications running on the platform.

At its core, Longhorn offers features that are essential for managing storage in Kubernetes, such as replication, snapshots, backups, and volume expansion. These capabilities ensure data reliability, availability, and fault tolerance. With Longhorn, admins and DevOps engineers can easily provision and manage storage volumes directly from the Kubernetes API, which simplifies storage management tasks.

1-> Add the Longhorn repository with Helm and install it into the longhorn-system namespace.

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
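
After the Helm install, watch the pods in the longhorn-system namespace until they are all Running. Note that Longhorn's documentation lists open-iscsi as a prerequisite on every node; if manager pods stay in a crash loop, installing it there is a likely fix (a hedged suggestion, not part of the original script):

# on k8s-master: all longhorn-system pods should eventually be Running
kubectl get pods -n longhorn-system
# on each node, if needed (Longhorn prerequisite on Ubuntu):
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid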

2-> After installing Longhorn, we will expose it through NGINX Ingress with a domain name.

Once Longhorn is installed, let's look at the services.

kubectl get svc -n longhorn-system

Of these services we will expose longhorn-frontend by attaching a domain to it.

Create a directory named longhorn for Longhorn and, inside it, create a configuration file named longhorn-ingress.yaml.

mkdir longhorn
cd longhorn
nano longhorn-ingress.yaml

Our longhorn-ingress.yaml configuration is as follows.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: "nginx"
  rules:
  - host: longhorn.xilol.uz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
  tls:
  - hosts:
    - longhorn.xilol.uz
    secretName: longhorn-tls

In our DNS host we point the domain at the k8s-master static IP address and then apply the configuration.

kubectl apply -f longhorn-ingress.yaml

After applying the configuration, when you open https://longhorn.xilol.uz in a browser (Firefox tends to load it faster), the Longhorn dashboard should open.
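
If the dashboard does not load or the certificate is not trusted, it is worth checking the Ingress and the cert-manager Certificate it requested (the Certificate resource exists only if Cert-Manager from the previous section is installed):

kubectl get ingress -n longhorn-system
# READY should be True once Let's Encrypt has issued the certificate
kubectl get certificate -n longhorn-system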

Kubernetes Dashboard

Our Kubernetes cluster is now ready to use and we can start working. For practice, let's install the Kubernetes Dashboard in the cluster, expose it through the NGINX Ingress Controller, obtain an SSL/TLS certificate from Cert-Manager, and bring it up.

1-> Install the Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml --namespace=kubernetes-dashboard

This command downloads the .yaml configuration from the given URL, creates a namespace named kubernetes-dashboard, and applies the configuration files to the kubernetes-dashboard namespace to start the Dashboard.
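
Before moving on, make sure the Dashboard pods have come up:

kubectl get pods -n kubernetes-dashboard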

2-> Create a service account.

mkdir dashboard
cd dashboard
nano service-account.yaml

Our service-account.yaml configuration is as follows.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f service-account.yaml

This YAML file creates a service account named admin-user in the kubernetes-dashboard namespace.

3-> Create a ClusterRoleBinding for admin-user.

cd dashboard
nano clusterrolebinding.yaml

Our clusterrolebinding.yaml configuration.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

This configuration creates a ClusterRoleBinding that binds the admin-user service account to the cluster-admin role.

kubectl apply -f clusterrolebinding.yaml
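
Optionally, you can confirm that the binding exists and that the service account really has cluster-admin rights by impersonating it:

kubectl get clusterrolebinding admin-user
# should print "yes"
kubectl auth can-i '*' '*' --as=system:serviceaccount:kubernetes-dashboard:admin-user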

4-> Expose the Kubernetes Dashboard with a domain through NGINX Ingress.

nano dashboard-ingress.yaml

Our dashboard-ingress.yaml configuration.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.org/ssl-services: "kubernetes-dashboard"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: "nginx"
  rules:
  - host: dashboard.xilol.uz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.xilol.uz
    secretName: k8s-dashboard-tls

A brief explanation of the highlighted lines in the code. In metadata, k8s-dashboard-ingress is the name of the Ingress, created in the namespace given on the next line, kubernetes-dashboard. Next, when we installed Cert-Manager and configured the ClusterIssuer we named it letsencrypt-prod, and that is used here as the issuer. ingressClassName is nginx; in host we write our domain, and all requests arriving for it are routed to port 443 of the kubernetes-dashboard service in the kubernetes-dashboard namespace. In the section below that, the TLS certificate is created and configured.

The image shows the services and their ports in the kubernetes-dashboard namespace. The kubernetes-dashboard service is running on port 443.

This configuration creates an Ingress resource with NGINX annotations to securely expose the Kubernetes Dashboard over HTTPS. It specifies the host, the TLS settings, and the SSL redirection.

In your DNS hosting, add the k8s-master static IP address for your domain. In this example I have the xilol.uz domain and am adding dashboard as a subdomain. The image shows an example A (host) record; replace 0.0.0.0 with the static IP address of your k8s-master server.

kubectl apply -f dashboard-ingress.yaml

After applying the configuration, when we open dashboard.xilol.uz in a browser it should load over HTTPS and the Kubernetes Dashboard login screen should appear.

6-> Obtain a token to log in to the Kubernetes Dashboard

kubectl -n kubernetes-dashboard create token admin-user

After obtaining the token, we can log in to the Kubernetes Dashboard.
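
Tokens created this way are short-lived by default. If you want a longer session for testing, kubectl create token accepts a --duration flag, for example:

kubectl -n kubernetes-dashboard create token admin-user --duration=24h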

We finally have a Kubernetes cluster ready for use, and we can even say it was easy and painless. In this exercise we created a Kubernetes cluster with kubeadm on on-premise servers and configured the network. On the on-premise servers we used Flannel as the network solution, MetalLB for load balancing, and the NGINX Ingress Controller together with HAProxy for external traffic. We installed and configured Cert-Manager to obtain free SSL/TLS certificates for our applications automatically. Once the cluster was ready, we installed the Kubernetes Dashboard for practice, attached a domain, obtained an SSL certificate, and made it ready for use.

Additional