Creating a PostgreSQL Cluster

Building a highly available PostgreSQL cluster with Patroni and HAProxy


Introduction

In today's digital landscape, ensuring the high availability and reliability of databases is critical for businesses and organizations of all sizes. PostgreSQL, known for its robustness, scalability, and extensibility, serves as the foundation of many critical applications. To improve the uptime and stability of PostgreSQL databases, a highly available cluster architecture is needed.

This technical guide walks you through the process of building a highly available PostgreSQL cluster using Patroni and HAProxy.

By the end of this guide you will understand how to design, implement, and maintain a resilient PostgreSQL cluster architecture, allowing you to meet high availability and fault tolerance requirements in your database infrastructure. Let's start this journey toward a solid PostgreSQL cluster that withstands potential failures and provides uninterrupted access to your valuable data.

Getting Started

Patroni is an open-source tool designed to automate the management of PostgreSQL clusters, with a focus on high availability and reliability. It simplifies deploying and operating PostgreSQL instances across multiple nodes by continuously monitoring them with health checks and orchestrating automated failover procedures when failures or serious problems occur.

Patroni's key features include automated failover, leader election based on distributed consensus algorithms, centralized configuration management, seamless integration with ecosystem tools such as pgBouncer and HAProxy, and extensibility for custom configuration.

PgBouncer is a lightweight connection pooler for PostgreSQL databases. It sits between client applications and PostgreSQL database servers, acting as an intermediary that manages and optimizes database connections. It is an important component of database architectures, especially in environments with frequently dropped connections or demanding workloads, improving the efficient use of database resources and the responsiveness of client applications.

etcd is a distributed key-value store designed for building highly reliable distributed systems. Originally developed by CoreOS, it provides a dependable way of storing data across a cluster of machines while guaranteeing consistency and fault tolerance. With its simple API and solid design, etcd simplifies the development of distributed applications and allows painless scaling and flexibility in dynamic environments.
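
In this setup, Patroni keeps all of its cluster state under the /service namespace in etcd; once the cluster is up, you can inspect those keys directly. A minimal sketch, assuming the etcd v3 API, no TLS, and the sample etcd IP used throughout this guide:

# On the etcd server: list the keys Patroni stores for the cluster
ETCDCTL_API=3 etcdctl --endpoints=http://170.20.0.4:2379 get /service/postgres-cluster --prefix --keys-only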

In this exercise we use the production-grade Ansible playbook github.com/vitabaks/postgresql_cluster. A version adapted for this exercise is available at github.com/devops-journey-uz/postgresql_cluster.

Operating systems supported by this Ansible playbook:

  • Debian: 10, 11, 12
  • Ubuntu: 18.04, 20.04, 22.04
  • CentOS: 7, 8
  • CentOS Stream: 8, 9
  • Oracle Linux: 7, 8, 9
  • Rocky Linux: 8, 9
  • AlmaLinux: 8, 9

Architecture

In this exercise we build the architecture shown in the figure below: one etcd server, one load-balancer server, one PostgreSQL master, and two replicas.

[Image: PostgreSQL cluster architecture]

The next figure shows the network architecture.

[Image: network architecture]

For our PostgreSQL cluster we have a db-network, and inside it a subnet named db-subnet with the 170.20.0.0/16 IP range. All servers communicate with each other over internal IPs; only the load-balancer server has a static (public) IP address. If you use Azure, you can build this network by following the Azure network setup guide.

Setting up the environment

You can do this exercise in a cloud or on-premises environment. All we need is a single network and subnet so the servers can communicate over internal IPs. A minimum of six servers is required for this exercise.

The server IP addresses used in this exercise are only samples, listed here so you don't get confused while following the guide.

RAM   CPU              Storage   Internal IP   Server name
8GB   4 vCPU (2 core)  100GB     170.20.0.4    etcd
8GB   4 vCPU (2 core)  50GB      170.20.0.5    load-balancer
8GB   4 vCPU (2 core)  50GB      170.20.0.6    node1
8GB   4 vCPU (2 core)  50GB      170.20.0.7    node2
8GB   4 vCPU (2 core)  50GB      170.20.0.8    node3
4GB   2 vCPU (2 core)  30GB      170.20.0.9    ansible

Network setup

The first thing we do in this exercise is configure the network, so let's start there.

You need to set up the network as follows: all servers must be in the same network and the same subnet and communicate with each other over internal IPs; only the load-balancer server gets a static (public) IP address so it can be reached from outside. This network architecture also makes it much easier to scale the PostgreSQL cluster later.

[Image: db-network with db-subnet and the cluster servers]

You can build this network architecture on-premises or in the cloud. If you use Microsoft Azure, you can set it up by following the Azure network setup guide.

Setting up the Ansible server

After configuring the network, we create one Ansible server inside the same network. All the work is done from this Ansible server, and it must be able to reach the other servers over their internal IPs.

Install Ansible on the Ansible server.

sudo apt update && sudo apt upgrade -y
sudo apt-get install software-properties-common python3-pip sshpass git
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

After installing Ansible, generate an SSH key on the Ansible server and set up root SSH access to all servers. Generate the key pair on the Ansible server and append the public key to the Ansible server's own authorized_keys. Then log in as root on every other server, append the same public key to that server's authorized_keys, and test an SSH connection as root from the Ansible server to each of them.

1-> On the Ansible server

ssh-keygen -f ~/.ssh/db-cluster
cd ~/.ssh/
cat db-cluster.pub >> authorized_keys

Copy the db-cluster.pub key from the Ansible server.

cd ~/.ssh/
cat db-cluster.pub

2-> On the other servers. Ansible must be able to log in to the other servers as root. After adding db-cluster.pub to authorized_keys on every server, test the SSH connection over the internal IP.

sudo su
cat >> ~/.ssh/authorized_keys <<EOF
ssh-rsa <paste the contents of db-cluster.pub from the Ansible server here>
EOF
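
From the Ansible server, the connection can then be verified like this (the IP is the sample etcd server from the table above; repeat for each server):

# Verify root SSH access over the internal IP using the generated key
ssh -i ~/.ssh/db-cluster root@170.20.0.4 hostname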

Creating the cluster

Once the Ansible server is configured and SSH connectivity to the other servers works, we can start creating the PostgreSQL cluster. Clone the github.com/vitabaks/postgresql_cluster Ansible playbook on the Ansible server and adjust its configuration to our environment.

1-> Clone the Ansible playbook on the Ansible server.

git clone https://github.com/vitabaks/postgresql_cluster.git
cd postgresql_cluster/

2-> Add the internal IP addresses of our servers to the inventory configuration file.

Server          Internal IP
etcd            170.20.0.4
load-balancer   170.20.0.5
node1           170.20.0.6
node2           170.20.0.7
node3           170.20.0.8

[Screenshot: inventory file with the server IP addresses]
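
For reference, a minimal sketch of how the inventory entries might look with our sample IPs; the group names follow the playbook's bundled inventory template, but group names and host variables may differ between playbook versions:

# inventory (fragment) — sample internal IPs from this guide
[etcd_cluster]
170.20.0.4

[balancers]
170.20.0.5

[master]
170.20.0.6 hostname=pgnode01 postgresql_exists=false

[replica]
170.20.0.7 hostname=pgnode02 postgresql_exists=false
170.20.0.8 hostname=pgnode03 postgresql_exists=false

[postgres_cluster:children]
master
replica

[all:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/db-cluster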

3-> Set the timezone in the postgresql_cluster/vars/system.yml configuration file.

[Screenshot: timezone setting in vars/system.yml]
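
For example (the timezone variable lives in the playbook's vars/system.yml; pick your own region):

# vars/system.yml (fragment)
timezone: "Asia/Tashkent"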

4-> Set the postgres and pgbouncer passwords in the postgresql_cluster/vars/main.yml configuration file.

[Screenshots: password settings in vars/main.yml]
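
A hedged sketch of the relevant variables; the exact names may vary between playbook versions, and the sample values below are the ones that appear in the Patroni configuration later in this guide (use your own strong passwords):

# vars/main.yml (fragment) — sample values only
patroni_superuser_username: "postgres"
patroni_superuser_password: "admin-parol-qvcnwepOYUI45Wds"
patroni_replication_username: "replicator"
patroni_replication_password: "replicator-parol-IIWfghweQHRW32"
pgbouncer_install: true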

5-> Once everything is configured to our needs, check that the Ansible server can reach all the other servers.

[Screenshot: successful Ansible connectivity check]
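
For example, an ad-hoc ping against every host in the inventory (run from the postgresql_cluster directory; the private key is the db-cluster key generated earlier):

ansible all -m ping -i inventory --private-key ~/.ssh/db-cluster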

6-> After the Ansible server connects to all servers successfully as shown above, start building the PostgreSQL cluster with TimescaleDB using deploy_pgcluster.yml. This takes a while, depending on the state of the servers and the internet speed.

-> Timescale PostgreSQL ++

ansible-playbook deploy_pgcluster.yml -e "enable_timescale=true"

7-> When the process finishes successfully, the output looks like this.

[Screenshots: successful playbook run]

Everything finished successfully. Now let's look at the status and configuration of etcd, Patroni, and PgBouncer.

On the etcd server, etcd has been installed and configured successfully and the service is active and running.

[Screenshots: etcd service status and configuration]
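
To check it yourself on the etcd server (assuming etcd runs as a systemd service, the v3 API, and no TLS):

sudo systemctl status etcd
ETCDCTL_API=3 etcdctl --endpoints=http://170.20.0.4:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=http://170.20.0.4:2379 member list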

Log in to any of the node1, node2, or node3 servers and check the status of our PostgreSQL cluster via Patroni.

[Screenshot: PostgreSQL cluster status from patronictl]

Check the Patroni service status.

[Screenshot: patroni service status]
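
The same checks from the command line on any node:

sudo systemctl status patroni
sudo patronictl -c /etc/patroni/patroni.yml list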

Patroni configuration (node3):

---
 
scope: postgres-cluster
name: pgnode03
namespace: /service
 
 
restapi:
  listen: 170.20.0.8:8008
  connect_address: 170.20.0.8:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password
 
etcd3:
  hosts: 170.20.0.4:2379
 
bootstrap:
  method: initdb
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
    synchronous_node_count: 1
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 500
        superuser_reserved_connections: 5
        password_encryption: scram-sha-256
        max_locks_per_transaction: 512
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 1972MB
        effective_cache_size: 5916MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 8GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 1.1
        effective_io_concurrency: 200
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.01
        autovacuum_vacuum_cost_limit: 500
        autovacuum_vacuum_cost_delay: 2
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_size: 2GB
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        wal_compression: on
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.track_utility: false
        pg_stat_statements.save: true
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        auto_explain.sample_rate: 0.01
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_activity_query_size: 4096
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: postgresql-%a.log
        log_directory: /var/log/postgresql
        hot_standby_feedback: on
        max_standby_streaming_delay: 30s
        wal_receiver_status_interval: 10s
        idle_in_transaction_session_timeout: 10min
        jit: off
        max_worker_processes: 24
        max_parallel_workers: 8
        max_parallel_workers_per_gather: 2
        max_parallel_maintenance_workers: 2
        tcp_keepalives_count: 10
        tcp_keepalives_idle: 300
        tcp_keepalives_interval: 30
        shared_preload_libraries: pg_stat_statements,auto_explain,timescaledb
 
  initdb:  # List options to be passed on to initdb
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums
 
  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 127.0.0.1/32 scram-sha-256
    - host all all 0.0.0.0/0 scram-sha-256
 
 
postgresql:
  listen: 170.20.0.8,127.0.0.1:5432
  connect_address: 170.20.0.8:5432
  use_unix_socket: true
  data_dir: /var/lib/postgresql/16/main
  bin_dir: /usr/lib/postgresql/16/bin
  config_dir: /etc/postgresql/16/main
  pgpass: /var/lib/postgresql/.pgpass_patroni
  authentication:
    replication:
      username: replicator
      password: replicator-parol-IIWfghweQHRW32
    superuser:
      username: postgres
      password: admin-parol-qvcnwepOYUI45Wds
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
 
 
  remove_data_directory_on_rewind_failure: false
  remove_data_directory_on_diverged_timelines: false
 
 
  create_replica_methods:
    - basebackup
  basebackup:
    max-rate: '100M'
    checkpoint: 'fast'
 
 
watchdog:
  mode: automatic  # Allowed values: off, automatic, required
  device: /dev/watchdog  # Path to the watchdog device
  safety_margin: 5
 
tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
 
  # specify a node to replicate from (cascading replication)
#  replicatefrom: (node name)

Patroni has been started successfully and is configured for production.

Let's check the PgBouncer status.

[Screenshot: pgbouncer service status]
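
To check it from the command line on any node (the admin console query assumes local access with the postgres admin user defined in the configuration below):

sudo systemctl status pgbouncer
psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer -c "SHOW POOLS;"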

/etc/pgbouncer/pgbouncer.ini configuration:

[databases]
postgres = host=/var/run/postgresql port=5432 dbname=postgres 
 
* = host=/var/run/postgresql port=5432
 
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /run/pgbouncer/pgbouncer.pid
listen_addr = 0.0.0.0
listen_port = 6432
unix_socket_dir = /var/run/pgbouncer
auth_type = scram-sha-256
auth_user = pgbouncer
auth_dbname = postgres
auth_query = SELECT usename, passwd FROM user_search($1)
admin_users = postgres
stats_users = postgres
ignore_startup_parameters = extra_float_digits,geqo,search_path
 
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 10000
default_pool_size = 20
query_wait_timeout = 120
reserve_pool_size = 1
reserve_pool_timeout = 1
max_db_connections = 1000
pkt_buf = 8192
listen_backlog = 4096
max_prepared_statements = 1024
so_reuseport = 1
 
log_connections = 0
log_disconnections = 0
 
# Documentation https://pgbouncer.github.io/config.html

Great, everything started successfully and is running. Next we need to set up the HAProxy load balancer for the master and the replicas.

Setting up HAProxy

Let's start configuring the HAProxy server.

1-> Update your system.

sudo apt-get update && sudo apt-get upgrade -y

2-> Install HAProxy. HAProxy is included in the package management systems of most Linux distributions:

For Debian-based systems. To install specific HAProxy versions on Debian and Ubuntu, you can use haproxy.debian.net.

sudo apt install haproxy -y

HAProxy 2.8 LTS on Ubuntu 20.04

sudo apt-get install --no-install-recommends software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-2.8
sudo apt-get install haproxy=2.8.\*

For RPM-based systems (CentOS, Fedora)

sudo yum install haproxy

3-> After installing HAProxy, check its status and enable it.

sudo systemctl status haproxy
sudo systemctl enable haproxy

4-> We configure HAProxy with the roundrobin load-balancing algorithm for the master and the replicas. Our PostgreSQL cluster will expose two ports: 5000 and 5001. Port 5000 always points to the master and serves read/write traffic, while port 5001 serves the replicas for read-only traffic. Your application must therefore be written to use port 5000 for read/write queries and port 5001 for read-only queries.
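
From the application's point of view these are simply two connection endpoints, for example (the host below is the load balancer's sample public IP used later in this guide):

# read/write — always routed to the current master
psql "host=172.214.40.99 port=5000 user=postgres dbname=postgres"
# read-only — balanced across replicas
psql "host=172.214.40.99 port=5001 user=postgres dbname=postgres"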

The HAProxy configuration file is haproxy.cfg in the /etc/haproxy directory.

Configure haproxy.cfg as follows.

/etc/haproxy/haproxy.cfg
global
    maxconn 100000
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
 
defaults
    mode               tcp
    log                global
    retries            2
    timeout queue      5s
    timeout connect    5s
    timeout client     60m
    timeout server     60m
    timeout check      15s
 
listen master
    bind 170.20.0.5:5000
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /primary
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 4 on-marked-down shutdown-sessions
    server pgnode01 170.20.0.6:6432 check port 8008
    server pgnode02 170.20.0.7:6432 check port 8008
    server pgnode03 170.20.0.8:6432 check port 8008
 
 
listen replicas
    bind 170.20.0.5:5001
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /replica?lag=100MB
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server pgnode01 170.20.0.6:6432 check port 8008
    server pgnode02 170.20.0.7:6432 check port 8008
    server pgnode03 170.20.0.8:6432 check port 8008
 
 
listen replicas_sync
    bind 170.20.0.5:5002
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /sync
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server pgnode01 170.20.0.6:6432 check port 8008
    server pgnode02 170.20.0.7:6432 check port 8008
    server pgnode03 170.20.0.8:6432 check port 8008
 
 
listen replicas_async
    bind 170.20.0.5:5003
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /async?lag=100MB
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
    server pgnode01 170.20.0.6:6432 check port 8008
    server pgnode02 170.20.0.7:6432 check port 8008
    server pgnode03 170.20.0.8:6432 check port 8008

Update the configuration and restart HAProxy.

sudo systemctl restart haproxy
sudo systemctl status haproxy

Ports 5000 and 5001 on the HAProxy load balancer must be open in the firewall. Now, to verify everything, let's connect to the PostgreSQL cluster.
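
For example, if the load balancer uses UFW (adjust to your own firewall or cloud security group):

sudo ufw allow 5000/tcp
sudo ufw allow 5001/tcp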

To connect, use pgAdmin or any other database client, or at least psql on a server. We connect using the load balancer's static IP address, the postgres user, and port 5000 for read/write.

psql -h 172.214.40.99 -p 5000 -U postgres

The postgres password is the admin-parol-qvcnwepOYUI45Wds password we set in the postgresql_cluster/vars/main.yml configuration file.

[Screenshots: psql connection via the load balancer]
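
A quick way to confirm the routing is pg_is_in_recovery(), which returns f on the master (port 5000) and t on a replica (port 5001):

psql -h 172.214.40.99 -p 5000 -U postgres -c "SELECT pg_is_in_recovery();"
psql -h 172.214.40.99 -p 5001 -U postgres -c "SELECT pg_is_in_recovery();"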

Everything is in place; we have completed the setup successfully.

Test

In this exercise we built and configured the foundation of a highly available PostgreSQL cluster. The next steps, such as working with backups and scaling out the load balancers, etcd servers, and nodes, are left to you. Now let's test the cluster we created: first we shut down one of the node servers, and the PostgreSQL cluster must keep working without breaking; then we power that node back on, and it must automatically rejoin the cluster and continue working without problems; finally we shut down the leader (master) server, and the cluster must automatically elect a new leader without interrupting service.

After the tests pass, we will look at adding a new node. Applying updates is easy: add your changes to the Ansible playbook and run it again, and the cluster will run with the updated configuration.

1-> Our current status is as follows: two replicas (pgnode02, pgnode03) and one leader, pgnode01. Let's shut down the pgnode03 node server and check the status.

patronictl -c /etc/patroni/patroni.yml list

[Screenshot: cluster status, pgnode01 Leader, pgnode02 and pgnode03 Replicas]

The status after shutting down the pgnode03 server.

[Screenshot: cluster status with pgnode03 stopped]

Power the pgnode03 server back on; it should rejoin the PostgreSQL cluster automatically without problems. Verify this yourself.

Now let's shut down the pgnode01 leader server; the PostgreSQL cluster must automatically elect a new leader.

[Screenshot: cluster status after failover, pgnode03 is the new Leader]

As you can see, pgnode01 used to be the leader of our PostgreSQL cluster, and after that server went down the leadership was transferred to pgnode03; no outage should be observed. Run any additional tests yourself.
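
Besides pulling servers down, Patroni also supports a planned, controlled leader change, which is handy for maintenance; a hedged example (the cluster name is the scope from the Patroni configuration above):

sudo patronictl -c /etc/patroni/patroni.yml switchover postgres-cluster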

To add a new node, write the new node's internal IP into the inventory configuration file, connect the Ansible server to the new node server over SSH as before, and run the following playbook to add it.

ansible-playbook add_pgnode.yml

This concludes the exercise; we hope everything went smoothly for you.

Additional