[DO-1431] Final config PROD k8s (!3)
DO-1431 Co-authored-by: denis.patrakeev <denis.patrakeev@avroid.tech> Reviewed-on: https://git.avroid.tech/K8s/k8s-deploy/pulls/3
119
env/avroid_prod/k8s-avroid-office.prod.local/README.md
vendored
Normal file
@@ -0,0 +1,119 @@
## Requirements

[Requirements](./kubespray/README.md#requirements)

## Kubespray and Kubernetes versions in the current inventory

| Kubespray  | v2.26.0 |
|------------|---------|
| Kubernetes | v1.30.4 |
## Cluster deployment specifics

| Module                   | Comment                                                                                  |
|--------------------------|------------------------------------------------------------------------------------------|
| Cluster name             | k8s-avroid-office.prod.local                                                             |
| Network                  | IPv4 only                                                                                |
| Network                  | 172.24.0.0/18 - service subnet                                                           |
| Network                  | 172.24.64.0/18 - pod subnet                                                              |
| Network                  | 30000-32767 - port range allowed for forwarding on the nodes                             |
| Per-node subnet mask     | 24 (i.e. max 254 pods per node and max 64 nodes)                                         |
| CNI                      | calico                                                                                   |
| NTP clients              | Configured to use the local private NTP servers and the Moscow time zone                 |
| DNS zone                 | k8s-avroid-office.prod.local                                                             |
| DNS                      | Dual CoreDNS + nodelocaldns                                                              |
| Etcd                     | service data in /data/etcd on a dedicated block device with ext4                         |
| Container runtime        | containerd (/var/lib/containerd on a dedicated block device with XFS)                    |
| Private image registry   | Private caching mirrors from harbor.avroid.tech are configured in containerd             |
| Disks                    | All nodes: /var/lib/containerd is placed on a dedicated block device with XFS            |
| Disks                    | k8s-control-0X: /data is placed on a dedicated block device with ext4                    |
| Disks                    | k8s-worker/build-0X: /var/lib/kubelet/pods is placed on a dedicated block device with XFS |
| HA                       | API Server                                                                               |
| Ingress                  | Nginx ingress controller 80 --> 30080 (k8s-worker-0X), 443 --> 30081 (k8s-worker-0X)     |
| Ingress                  | Runs only on nodes with the custom label `node-role.kubernetes.io/ingress-nginx:true`    |
| Additional services      | Helm, Metrics Server, Cert manager, netchecker                                           |
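The "max 254 pods per node and max 64 nodes" figures follow directly from the /18 pod subnet and the /24 per-node mask. A quick sketch of the arithmetic, using Python's standard `ipaddress` module:

```python
import ipaddress

pod_subnet = ipaddress.ip_network("172.24.64.0/18")  # kube_pods_subnet
node_prefix = 24                                     # kube_network_node_prefix

# Number of /24 blocks that fit into the /18 pod subnet = maximum node count.
max_nodes = 2 ** (node_prefix - pod_subnet.prefixlen)

# Usable pod IPs in a /24 (network and broadcast addresses excluded).
pods_per_node = 2 ** (32 - node_prefix) - 2

print(max_nodes, pods_per_node)  # 64 254
```

Note that the effective pod count per node is additionally capped by `kubelet_max_pods` (110 by default).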
## Access to the deployed services

### Ingress NGINX Controller

https://github.com/kubernetes/ingress-nginx/blob/main/README.md#readme

<worker_node>:30080/TCP --> nginx:80/TCP

<worker_node>:30081/TCP --> nginx:443/TCP
### netchecker

https://github.com/Mirantis/k8s-netchecker-server

http://<NODE_IP>:31081/api/v1/agents/

http://<NODE_IP>:31081/api/v1/connectivity_check

http://<NODE_IP>:31081/metrics
## Preparing the environment and deploying

### 1. Preliminary VM preparation

First, prepare the VMs with the standard set of Ansible scripts, with the following specifics:

- set up the additional disks and mount points
- set up domain authentication
- disable the NTP configuration
- disable the node_exporter configuration

### 2. Update the Kubespray submodule and verify that it is checked out at the required tag
```bash
cd env/<ENVIRONMENT_XX>/<CLUSTER_XX>
git submodule update --init --recursive
cd kubespray
git status
cd ../../..
```
### 3. Prepare the Ansible environment

[Kubespray docs: Ansible Python Compatibility](./kubespray/docs/ansible/ansible.md#ansible-python-compatibility)

| Ansible Version | Python Version |
|-----------------|----------------|
| 2.11            | 2.7,3.5-3.9    |
| 2.12            | 3.8-3.10       |
| >=2.16.4        | 3.10-3.12      |
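The compatibility table above can be expressed as a small lookup, which is handy for pre-flight checks in tooling (a sketch; the helper name and structure are made up, the version ranges are the ones from the table):

```python
# Supported CPython minor versions per Ansible release line, per the table above.
SUPPORTED = {
    "2.11": {(2, 7)} | {(3, m) for m in range(5, 10)},
    "2.12": {(3, m) for m in range(8, 11)},
    ">=2.16.4": {(3, m) for m in range(10, 13)},
}

def python_ok(ansible_line: str, py: tuple) -> bool:
    """Return True if the given Python version is supported (hypothetical helper)."""
    return py in SUPPORTED.get(ansible_line, set())

print(python_ok(">=2.16.4", (3, 11)))  # True
print(python_ok("2.12", (3, 12)))      # False
```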
```bash
cd env/<ENVIRONMENT_XX>/<CLUSTER_XX>
export VENVDIR=kubespray-venv
export KUBESPRAYDIR=kubespray
python3 -m venv ./$VENVDIR
source $VENVDIR/bin/activate
pip3 install -U -r $KUBESPRAYDIR/requirements.txt
```
### 4. Run the cluster rollout

```bash
cd env/<ENVIRONMENT_XX>/<CLUSTER_XX>
export VENVDIR=kubespray-venv
export KUBESPRAYDIR=kubespray
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
ansible-playbook cluster.yml -i ../inventory/inventory.ini -bkK -v
```

Here `-b` escalates privileges on the target hosts, `-k` and `-K` prompt for the SSH and privilege-escalation passwords, and `-v` enables verbose output.
### 5. Copy the config for connecting to the cluster via kubectl

Copy the config from any of the master nodes:

```text
/etc/kubernetes/admin.conf
```

to your local machine and edit it: in the `server` option, replace `127.0.0.1` with the external address of a master node.
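The `server` rewrite is a one-line substitution; a sketch on a shortened kubeconfig fragment (the external address `10.0.0.10` below is a made-up placeholder):

```python
import re

# Fragment of /etc/kubernetes/admin.conf as copied from a master node (shortened).
kubeconfig = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"

# Swap the loopback address for the master node's external address.
patched = re.sub(r"(server: https://)127\.0\.0\.1", r"\g<1>10.0.0.10", kubeconfig)

print("server: https://10.0.0.10:6443" in patched)  # True
```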
Then set up cluster access with it in any convenient way:

[Kubespray docs: Access the kubernetes cluster](./kubespray/docs/getting_started/setting-up-your-first-cluster.md#access-the-kubernetes-cluster)

## Additional cluster operations via Kubespray

Additional tags:

[Kubespray docs: Ansible tags](./kubespray/docs/ansible/ansible.md#ansible-tags)

Adding/removing nodes:

[Kubespray docs: Adding/replacing a node](./kubespray/docs/operations/nodes.md)

Upgrading the cluster:

[Kubespray docs: Upgrading Kubernetes in Kubespray](./kubespray/docs/operations/upgrades.md)
0
env/avroid_prod/k8s-avroid-office.prod.local/cluster_manifests/.gitkeep
vendored
Normal file
@@ -0,0 +1 @@
bcdF76b6B3F3cBE15afe5eea979e9c8056dFBF5c14ce9e71eC414413bDfCA0DA
141
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/all/all.yml
vendored
Normal file
@@ -0,0 +1,141 @@
---
## Directory where the binaries will be installed
bin_dir: /usr/local/bin

## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node for example. The access_ip is really useful in AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
# access_ip: 1.1.1.1

## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
loadbalancer_apiserver_type: nginx

## Local loadbalancer should use this port
## And it must be set to port 6443
loadbalancer_apiserver_port: 6443

## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081

### OTHER OPTIONAL VARIABLES

## By default, Kubespray collects nameservers on the host. It then adds the previously collected nameservers in nameserverentries.
## If true, Kubespray does not include host nameservers in nameserverentries in the dns_late stage. However, it uses the nameservers to make sure the cluster is installed safely in the dns_early stage.
## Use this option with caution, you may need to define your dns servers. Otherwise, outbound queries such as www.google.com may fail.
# disable_host_nameservers: false

## Upstream dns servers
upstream_dns_servers:
  - 10.2.4.10
  - 10.2.4.20
  - 10.3.0.101

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:

## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack', 'vsphere', 'huaweicloud' and 'hcloud'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:

## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed
# http_proxy: ""
# https_proxy: ""
# https_proxy_cert_file: ""

## Refer to roles/kubespray-defaults/defaults/main/main.yml before modifying no_proxy
# no_proxy: ""

## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
# download_validate_certs: False

## If you need to exclude all cluster nodes from proxy and other resources, add other resources here.
# additional_no_proxy: ""

## If you need to disable proxying of os package repositories but are still behind an http_proxy set
## skip_http_proxy_on_os_packages to true
## This will cause kubespray not to set proxy environment in /etc/yum.conf for centos and in /etc/apt/apt.conf for debian/ubuntu
## Special information for debian/ubuntu - you have to set the no_proxy variable, then apt packages will install from the source of your choice
# skip_http_proxy_on_os_packages: false

## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers. To override this behaviour by only including master nodes in the
## no_proxy variable, set below to true:
no_proxy_exclude_workers: false

## Certificate Management
## This setting determines whether certs are generated via scripts.
## Choose 'none' if you provide your own certificates.
## Option is "script", "none"
cert_management: script

## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false

## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
# kube_read_only_port: 10255

## Set true to download and cache container images
download_container: true

## Deploy container engine
# Set false if you want to deploy container engine manually.
# deploy_container_engine: true

## Red Hat Enterprise Linux subscription registration
## Add either RHEL subscription Username/Password or Organization ID/Activation Key combination
## Update RHEL subscription purpose usage, role and SLA if necessary
# rh_subscription_username: ""
# rh_subscription_password: ""
# rh_subscription_org_id: ""
# rh_subscription_activation_key: ""
# rh_subscription_usage: "Development"
# rh_subscription_role: "Red Hat Enterprise Server"
# rh_subscription_sla: "Self-Support"

## Check if access_ip responds to ping. Set false if your firewall blocks ICMP.
# ping_access_ip: true

# sysctl_file_path to add sysctl conf to
# sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"

## Variables for webhook token auth https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false
# kube_webhook_token_auth_url: https://...
## base64-encoded string of the webhook's CA certificate
# kube_webhook_token_auth_ca_data: "LS0t..."

## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: true
ntp_manage_config: true
ntp_servers:
  - "ntp-01.avroid.tech iburst"
  - "ntp-02.avroid.tech iburst"
  - "ntp-03.avroid.tech iburst"
# Set timezone
ntp_timezone: Europe/Moscow

## Used to control no_log attribute
unsafe_show_logs: false

## If enabled it will allow kubespray to attempt setup even if the distribution is not supported. For unsupported distributions this can lead to unexpected failures in some cases.
allow_unsupported_distribution_setup: false
89
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/all/containerd.yml
vendored
Normal file
@@ -0,0 +1,89 @@
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options

containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
# containerd_oom_score: 0

# containerd_default_runtime: "runc"
# containerd_snapshotter: "native"

# containerd_runc_runtime:
#   name: runc
#   type: "io.containerd.runc.v2"
#   engine: ""
#   root: ""

# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
#   - name: kata
#     type: "io.containerd.kata.v2"
#     engine: ""
#     root: ""

# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

# Containerd debug socket location: unix or tcp format
# containerd_debug_address: ""

# Containerd log level
# containerd_debug_level: "info"

# Containerd logs format, supported values: text, json
# containerd_debug_format: ""

# Containerd debug socket UID
# containerd_debug_uid: 0

# Containerd debug socket GID
# containerd_debug_gid: 0

# containerd_metrics_address: ""

# containerd_metrics_grpc_histogram: false

# Registries defined within containerd.
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror-gcr-io-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://eu-central-1-mirror-aliyuncs-com-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: quay.io
    mirrors:
      - host: https://quay-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://quay.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: ghcr.io
    mirrors:
      - host: https://ghcr-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://ghcr.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: registry.k8s.io
    mirrors:
      - host: https://registry-k8s-io-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://registry.k8s.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
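Kubespray renders each `containerd_registries_mirrors` entry into containerd's per-registry host configuration, with the caching proxies tried before the upstream registry. A rough sketch of the shape of the resulting hosts.toml-style config for the docker.io entry above (illustrative only, not the exact file Kubespray templates out):

```python
# Mirror hosts for the docker.io prefix, in the order listed above.
mirrors = [
    "https://mirror-gcr-io-proxy.avroid.tech",
    "https://eu-central-1-mirror-aliyuncs-com-proxy.avroid.tech",
    "https://registry-1.docker.io",
]

lines = ['server = "https://docker.io"', ""]
for host in mirrors:
    lines.append(f'[host."{host}"]')
    lines.append('  capabilities = ["pull", "resolve"]')
    lines.append("")

hosts_toml = "\n".join(lines)
print(hosts_toml)
```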

# containerd_max_container_log_line_size: -1

# containerd_registry_auth:
#   - registry: 10.0.0.2:5000
#     username: user
#     password: pass
16
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/all/etcd.yml
vendored
Normal file
@@ -0,0 +1,16 @@
---
## Directory where etcd data stored
etcd_data_dir: /data/etcd

## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Additionally you can set this to kubeadm if you want to install etcd using kubeadm
## Kubeadm etcd deployment is experimental and only available for new deployments
## If this is not set, container manager will be inherited from the Kubespray defaults
## and not from k8s_cluster/k8s-cluster.yml, which might not be what you want.
## Also this makes it possible to use a different container manager for etcd nodes.
# container_manager: containerd

## Settings for etcd deployment type
# Set this to docker if you are using container_manager: docker
etcd_deployment_type: host
@@ -0,0 +1,3 @@
---
node_labels:
  node-role.kubernetes.io/ingress-nginx: "true"
284
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/k8s_cluster/addons.yml
vendored
Normal file
@@ -0,0 +1,284 @@
---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
# dashboard_enabled: false

# Helm deployment
helm_enabled: true

# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"

# Metrics Server deployment
metrics_server_enabled: true
metrics_server_container_port: 10250
metrics_server_kubelet_insecure_tls: true
metrics_server_metric_resolution: 15s
metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
metrics_server_host_network: false
metrics_server_replicas: 1

# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
# local_path_provisioner_namespace: "local-path-storage"
# local_path_provisioner_storage_class: "local-path"
# local_path_provisioner_reclaim_policy: Delete
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "{{ docker_image_repo }}/rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.24"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"

# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: kube-system
# local_volume_provisioner_nodelabels:
#   - kubernetes.io/hostname
#   - topology.kubernetes.io/region
#   - topology.kubernetes.io/zone
# local_volume_provisioner_storage_classes:
#   local-storage:
#     host_dir: /mnt/disks
#     mount_dir: /mnt/disks
#     volume_mode: Filesystem
#     fs_type: ext4
#   fast-disks:
#     host_dir: /mnt/fast-disks
#     mount_dir: /mnt/fast-disks
#     block_cleaner_command:
#       - "/scripts/shred.sh"
#       - "2"
#     volume_mode: Filesystem
#     fs_type: ext4
# local_volume_provisioner_tolerations:
#   - effect: NoSchedule
#     operator: Exists

# CSI Volume Snapshot Controller deployment, set this to true if your CSI is able to manage snapshots
# currently, setting cinder_csi_enabled=true would automatically enable the snapshot controller
# Longhorn is an external CSI that would also require setting this to true but it is not included in kubespray
# csi_snapshot_controller_enabled: false
# csi snapshot namespace
# snapshot_controller_namespace: kube-system

# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true

# RBD provisioner deployment
rbd_provisioner_enabled: false
# rbd_provisioner_namespace: rbd-provisioner
# rbd_provisioner_replicas: 2
# rbd_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# rbd_provisioner_pool: kube
# rbd_provisioner_admin_id: admin
# rbd_provisioner_secret_name: ceph-secret-admin
# rbd_provisioner_secret: ceph-key-admin
# rbd_provisioner_user_id: kube
# rbd_provisioner_user_secret_name: ceph-secret-user
# rbd_provisioner_user_secret: ceph-key-user
# rbd_provisioner_user_secret_namespace: rbd-provisioner
# rbd_provisioner_fs_type: ext4
# rbd_provisioner_image_format: "2"
# rbd_provisioner_image_features: layering
# rbd_provisioner_storage_class: rbd
# rbd_provisioner_reclaim_policy: Delete

# Gateway API CRDs
gateway_api_enabled: false
# gateway_api_experimental_channel: false

# Nginx ingress controller deployment
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_nginx_service_type: NodePort
ingress_nginx_service_nodeport_http: 30080
ingress_nginx_service_nodeport_https: 30081
ingress_publish_status_address: ""
ingress_nginx_nodeselector:
  node-role.kubernetes.io/ingress-nginx: "true"
ingress_nginx_tolerations:
  - key: "node-role.kubernetes.io/control-node"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  client-body-buffer-size: "50m"
  proxy-body-size: "100m"
  client-header-buffer-size: "2k"
# ingress_nginx_configmap_tcp_services:
#   9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
#   53: "kube-system/coredns:53"
# ingress_nginx_extra_args:
#   - --default-ssl-certificate=default/foo-tls
ingress_nginx_termination_grace_period_seconds: 300
ingress_nginx_class: nginx
# ingress_nginx_without_class: true
# ingress_nginx_default: false

# ALB ingress controller deployment
ingress_alb_enabled: false
# alb_ingress_aws_region: "us-east-1"
# alb_ingress_restrict_scheme: "false"
# Enables logging on all outbound requests sent to the AWS API.
# If logging is desired, set to true.
# alb_ingress_aws_debug: "false"

# Cert manager deployment
cert_manager_enabled: true
cert_manager_namespace: "cert-manager"
cert_manager_tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
cert_manager_affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: In
              values:
                - ""
cert_manager_nodeselector:
  kubernetes.io/os: "linux"

# cert_manager_trusted_internal_ca: |
#   -----BEGIN CERTIFICATE-----
#   [REPLACE with your CA certificate]
#   -----END CERTIFICATE-----
# cert_manager_leader_election_namespace: kube-system

# cert_manager_dns_policy: "ClusterFirst"
# cert_manager_dns_config:
#   nameservers:
#     - "1.1.1.1"
#     - "8.8.8.8"

# cert_manager_controller_extra_args:
#   - "--dns01-recursive-nameservers-only=true"
#   - "--dns01-recursive-nameservers=1.1.1.1:53,8.8.8.8:53"

# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
# metallb_version: v0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_config:
#   speaker:
#     nodeselector:
#       kubernetes.io/os: "linux"
#     tolerations:
#       - key: "node-role.kubernetes.io/control-plane"
#         operator: "Equal"
#         value: ""
#         effect: "NoSchedule"
#   controller:
#     nodeselector:
#       kubernetes.io/os: "linux"
#     tolerations:
#       - key: "node-role.kubernetes.io/control-plane"
#         operator: "Equal"
#         value: ""
#         effect: "NoSchedule"
#   address_pools:
#     primary:
#       ip_range:
#         - 10.5.0.0/16
#       auto_assign: true
#     pool1:
#       ip_range:
#         - 10.6.0.0/16
#       auto_assign: true
#     pool2:
#       ip_range:
#         - 10.10.0.0/16
#       auto_assign: true
#   layer2:
#     - primary
#   layer3:
#     defaults:
#       peer_port: 179
#       hold_time: 120s
#     communities:
#       vpn-only: "1234:1"
#       NO_ADVERTISE: "65535:65282"
#     metallb_peers:
#       peer1:
#         peer_address: 10.6.0.1
#         peer_asn: 64512
#         my_asn: 4200000000
#         communities:
#           - vpn-only
#         address_pool:
#           - pool1
#       peer2:
#         peer_address: 10.10.0.1
#         peer_asn: 64513
#         my_asn: 4200000000
#         communities:
#           - NO_ADVERTISE
#         address_pool:
#           - pool2

argocd_enabled: false
# argocd_version: v2.11.0
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
# ---
# The initial password is autogenerated and stored in `argocd-initial-admin-secret` in the argocd namespace defined above.
# Using the argocd CLI the generated password can automatically be fetched from the current kubectl context with the command:
# argocd admin initial-password -n argocd
# ---
# Use the following var to set admin password
# argocd_admin_password: "password"

# The plugin manager for kubectl
krew_enabled: true
krew_root_dir: "/usr/local/krew"

# Kube VIP
kube_vip_enabled: false
# kube_vip_arp_enabled: true
# kube_vip_controlplane_enabled: true
# kube_vip_address: 192.168.56.120
# loadbalancer_apiserver:
#   address: "{{ kube_vip_address }}"
#   port: 6443
# kube_vip_interface: eth0
# kube_vip_services_enabled: false
# kube_vip_dns_mode: first
# kube_vip_cp_detect: false
# kube_vip_leasename: plndr-cp-lock
# kube_vip_enable_node_labeling: false

# Node Feature Discovery
node_feature_discovery_enabled: false
# node_feature_discovery_gc_sa_name: node-feature-discovery
# node_feature_discovery_gc_sa_create: false
# node_feature_discovery_worker_sa_name: node-feature-discovery
# node_feature_discovery_worker_sa_create: false
# node_feature_discovery_master_config:
#   extraLabelNs: ["nvidia.com"]
381
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/k8s_cluster/k8s-cluster.yml
vendored
Normal file
@@ -0,0 +1,381 @@
|
||||
---
|
||||
# Kubernetes configuration dirs and system namespace.
|
||||
# Those are where all the additional config stuff goes
|
||||
# the kubernetes normally puts in /srv/kubernetes.
|
||||
# This puts them in a sane location and namespace.
|
||||
# Editing those values will almost surely break something.
|
||||
kube_config_dir: /etc/kubernetes
|
||||
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
|
||||
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
|
||||
|
||||
# This is where all the cert scripts and certs will be located
|
||||
kube_cert_dir: "{{ kube_config_dir }}/ssl"
|
||||
|
||||
# This is where all of the bearer tokens will be stored
|
||||
kube_token_dir: "{{ kube_config_dir }}/tokens"
|
||||
|
||||
kube_api_anonymous_auth: true
|
||||
|
||||
## Change this to use another Kubernetes version, e.g. a current beta release
|
||||
kube_version: v1.30.4
|
||||
|
||||
# Where the binaries will be downloaded.
|
||||
# Note: ensure that you've enough disk space (about 1G)
|
||||
local_release_dir: "/tmp/releases"
|
||||
# Random shifts for retrying failed ops like pushing/downloading
|
||||
retry_stagger: 5
|
||||
|
||||
# This is the user that owns tha cluster installation.
|
||||
kube_owner: kube
|
||||
|
||||
# This is the group that the cert creation scripts chgrp the
|
||||
# cert files to. Not really changeable...
|
||||
kube_cert_group: kube-cert
|
||||
|
||||
# Cluster Loglevel configuration
|
||||
kube_log_level: 2
|
||||
|
||||
# Directory where credentials will be stored
|
||||
credentials_dir: "{{ inventory_dir }}/credentials"
|
||||
|
||||
## It is possible to activate / deactivate selected authentication methods (oidc, static token auth)
|
||||
# kube_oidc_auth: false
# kube_token_auth: false

## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you have to deploy an additional OpenID Provider (e.g. Dex, Keycloak, ...)

# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
# kube_oidc_username_claim: sub
# kube_oidc_username_prefix: 'oidc:'
# kube_oidc_groups_claim: groups
# kube_oidc_groups_prefix: 'oidc:'

## Variables to control webhook authn/authz
# kube_webhook_token_auth: false
# kube_webhook_token_auth_url: https://...
# kube_webhook_token_auth_url_skip_tls_verify: false

## For webhook authorization, authorization_modes must include Webhook
# kube_webhook_authorization: false
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false

# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider set up appropriate routing
kube_network_plugin: calico

# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
kube_network_plugin_multus: false

# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 172.24.0.0/18

# Internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 172.24.64.0/18

# Internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable, which defaults to 110.
#
# Example:
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 24
#  - kubelet_max_pods: 110
#
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
#  - kube_pods_subnet: 10.233.64.0/18
#  - kube_network_node_prefix: 25
#  - kubelet_max_pods: 110
kube_network_node_prefix: 24
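The node-prefix arithmetic in the examples above can be checked with Python's stdlib `ipaddress` module. A quick sketch for this inventory's values (not part of the deployed config):

```python
import ipaddress

# Values from this inventory: a /18 pod network carved into /24 per-node blocks.
pods_subnet = ipaddress.ip_network("172.24.64.0/18")
node_prefix = 24

# Number of /24 per-node blocks that fit inside the /18 pod subnet.
max_nodes = 2 ** (node_prefix - pods_subnet.prefixlen)

# Usable pod IPs per /24 block (minus the network and broadcast addresses).
pods_per_node = 2 ** (32 - node_prefix) - 2

print(max_nodes, pods_per_node)  # 64 254
```

These are the "max 64 nodes / max 254 pods per node" figures quoted in the README; the effective per-node cap is the lower of this and `kubelet_max_pods`.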
# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false

# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true.
# This provides 4096 IPv6 IPs.
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116

# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112

# IPv6 subnet size allocated to each node for pods.
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120

# The IP address and port the API server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(1) | ansible.utils.ipaddr('address') }}"
kube_apiserver_port: 6443  # (https)
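The `ansible.utils.ipaddr(N)` filter chain above selects the Nth address inside the service network. A Python sketch of the same derivation for this inventory's `kube_service_addresses` (illustrative only):

```python
import ipaddress

service_net = ipaddress.ip_network("172.24.0.0/18")

# ipaddr(1) on the network picks the first host address after the
# network address; ipaddr(3)/ipaddr(4) are used for skydns further below.
apiserver_ip = service_net[1]
skydns_ip = service_net[3]
skydns_secondary_ip = service_net[4]

print(apiserver_ip, skydns_ip, skydns_secondary_ip)  # 172.24.0.1 172.24.0.3 172.24.0.4
```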
# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: ipvs

# Configure arp_ignore and arp_announce to avoid answering ARP queries from the kube-ipvs0 interface.
# Must be set to true for MetalLB or kube-vip (ARP mode) to work.
kube_proxy_strict_arp: false

# A string slice of values which specify the addresses to use for NodePorts.
# Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32).
# The default empty string slice ([]) means to use all local addresses.
# kube_proxy_nodeport_addresses_cidr is retained for legacy config.
kube_proxy_nodeport_addresses: >-
  {%- if kube_proxy_nodeport_addresses_cidr is defined -%}
  [{{ kube_proxy_nodeport_addresses_cidr }}]
  {%- else -%}
  []
  {%- endif -%}

# If non-empty, will use this string as identification instead of the actual hostname
# kube_override_hostname: >-
#   {%- if cloud_provider is defined and cloud_provider in ['aws'] -%}
#   {%- else -%}
#   {{ inventory_hostname }}
#   {%- endif -%}

## Encrypting Secret Data at Rest
kube_encrypt_secret_data: false

# Graceful Node Shutdown (Kubernetes >= 1.21.0), see https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/
# kubelet_shutdown_grace_period must be greater than kubelet_shutdown_grace_period_critical_pods to allow
# non-critical pods to also terminate gracefully.
# kubelet_shutdown_grace_period: 60s
# kubelet_shutdown_grace_period_critical_pods: 20s

# DNS configuration.
# Kubernetes cluster name, also used as the DNS domain
cluster_name: k8s-avroid-office.prod.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# dns_timeout: 2
# dns_attempts: 2
# Custom search domains to be added in addition to the default cluster search domains
# searchdomains:
#   - svc.{{ cluster_name }}
#   - default.svc.{{ cluster_name }}
# Remove default cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
# remove_default_searchdomains: false
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns_dual
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
enable_nodelocaldns_secondary: false
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
nodelocaldns_second_health_port: 9256
nodelocaldns_bind_metrics_host_ip: false
nodelocaldns_secondary_skew_seconds: 5
nodelocaldns_external_zones:
  - zones:
      - avroid.tech
      - avroid.team
      - avroid.cloud
      - adlinux.store
      - o2linux.org
    nameservers:
      - 10.2.4.10
      - 10.2.4.20
      - 10.3.0.101
    cache: 5
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Set forward options for upstream DNS servers in coredns (and nodelocaldns) config
# dns_upstream_forward_extra_opts:
#   policy: sequential
# Apply extra options to coredns kubernetes plugin
# coredns_kubernetes_extra_opts:
#   - 'fallthrough example.local'
# Forward extra domains to the coredns kubernetes plugin
# coredns_kubernetes_extra_domains: ''

coredns_external_zones:
  - zones:
      - avroid.tech
      - avroid.team
      - avroid.cloud
      - adlinux.store
      - o2linux.org
    nameservers:
      - 10.2.4.10
      - 10.2.4.20
      - 10.3.0.101
    cache: 5

# Can be docker_dns, host_resolvconf or none
resolvconf_mode: host_resolvconf
# Deploy netchecker app to verify DNS resolution as an HTTP service
deploy_netchecker: true
# IP address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(3) | ansible.utils.ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(4) | ansible.utils.ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"

## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Default: containerd
container_manager: containerd

# Additional container runtimes
kata_containers_enabled: false

kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# Audit log for kubernetes
kubernetes_audit: false

# Define the kubelet config dir for dynamic kubelet
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"

# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.
# kubeconfig_localhost_ansible_host: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
kubectl_localhost: false

# A comma-separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods

## Set runtime and kubelet cgroups when using systemd as cgroup driver (default)
# kubelet_runtime_cgroups: "/{{ kube_service_cgroups }}/{{ container_manager }}.service"
# kubelet_kubelet_cgroups: "/{{ kube_service_cgroups }}/kubelet.service"

## Set runtime and kubelet cgroups when using cgroupfs as cgroup driver
# kubelet_runtime_cgroups_cgroupfs: "/system.slice/{{ container_manager }}.service"
# kubelet_kubelet_cgroups_cgroupfs: "/system.slice/kubelet.service"

# Optionally reserve this space for kube daemons.
kube_reserved: true
## Uncomment to override default values
## The following two items need to be set when kube_reserved is true
kube_reserved_cgroups_for_service_slice: kube.slice
kube_reserved_cgroups: "/{{ kube_reserved_cgroups_for_service_slice }}"
kube_memory_reserved: 256Mi
kube_cpu_reserved: 100m
kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for master hosts
kube_master_memory_reserved: 512Mi
kube_master_cpu_reserved: 200m
kube_master_ephemeral_storage_reserved: 2Gi
# kube_master_pid_reserved: "1000"

## Optionally reserve resources for OS system daemons.
system_reserved: true
## Uncomment to override default values
## The following two items need to be set when system_reserved is true
system_reserved_cgroups_for_service_slice: system.slice
system_reserved_cgroups: "/{{ system_reserved_cgroups_for_service_slice }}"
system_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_ephemeral_storage_reserved: 2Gi
## Reservation for master hosts
system_master_memory_reserved: 256Mi
system_master_cpu_reserved: 250m
system_master_ephemeral_storage_reserved: 2Gi
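A node's allocatable memory is roughly its capacity minus the kube and system reservations above, minus any hard eviction threshold. A quick sketch; the 32Gi node size and the 100Mi eviction threshold are illustrative assumptions, not values from this inventory:

```python
# Binary units matching the Mi/Gi suffixes used in the reservations above.
MI = 1024 ** 2
GI = 1024 ** 3

node_capacity = 32 * GI            # assumed worker RAM, for illustration only
kube_memory_reserved = 256 * MI    # kube_memory_reserved above
system_memory_reserved = 512 * MI  # system_memory_reserved above
eviction_hard_memory = 100 * MI    # kubelet's default memory.available<100Mi

allocatable = (node_capacity - kube_memory_reserved
               - system_memory_reserved - eviction_hard_memory)
print(allocatable // MI)  # 31900 (Mi available for pods on the assumed node)
```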
## Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
# eviction_hard: {}
# eviction_hard_control_plane: {}

# An alternative flexvolume plugin directory
# kubelet_flexvolumes_plugins_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec

## Supplementary addresses that can be added in kubernetes ssl keys.
## That can be useful, for example, to set up a keepalived virtual IP.
# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]

## Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to the NoVolumeZoneConflict restriction in kube-scheduler.
## See https://github.com/kubernetes-sigs/kubespray/issues/2141
## Set this variable to true to get rid of this issue.
volume_cross_zone_attachment: false
## Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI,
## AWS EBS CSI, Azure Disk CSI, GCP Persistent Disk CSI)
persistent_volumes_enabled: false

## Container Engine Acceleration
## Enable container acceleration feature, for example GPU acceleration in containers.
# nvidia_accelerator_enabled: true
## Nvidia GPU driver install. Install will be done by an (init) pod running as a daemonset.
## Important: if you use Ubuntu then you should set in all.yml 'docker_storage_options: -s overlay2'
## Array with nvidia_gpu_nodes; leave empty or commented out if you don't want to install drivers.
## Labels and taints won't be set on nodes that are not in the array.
# nvidia_gpu_nodes:
#   - kube-gpu-001
# nvidia_driver_version: "384.111"
## flavor can be tesla or gtx
# nvidia_gpu_flavor: gtx
## NVIDIA driver installer images. Change them if you have trouble accessing gcr.io.
# nvidia_driver_install_centos_container: atzedevries/nvidia-centos-driver-installer:2
# nvidia_driver_install_ubuntu_container: gcr.io/google-containers/ubuntu-nvidia-driver-installer@sha256:7df76a0f0a17294e86f691c81de6bbb7c04a1b4b3d4ea4e7e2cccdc42e1f6d63
## NVIDIA GPU device plugin image.
# nvidia_gpu_device_plugin_container: "registry.k8s.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"

## Minimum supported TLS version. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
tls_min_version: "VersionTLS12"

## Supported TLS cipher suites.
# tls_cipher_suites: {}
#   - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
#   - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
#   - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#   - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
#   - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
#   - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
#   - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
#   - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
#   - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
#   - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
#   - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
#   - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
#   - TLS_ECDHE_RSA_WITH_RC4_128_SHA
#   - TLS_RSA_WITH_3DES_EDE_CBC_SHA
#   - TLS_RSA_WITH_AES_128_CBC_SHA
#   - TLS_RSA_WITH_AES_128_CBC_SHA256
#   - TLS_RSA_WITH_AES_128_GCM_SHA256
#   - TLS_RSA_WITH_AES_256_CBC_SHA
#   - TLS_RSA_WITH_AES_256_GCM_SHA384
#   - TLS_RSA_WITH_RC4_128_SHA

## Amount of time to retain events. (default 1h0m0s)
event_ttl_duration: "1h0m0s"

## Automatically renew K8S control plane certificates on the first Monday of each month
auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"

# kubeadm patches path
kubeadm_patches:
  enabled: false
  source_dir: "{{ inventory_dir }}/patches"
  dest_dir: "{{ kube_config_dir }}/patches"

# Set to true to remove the role binding to anonymous users created by kubeadm
remove_anonymous_access: false
132
env/avroid_prod/k8s-avroid-office.prod.local/inventory/group_vars/k8s_cluster/k8s-net-calico.yml
vendored
Normal file
@@ -0,0 +1,132 @@
---
# see roles/network_plugin/calico/defaults/main.yml

# the default value of name
calico_cni_name: k8s-pod-network

## With calico it is possible to distribute routes via border routers of the datacenter.
## Warning: enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router.
# peer_with_router: false

# Enables Internet connectivity from containers
# nat_outgoing: true
# nat_outgoing_ipv6: false

# Enables Calico CNI "host-local" IPAM plugin
# calico_ipam_host_local: true

# add default ippool name
# calico_pool_name: "default-pool"

# add default ippool blockSize (defaults to kube_network_node_prefix)
calico_pool_blocksize: 26
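With `calico_pool_blocksize: 26`, Calico IPAM hands out /26 blocks inside the /18 pod CIDR configured for this cluster. The capacity works out as follows (a quick stdlib sketch, not part of the config):

```python
import ipaddress

# kube_pods_subnet from this inventory, split into Calico /26 IPAM blocks.
pods_subnet = ipaddress.ip_network("172.24.64.0/18")
blocksize = 26

ips_per_block = 2 ** (32 - blocksize)                    # addresses per IPAM block
blocks_total = 2 ** (blocksize - pods_subnet.prefixlen)  # blocks available in the /18

print(ips_per_block, blocks_total)  # 64 256
```

Note that a node can claim additional blocks as its first one fills, so the blockSize does not itself cap pods per node.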
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5

# add default ippool CIDR to CNI config
# calico_cni_pool: true

# Add default IPV6 IPPool CIDR. Must be inside kube_pods_subnet_ipv6. Defaults to kube_pods_subnet_ipv6 if not set.
# calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112

# Add default IPV6 IPPool CIDR to CNI config
# calico_cni_pool_ipv6: true

# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"

# If doing peering with node-assigned ASNs where the global AS number does not match your nodes,
# you want this to be true. In all other cases, false.
# calico_no_global_as_num: false

# You can set the MTU value here. If left undefined or empty, it will
# not be specified in the calico CNI config, so Calico will use built-in
# defaults. The value should be a number, not a string.
# calico_mtu: 1500

# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, subtract 60 from your network MTU (i.e. 1500-60=1440)
# - Otherwise, if VXLAN or BPF mode is enabled, subtract 50 from your network MTU (i.e. 1500-50=1450)
# - Otherwise, if IPIP is enabled, subtract 20 from your network MTU (i.e. 1500-20=1480)
# - Otherwise, if not using any encapsulation, set to your network MTU (i.e. 1500)
# calico_veth_mtu: 1440
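The MTU rules above can be condensed into one small helper; a sketch of the same arithmetic (the function and mode names are illustrative, not Calico APIs):

```python
# Encapsulation overhead in bytes, per the Calico guidance quoted above.
MTU_OVERHEAD = {
    "wireguard": 60,
    "vxlan_or_bpf": 50,
    "ipip": 20,
    "none": 0,
}

def calico_veth_mtu(network_mtu: int, mode: str) -> int:
    """Workload-interface MTU for a given encapsulation mode."""
    return network_mtu - MTU_OVERHEAD[mode]

print(calico_veth_mtu(1500, "vxlan_or_bpf"))  # 1450
```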
# Advertise Cluster IPs
# calico_advertise_cluster_ips: true

# Advertise Service External IPs
# calico_advertise_service_external_ips:
#   - x.x.x.x/24
#   - y.y.y.y/32

# Advertise Service LoadBalancer IPs
# calico_advertise_service_loadbalancer_ips:
#   - x.x.x.x/24
#   - y.y.y.y/16

# Choose data store type for calico: "etcd" or "kdd" (kubernetes datastore)
# calico_datastore: "kdd"

# Choose Calico iptables backend: "Legacy", "Auto" or "NFT"
# calico_iptables_backend: "Auto"

# Use typha (only with kdd)
# typha_enabled: false

# Generate TLS certs for secure typha<->calico-node communication
# typha_secure: false

# Scaling typha: 1 replica per 100 nodes is adequate
# Number of typha replicas
# typha_replicas: 1

# Set max typha connections
# typha_max_connections_lower_limit: 300

# Set calico network backend: "bird", "vxlan" or "none"
# bird enables BGP routing, required for ipip and no-encapsulation modes
# calico_network_backend: vxlan

# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_ipip_mode: 'Never'

# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_vxlan_mode: 'Always'

# set VXLAN port and VNI
# calico_vxlan_vni: 4096
# calico_vxlan_port: 4789

# Enable eBPF mode
# calico_bpf_enabled: false

# If you want to use a non-default IP_AUTODETECTION_METHOD or IP6_AUTODETECTION_METHOD for calico node, set this option to one of:
# * can-reach=DESTINATION
# * interface=INTERFACE-REGEX
# see https://docs.projectcalico.org/reference/node/configuration
# calico_ip_auto_method: "interface=eth.*"
# calico_ip6_auto_method: "interface=eth.*"

# Set FELIX_MTUIFACEPATTERN, the pattern used to discover the host's interface for MTU auto-detection.
# see https://projectcalico.docs.tigera.io/reference/felix/configuration
# calico_felix_mtu_iface_pattern: "^((en|wl|ww|sl|ib)[opsx].*|(eth|wlan|wwan).*)"

# Choose the iptables insert mode for Calico: "Insert" or "Append".
# calico_felix_chaininsertmode: Insert

# If you want to use the default route interface when you have multiple interfaces with dynamic routes (iproute2),
# see https://docs.projectcalico.org/reference/node/configuration : FELIX_DEVICEROUTESOURCEADDRESS
# calico_use_default_route_src_ipaddr: false

# Enable calico traffic encryption with wireguard
# calico_wireguard_enabled: false

# Under certain situations liveness and readiness probes may need tuning
# calico_node_livenessprobe_timeout: 10
# calico_node_readinessprobe_timeout: 10

# Calico apiserver (only with kdd)
# calico_apiserver_enabled: false
55
env/avroid_prod/k8s-avroid-office.prod.local/inventory/inventory.ini
vendored
Normal file
@@ -0,0 +1,55 @@
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for the etcd cluster. Nodes that are not etcd members do not need to set the value, or can set it to an empty string.
[all]
k8s-control-01 ansible_host=k8s-control-01.avroid.tech ip=10.2.20.31 etcd_member_name=etcd1
k8s-control-02 ansible_host=k8s-control-02.avroid.tech ip=10.2.20.32 etcd_member_name=etcd2
k8s-control-03 ansible_host=k8s-control-03.avroid.tech ip=10.2.20.33 etcd_member_name=etcd3
k8s-worker-01 ansible_host=k8s-worker-01.avroid.tech
k8s-worker-02 ansible_host=k8s-worker-02.avroid.tech
k8s-worker-03 ansible_host=k8s-worker-03.avroid.tech
k8s-build-01 ansible_host=k8s-build-01.avroid.tech
k8s-build-02 ansible_host=k8s-build-02.avroid.tech
k8s-build-03 ansible_host=k8s-build-03.avroid.tech
k8s-build-04 ansible_host=k8s-build-04.avroid.tech
k8s-build-05 ansible_host=k8s-build-05.avroid.tech
k8s-build-06 ansible_host=k8s-build-06.avroid.tech
k8s-build-07 ansible_host=k8s-build-07.avroid.tech

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
k8s-control-01
k8s-control-02
k8s-control-03

[etcd]
k8s-control-01
k8s-control-02
k8s-control-03

[custom_kube_node_with_ingress]
k8s-worker-01
k8s-worker-02
k8s-worker-03

[kube_node]
k8s-build-01
k8s-build-02
k8s-build-03
k8s-build-04
k8s-build-05
k8s-build-06
k8s-build-07

[kube_node:children]
custom_kube_node_with_ingress

#[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
#calico_rr
@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '10257'
8
env/avroid_prod/k8s-avroid-office.prod.local/inventory/patches/kube-scheduler+merge.yaml
vendored
Normal file
@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '10259'
1
env/avroid_prod/k8s-avroid-office.prod.local/kubespray
vendored
Submodule
Submodule env/avroid_prod/k8s-avroid-office.prod.local/kubespray added at f9ebd45c74
8
env/avroid_prod/k8s-avroid-office.prod.local/namespaces/sandbox/sandbox-namespace.yaml
vendored
Normal file
@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: sandbox
  labels:
    name: sandbox
    app.kubernetes.io/managed-by: manual
13
env/avroid_prod/k8s-avroid-office.prod.local/namespaces/sandbox/sandbox-resourcequota.yaml
vendored
Normal file
@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sandbox
  labels:
    app.kubernetes.io/managed-by: manual
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 24Gi
    limits.cpu: "16"
    limits.memory: 32Gi
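The quota keeps the namespace's limit totals above its request totals, which is a deliberate choice allowing sandbox workloads to burst. A quick arithmetic check of those numbers (not part of the manifest):

```python
# Totals from sandbox-resourcequota.yaml above (whole CPUs, Gi of memory).
requests_cpu, limits_cpu = 8, 16
requests_mem_gi, limits_mem_gi = 24, 32

# Ratio of limit totals to request totals: how far workloads in the
# namespace may collectively burst above what they reserve.
cpu_overcommit = limits_cpu / requests_cpu
mem_overcommit = limits_mem_gi / requests_mem_gi

print(cpu_overcommit, round(mem_overcommit, 2))  # 2.0 1.33
```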