[DO-143] Final prepare for env k8s avroid_prod (!2)

DO-1431

Co-authored-by: denis.patrakeev <denis.patrakeev@avroid.tech>
Reviewed-on: https://git.avroid.tech/K8s/k8s-deploy/pulls/2
This commit is contained in:
Denis Patrakeev
2024-12-20 19:44:28 +03:00
parent 1a23ae209b
commit d4535fb8bc
33 changed files with 265 additions and 1135 deletions

4
.gitignore vendored
View File

@@ -23,6 +23,10 @@ ansible_collections
.venv
venv*
.kubespray-venv
kubespray-venv*
**/.kubespray-venv
**/kubespray-venv*
__pycache__
*~

2
.gitmodules vendored
View File

@@ -1,4 +1,4 @@
[submodule "env/avroid_prod/kubespray"]
path = env/avroid_prod/kubespray
url = ssh://git@git.avroid.tech:2222/Mirrors/kubespray.git
branch = v2.25.1
branch = v2.26.0

View File

@@ -65,7 +65,7 @@ git submodule add ssh://git@git.avroid.tech:2222/Mirrors/kubespray.git kubespray
Then force-switch the Git submodule to the required Kubespray tag (release):
```bash
cd env/<ОКРУЖЕНИЕ_XX>/kubespray
git checkout v2.25.1
git checkout v2.26.0
cd ../../..
git add env/<ОКРУЖЕНИЕ_XX>/kubespray
```
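An optional sanity check that the submodule is really pinned to the expected tag (a sketch; the placeholder path follows the commands above):
```bash
# Show the pinned commit and the tag it resolves to (v2.26.0 is expected after the checkout above)
git submodule status env/<ОКРУЖЕНИЕ_XX>/kubespray
git -C env/<ОКРУЖЕНИЕ_XX>/kubespray describe --tags
```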
@@ -78,10 +78,10 @@ git add env/<ОКРУЖЕНИЕ_XX>/kubespray
[submodule "env/<ОКРУЖЕНИЕ_ХХ>/kubespray"]
path = env/<ОКРУЖЕНИЕ_ХХ>/kubespray
url = ssh://git@git.avroid.tech:2222/Mirrors/kubespray.git
branch = v2.25.1
branch = v2.26.0
```
Then commit the new state:
```bash
git commit -m "[DO-XXXX] Checked out tag v2.25.1 kubespray for env XXX"
git commit -m "[DO-XXXX] Checked out tag v2.26.0 kubespray for env XXX"
```

View File

@@ -3,52 +3,41 @@
[Requirements](./kubespray/README.md#requirements)
## Kubespray and Kubernetes versions in the current inventory
| Kubespray | v2.25.1 |
|------------|----------|
| Kubernetes | v1.29.10 |
| Kubespray | v2.26.0 |
|------------|---------|
| Kubernetes | v1.30.4 |
# TODO: ???
## Cluster deployment specifics
| Module | Comment |
|------------------------|------------------------------------------------------------------------------|
| Cluster name | k8s.avroid.local |
|--------------------------|------------------------------------------------------------------------------------------|
| Cluster name | k8s.prod.local |
| Network | IPv4 only |
| Network | 172.24.0.0/18 - service subnet |
| Network | 172.24.64.0/18 - pod subnet |
| Network | 10000-32767 - port range allowed for forwarding on the nodes |
| Subnet mask per node | 25 (i.e. max 126 pods per node and max 128 nodes) |
| Network | 30000-32767 - port range allowed for forwarding on the nodes |
| Subnet mask per node | 24 (i.e. max 254 pods per node and max 64 nodes; see the check after the table) |
| CNI | calico |
| DNS zone | k8s.<ОКРУЖЕНИЕ_XX>.local |
| NTP clients | Configured with local private NTP servers and the Moscow timezone |
| DNS zone | k8s.prod.local |
| DNS | Dual CoreDNS + nodelocaldns |
| Etcd | service data in /data/etcd on a dedicated block device with ext4 |
| Core | containerd (/var/lib/containerd on a dedicated block device with XFS) |
| Private registries | nexus.local.club in the settings |
| Container runtime | containerd (/var/lib/containerd on a dedicated block device with XFS) |
| Private image registry | Private caching mirrors via harbor.avroid.tech in the containerd settings |
| Disks | All nodes: /var/lib/containerd moved to a dedicated block device with XFS |
| Disks | k8s-control-0X: /data moved to a dedicated block device with ext4 |
| Disks | k8s-worker/build-0X: /var/lib/kubelet/pods moved to a dedicated block device with XFS |
| HA | API Server |
| NTP | Configured with Russian servers and the Moscow timezone |
| Ingress | Nginx ingress controller 80 --> 30100 (Node), 443 --> 30101 (Node) |
| Additional services | Kubernetes dashboard, Helm, Metrics Server, Cert manager, netchecker |
| netchecker | netchecker |
| Local storage | Local disk on the master nodes for Prometheus via local_volume_provisioner |
| Ingress | Nginx ingress controller 80 --> 30080 (k8s-worker-0X), 443 --> 30081 (k8s-worker-0X) |
| Additional services | Helm, Metrics Server, Cert manager, netchecker |
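A quick arithmetic check of the per-node mask from the table above (a sketch; the /18 pod subnet and /24 node prefix are the values listed there):
```bash
# 172.24.64.0/18 pod subnet split into /24 blocks, minus network/broadcast per block
echo "node blocks:   $((2 ** (24 - 18)))"       # 64 nodes
echo "pods per node: $((2 ** (32 - 24) - 2))"   # 254 usable addresses
```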
## Access to the deployed services
### Kubernetes Dashboard:
[Kubespray docs: Accessing Kubernetes Dashboard](./kubespray/docs/getting_started/getting-started.md#accessing-kubernetes-dashboard)
[Official docs: Accessing Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui)
### Ingress NGINX Controller
https://github.com/kubernetes/ingress-nginx/blob/main/README.md#readme
With custom patches from ./pathes/ingress_nginx
<worker_node>:30080/TCP --> nginx:80/TCP
<worker_node>:30100/TCP --> nginx:80/TCP
<worker_node>:30101/TCP --> nginx:443/TCP
### DNS
https://github.com/kubernetes/dns/blob/master/docs/specification.md
<worker_node>:53/UDP
<worker_node>:30081/TCP --> nginx:443/TCP
### netchecker
https://github.com/Mirantis/k8s-netchecker-server
@@ -60,7 +49,6 @@ http://<IP_АДРЕС_НОДЫ>:31081/api/v1/connectivity_check
http://<IP_АДРЕС_НОДЫ>:31081/metrics
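A minimal example of polling netchecker from a workstation, assuming any cluster node is reachable on the NodePort shown above:
```bash
# Substitute a real node address for the placeholder; 31081 is the netchecker server NodePort
curl -s "http://<IP_АДРЕС_НОДЫ>:31081/api/v1/connectivity_check"
curl -s "http://<IP_АДРЕС_НОДЫ>:31081/metrics" | head
```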
## Preparing the environment for deployment and deploying
### 1. Preliminary VM preparation
@@ -75,17 +63,12 @@ http://<IP_АДРЕС_НОДЫ>:31081/metrics
```bash
cd env/<ОКРУЖЕНИЕ_XX>
git submodule update --init --recursive
cd kubespray
git status
cd ../
```
### 3. Change into the Kubespray directory
```bash
cd kubespray
git status
cd ../..
```
### 4. Prepare the Ansible environment
### 3. Prepare the Ansible environment
[Kubespray docs: Ansible Python Compatibility](./kubespray/docs/ansible/ansible.md#ansible-python-compatibility)
| Ansible Version | Python Version |
@@ -95,26 +78,25 @@ cd kubespray
| >=2.16.4 | 3.10-3.12 |
```bash
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
virtualenv --python=$(which python3) -m venv $VENVDIR
cd env/<ОКРУЖЕНИЕ_XX>
export VENVDIR=kubespray-venv
export KUBESPRAYDIR=kubespray
python3 -m venv ./$VENVDIR
source $VENVDIR/bin/activate
pip3 install -U -r $KUBESPRAYDIR/requirements.txt
```
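An optional check that the virtualenv ended up with an Ansible/Python pair matching the compatibility table above (a sketch, assuming the venv from the previous step is still activated):
```bash
# Both versions should fall into one row of the compatibility table
python3 --version
ansible --version | head -n 1
```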
### 4. Run the cluster rollout
```bash
cd env/<ОКРУЖЕНИЕ_XX>
export VENVDIR=kubespray-venv
export KUBESPRAYDIR=kubespray
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
```
# TODO: ???
### 5. Copy the inventory
```bash
cp -r <...>/inventory ./inventory/
```
### 6. Run the cluster rollout
```bash
ansible-playbook cluster.yml -i ../inventory/inventory.ini -bkK -v
```
### 7. Copy the config for accessing the cluster via kubectl
### 5. Copy the config for accessing the cluster via kubectl
Copy the config from any of the master nodes:
```text
/etc/kubernetes/admin.conf
@@ -124,11 +106,6 @@ ansible-playbook cluster.yml -i ../inventory/inventory.ini -bkK -v
Then set up access to the cluster with it in whatever way is convenient:
[Kubespray docs: Access the kubernetes cluster](./kubespray/docs/getting_started/setting-up-your-first-cluster.md#access-the-kubernetes-cluster)
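One possible way to fetch the config and point kubectl at it (a sketch only; the host name is taken from the inventory in this repo, and reading admin.conf may require root on the node):
```bash
# Copy admin.conf from a control-plane node and use it locally
scp k8s-control-01.avroid.tech:/etc/kubernetes/admin.conf ./config_k8s.<ОКРУЖЕНИЕ_XX>.local
kubectl --kubeconfig='config_k8s.<ОКРУЖЕНИЕ_XX>.local' get nodes
```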
### 8. Apply the custom patches for Ingress NGINX
```bash
kubectl --kubeconfig='config_k8s.<ОКРУЖЕНИЕ_XX>.local' -n ingress-nginx apply -f ./pathes/ingress_nginx/svc-ingress-nginx-controller.yaml
kubectl --kubeconfig='config_k8s.<ОКРУЖЕНИЕ_XX>.local' -n ingress-nginx apply -f ./pathes/ingress_nginx/ic-ingress-nginx.yaml
```
## Additional cluster operations via Kubespray
Additional tags:
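As a hedged sketch of how such tags are usually passed to the playbook (the actual tag names are kept in the Kubespray documentation, not invented here):
```bash
# Limit a run to specific tags, or skip them; <TAG> is a placeholder
ansible-playbook cluster.yml -i ../inventory/inventory.ini -bkK -v --tags <TAG>
ansible-playbook cluster.yml -i ../inventory/inventory.ini -bkK -v --skip-tags <TAG>
```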

View File

@@ -17,9 +17,9 @@ bin_dir: /usr/local/bin
# port: 1234
## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx # valid values "nginx" or "haproxy"
loadbalancer_apiserver_type: nginx
## Local loadbalancer should use this port
## And must be set port 6443
@@ -36,9 +36,10 @@ loadbalancer_apiserver_healthcheck_port: 8081
# disable_host_nameservers: false
## Upstream dns servers
# upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
upstream_dns_servers:
- 10.2.4.10
- 10.2.4.20
- 10.3.0.101
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
@@ -83,7 +84,7 @@ no_proxy_exclude_workers: false
## This setting determines whether certs are generated via scripts.
## Chose 'none' if you provide your own certificates.
## Option is "script", "none"
# cert_management: script
cert_management: script
## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false
@@ -92,7 +93,7 @@ no_proxy_exclude_workers: false
# kube_read_only_port: 10255
## Set true to download and cache container
# download_container: true
download_container: true
## Deploy container engine
# Set false if you want to deploy container engine manually.
@@ -124,13 +125,14 @@ kube_webhook_token_auth_url_skip_tls_verify: false
## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: false
ntp_manage_config: false
ntp_enabled: true
ntp_manage_config: true
ntp_servers:
- "0.pool.ntp.org iburst"
- "1.pool.ntp.org iburst"
- "2.pool.ntp.org iburst"
- "3.pool.ntp.org iburst"
- "ntp-01.avroid.tech iburst"
- "ntp-02.avroid.tech iburst"
- "ntp-03.avroid.tech iburst"
# Set timezone
ntp_timezone: Europe/Moscow
## Used to control no_log attribute
unsafe_show_logs: false

View File

@@ -1,9 +0,0 @@
## To use AWS EBS CSI Driver to provision volumes, uncomment the first value
## and configure the parameters below
# aws_ebs_csi_enabled: true
# aws_ebs_csi_enable_volume_scheduling: true
# aws_ebs_csi_enable_volume_snapshot: false
# aws_ebs_csi_enable_volume_resizing: false
# aws_ebs_csi_controller_replicas: 1
# aws_ebs_csi_plugin_image_tag: latest
# aws_ebs_csi_extra_volume_tags: "Owner=owner,Team=team,Environment=environment'

View File

@@ -1,40 +0,0 @@
## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
# azure_cloud:
# azure_tenant_id:
# azure_subscription_id:
# azure_aad_client_id:
# azure_aad_client_secret:
# azure_resource_group:
# azure_location:
# azure_subnet_name:
# azure_security_group_name:
# azure_security_group_resource_group:
# azure_vnet_name:
# azure_vnet_resource_group:
# azure_route_table_name:
# azure_route_table_resource_group:
# supported values are 'standard' or 'vmss'
# azure_vmtype: standard
## Azure Disk CSI credentials and parameters
## see docs/azure-csi.md for details on how to get these values
# azure_csi_tenant_id:
# azure_csi_subscription_id:
# azure_csi_aad_client_id:
# azure_csi_aad_client_secret:
# azure_csi_location:
# azure_csi_resource_group:
# azure_csi_vnet_name:
# azure_csi_vnet_resource_group:
# azure_csi_subnet_name:
# azure_csi_security_group_name:
# azure_csi_use_instance_metadata:
# azure_csi_tags: "Owner=owner,Team=team,Environment=environment'
## To enable Azure Disk CSI, uncomment below
# azure_csi_enabled: true
# azure_csi_controller_replicas: 1
# azure_csi_plugin_image_tag: latest

View File

@@ -1,7 +1,7 @@
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options
# containerd_storage_dir: "/var/lib/containerd"
containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
# containerd_oom_score: 0
@@ -24,19 +24,62 @@
# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216
# Containerd debug socket location: unix or tcp format
# containerd_debug_address: ""
# Containerd log level
# containerd_debug_level: "info"
# Containerd logs format, supported values: text, json
# containerd_debug_format: ""
# Containerd debug socket UID
# containerd_debug_uid: 0
# Containerd debug socket GID
# containerd_debug_gid: 0
# containerd_metrics_address: ""
# containerd_metrics_grpc_histogram: false
# Registries defined within containerd.
# containerd_registries_mirrors:
# - prefix: docker.io
# mirrors:
# - host: https://registry-1.docker.io
# capabilities: ["pull", "resolve"]
# skip_verify: false
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror-gcr-io-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://eu-central-1-mirror-aliyuncs-com-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://registry-1.docker.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: quay.io
    mirrors:
      - host: https://quay-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://quay.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: ghcr.io
    mirrors:
      - host: https://ghcr-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://ghcr.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: registry.k8s.io
    mirrors:
      - host: https://registry-k8s-io-proxy.avroid.tech
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://registry.k8s.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
# containerd_max_container_log_line_size: -1

View File

@@ -1,2 +0,0 @@
## Does coreos need auto upgrade, default is true
# coreos_auto_upgrade: true

View File

@@ -1,9 +0,0 @@
# Registries defined within cri-o.
# crio_insecure_registries:
# - 10.0.0.2:5000
# Auth config for the registries
# crio_registry_auth:
# - registry: 10.0.0.2:5000
# username: user
# password: pass

View File

@@ -1,59 +0,0 @@
---
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
## Enable docker_container_storage_setup, it will configure devicemapper driver on Centos7 or RedHat7.
docker_container_storage_setup: false
## It must be define a disk path for docker_container_storage_setup_devs.
## Otherwise docker-storage-setup will be executed incorrectly.
# docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you want to change the Docker Cgroup driver (native.cgroupdriver)
## Valid options are systemd or cgroupfs, default is systemd
# docker_cgroup_driver: systemd
## Only set this if you have more than 3 nameservers:
## If true Kubespray will only use the first 3, otherwise it will fail
docker_dns_servers_strict: false
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"
# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"
# define docker bin_dir
docker_bin_dir: "/usr/bin"
# keep docker packages after installation; speeds up repeated ansible provisioning runs when '1'
# kubespray deletes the docker package on each run, so caching the package makes sense
docker_rpm_keepcache: 1
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define 172.19.16.11 or mirror.registry.io
# docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registry,example China registry mirror.
# docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
## If non-empty will override default system MountFlags value.
## This option takes a mount propagation flag: shared, slave
## or private, which control whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
# docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
# docker_options: ""

View File

@@ -1,6 +1,6 @@
---
## Directory where etcd data stored
etcd_data_dir: /var/lib/etcd
etcd_data_dir: /data/etcd
## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.

View File

@@ -1,10 +0,0 @@
## GCP compute Persistent Disk CSI Driver credentials and parameters
## See docs/gcp-pd-csi.md for information about the implementation
## Specify the path to the file containing the service account credentials
# gcp_pd_csi_sa_cred_file: "/my/safe/credentials/directory/cloud-sa.json"
## To enable GCP Persistent Disk CSI driver, uncomment below
# gcp_pd_csi_enabled: true
# gcp_pd_csi_controller_replicas: 1
# gcp_pd_csi_driver_image_tag: "v0.7.0-gke.0"

View File

@@ -1,22 +0,0 @@
## Values for the external Hcloud Cloud Controller
# external_hcloud_cloud:
# hcloud_api_token: ""
# token_secret_name: hcloud
# with_networks: false # Use the hcloud controller-manager with networks support https://github.com/hetznercloud/hcloud-cloud-controller-manager#networks-support
# network_name: # network name/ID: If you manage the network yourself it might still be required to let the CCM know about private networks
# service_account_name: cloud-controller-manager
#
# controller_image_tag: "latest"
# ## A dictionary of extra arguments to add to the openstack cloud controller manager daemonset
# ## Format:
# ## external_hcloud_cloud.controller_extra_args:
# ## arg1: "value1"
# ## arg2: "value2"
# controller_extra_args: {}
#
# load_balancers_location: # mutually exclusive with load_balancers_network_zone
# load_balancers_network_zone:
# load_balancers_disable_private_ingress: # set to true if using IPVS based plugins https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/main/docs/load_balancers.md#sample-service-with-networks
# load_balancers_use_private_ip: # set to true if using private networks
# load_balancers_enabled:
# network_routes_enabled:

View File

@@ -1,17 +0,0 @@
## Values for the external Huawei Cloud Controller
# external_huaweicloud_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
# external_huaweicloud_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
## Credentials to authenticate against Keystone API
## All of them are required Per default these values will be
## read from the environment.
# external_huaweicloud_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
# external_huaweicloud_access_key: "{{ lookup('env','OS_ACCESS_KEY') }}"
# external_huaweicloud_secret_key: "{{ lookup('env','OS_SECRET_KEY') }}"
# external_huaweicloud_region: "{{ lookup('env','OS_REGION_NAME') }}"
# external_huaweicloud_project_id: "{{ lookup('env','OS_TENANT_ID')| default(lookup('env','OS_PROJECT_ID'),true) }}"
# external_huaweicloud_cloud: "{{ lookup('env','OS_CLOUD') }}"
## The repo and tag of the external Huawei Cloud Controller image
# external_huawei_cloud_controller_image_repo: "swr.ap-southeast-1.myhuaweicloud.com"
# external_huawei_cloud_controller_image_tag: "v0.26.8"

View File

@@ -1,28 +0,0 @@
## When Oracle Cloud Infrastructure is used, set these variables
# oci_private_key:
# oci_region_id:
# oci_tenancy_id:
# oci_user_id:
# oci_user_fingerprint:
# oci_compartment_id:
# oci_vnc_id:
# oci_subnet1_id:
# oci_subnet2_id:
## Override these default/optional behaviors if you wish
# oci_security_list_management: All
## If you would like the controller to manage specific lists per subnet. This is a mapping of subnet ocids to security list ocids. Below are examples.
# oci_security_lists:
# ocid1.subnet.oc1.phx.aaaaaaaasa53hlkzk6nzksqfccegk2qnkxmphkblst3riclzs4rhwg7rg57q: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
# ocid1.subnet.oc1.phx.aaaaaaaahuxrgvs65iwdz7ekwgg3l5gyah7ww5klkwjcso74u3e4i64hvtvq: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
## If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
# oci_use_instance_principals: false
# oci_cloud_controller_version: 0.6.0
## If you would like to control OCI query rate limits for the controller
# oci_rate_limit:
# rate_limit_qps_read:
# rate_limit_qps_write:
# rate_limit_bucket_read:
# rate_limit_bucket_write:
## Other optional variables
# oci_cloud_controller_pull_source: (default iad.ocir.io/oracle/cloud-provider-oci)
# oci_cloud_controller_pull_secret: (name of pull secret to use if you define your own mirror above)

View File

@@ -18,7 +18,7 @@
# quay_image_repo: "{{ registry_host }}"
## Kubernetes components
# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
# kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubeadm"
# kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
# kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
@@ -82,7 +82,7 @@
# krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
## CentOS/Redhat/AlmaLinux
### For EL7, base and extras repo must be available, for EL8, baseos and appstream
### For EL8, baseos and appstream must be available,
### By default we enable those repo automatically
# rhel_enable_repos: false
### Docker / Containerd

View File

@@ -1,72 +0,0 @@
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
# openstack_blockstorage_version: "v1/v2/auto (default)"
# openstack_blockstorage_ignore_volume_az: yes
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
# openstack_lbaas_enabled: True
# openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
# openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
# openstack_lbaas_use_octavia: False
# openstack_lbaas_method: "ROUND_ROBIN"
# openstack_lbaas_provider: "haproxy"
# openstack_lbaas_create_monitor: "yes"
# openstack_lbaas_monitor_delay: "1m"
# openstack_lbaas_monitor_timeout: "30s"
# openstack_lbaas_monitor_max_retries: "3"
## Values for the external OpenStack Cloud Controller
# external_openstack_lbaas_enabled: true
# external_openstack_lbaas_floating_network_id: "Neutron network ID to get floating IP from"
# external_openstack_lbaas_floating_subnet_id: "Neutron subnet ID to get floating IP from"
# external_openstack_lbaas_method: ROUND_ROBIN
# external_openstack_lbaas_provider: amphora
# external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
# external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
# external_openstack_lbaas_manage_security_groups: false
# external_openstack_lbaas_create_monitor: false
# external_openstack_lbaas_monitor_delay: 5s
# external_openstack_lbaas_monitor_max_retries: 1
# external_openstack_lbaas_monitor_timeout: 3s
# external_openstack_lbaas_internal_lb: false
# external_openstack_network_ipv6_disabled: false
# external_openstack_network_internal_networks: []
# external_openstack_network_public_networks: []
# external_openstack_metadata_search_order: "configDrive,metadataService"
## Application credentials to authenticate against Keystone API
## Those settings will take precedence over username and password that might be set your environment
## All of them are required
# external_openstack_application_credential_name:
# external_openstack_application_credential_id:
# external_openstack_application_credential_secret:
## The tag of the external OpenStack Cloud Controller image
# external_openstack_cloud_controller_image_tag: "v1.28.2"
## Tags for the Cinder CSI images
## registry.k8s.io/sig-storage/csi-attacher
# cinder_csi_attacher_image_tag: "v4.4.2"
## registry.k8s.io/sig-storage/csi-provisioner
# cinder_csi_provisioner_image_tag: "v3.6.2"
## registry.k8s.io/sig-storage/csi-snapshotter
# cinder_csi_snapshotter_image_tag: "v6.3.2"
## registry.k8s.io/sig-storage/csi-resizer
# cinder_csi_resizer_image_tag: "v1.9.2"
## registry.k8s.io/sig-storage/livenessprobe
# cinder_csi_livenessprobe_image_tag: "v2.11.0"
## To use Cinder CSI plugin to provision volumes set this value to true
## Make sure to source in the openstack credentials
# cinder_csi_enabled: true
# cinder_csi_controller_replicas: 1
# storage_classes:
# - name: "cinder-csi"
# provisioner: "kubernetes.io/cinder"
# mount_options:
# - "discard"
# parameters:
# type: "thin"
# availability: "nova"
# reclaim_policy: "Delete"
# volume_binding_mode: "WaitForFirstConsumer"

View File

@@ -1,24 +0,0 @@
## Repo for UpClouds csi-driver: https://github.com/UpCloudLtd/upcloud-csi
## To use UpClouds CSI plugin to provision volumes set this value to true
## Remember to set UPCLOUD_USERNAME and UPCLOUD_PASSWORD
# upcloud_csi_enabled: true
# upcloud_csi_controller_replicas: 1
## Override used image tags
# upcloud_csi_provisioner_image_tag: "v3.1.0"
# upcloud_csi_attacher_image_tag: "v3.4.0"
# upcloud_csi_resizer_image_tag: "v1.4.0"
# upcloud_csi_plugin_image_tag: "v0.3.3"
# upcloud_csi_node_image_tag: "v2.5.0"
# upcloud_tolerations: []
## Storage class options
# storage_classes:
# - name: standard
# is_default: true
# expand_persistent_volumes: true
# parameters:
# tier: maxiops
# - name: hdd
# is_default: false
# expand_persistent_volumes: true
# parameters:
# tier: hdd

View File

@@ -1,32 +0,0 @@
## Values for the external vSphere Cloud Provider
# external_vsphere_vcenter_ip: "myvcenter.domain.com"
# external_vsphere_vcenter_port: "443"
# external_vsphere_insecure: "true"
# external_vsphere_user: "administrator@vsphere.local" # Can also be set via the `VSPHERE_USER` environment variable
# external_vsphere_password: "K8s_admin" # Can also be set via the `VSPHERE_PASSWORD` environment variable
# external_vsphere_datacenter: "DATACENTER_name"
# external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
## Vsphere version where located VMs
# external_vsphere_version: "6.7u3"
## Tags for the external vSphere Cloud Provider images
## gcr.io/cloud-provider-vsphere/cpi/release/manager
# external_vsphere_cloud_controller_image_tag: "latest"
## gcr.io/cloud-provider-vsphere/csi/release/syncer
# vsphere_syncer_image_tag: "v2.5.1"
## registry.k8s.io/sig-storage/csi-attacher
# vsphere_csi_attacher_image_tag: "v3.4.0"
## gcr.io/cloud-provider-vsphere/csi/release/driver
# vsphere_csi_controller: "v2.5.1"
## registry.k8s.io/sig-storage/livenessprobe
# vsphere_csi_liveness_probe_image_tag: "v2.6.0"
## registry.k8s.io/sig-storage/csi-provisioner
# vsphere_csi_provisioner_image_tag: "v3.1.0"
## registry.k8s.io/sig-storage/csi-resizer
## makes sense only for vSphere version >=7.0
# vsphere_csi_resizer_tag: "v1.3.0"
## To use vSphere CSI plugin to provision volumes set this value to true
# vsphere_csi_enabled: true
# vsphere_csi_controller_replicas: 1

View File

@@ -1,35 +0,0 @@
---
## Etcd auto compaction retention for mvcc key value store in hour
# etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
# etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM, 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
## This value is only relevant when deploying etcd with `etcd_deployment_type: docker`
# etcd_memory_limit: "512M"
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
# 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
# etcd_quota_backend_bytes: "2147483648"
# Maximum client request size in bytes the server will accept.
# etcd is designed to handle small key value pairs typical for metadata.
# Larger requests will work, but may increase the latency of other requests
# etcd_max_request_bytes: "1572864"
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true
## Enable distributed tracing
## To enable this experimental feature, set the etcd_experimental_enable_distributed_tracing: true, along with the
## etcd_experimental_distributed_tracing_sample_rate to choose how many samples to collect per million spans,
## the default sampling rate is 0 https://etcd.io/docs/v3.5/op-guide/monitoring/#distributed-tracing
# etcd_experimental_enable_distributed_tracing: false
# etcd_experimental_distributed_tracing_sample_rate: 100
# etcd_experimental_distributed_tracing_address: "localhost:4317"
# etcd_experimental_distributed_tracing_service_name: etcd

View File

@@ -4,7 +4,7 @@
# dashboard_enabled: false
# Helm deployment
helm_enabled: false
helm_enabled: true
# Registry deployment
registry_enabled: false
@@ -13,13 +13,13 @@ registry_enabled: false
# registry_disk_size: "10Gi"
# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_container_port: 10250
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 15s
# metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
# metrics_server_host_network: false
# metrics_server_replicas: 1
metrics_server_enabled: true
metrics_server_container_port: 10250
metrics_server_kubelet_insecure_tls: true
metrics_server_metric_resolution: 15s
metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
metrics_server_host_network: false
metrics_server_replicas: 1
# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
@@ -96,34 +96,43 @@ rbd_provisioner_enabled: false
# rbd_provisioner_storage_class: rbd
# rbd_provisioner_reclaim_policy: Delete
# Gateway API CRDs
gateway_api_enabled: false
# gateway_api_experimental_channel: false
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_service_type: LoadBalancer
# ingress_nginx_service_nodeport_http: 30080
# ingress_nginx_service_nodeport_https: 30081
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_nginx_service_type: NodePort
ingress_nginx_service_nodeport_http: 30080
ingress_nginx_service_nodeport_https: 30081
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
# map-hash-bucket-size: "128"
# ssl-protocols: "TLSv1.2 TLSv1.3"
ingress_nginx_nodeselector:
  - kubernetes.io/hostname: "k8s-worker-01"
  - kubernetes.io/hostname: "k8s-worker-02"
  - kubernetes.io/hostname: "k8s-worker-03"
ingress_nginx_tolerations:
  - key: "node-role.kubernetes.io/control-node"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  client-body-buffer-size: "50m"
  proxy-body-size: "100m"
  client-header-buffer-size: "2k"
# ingress_nginx_configmap_tcp_services:
# 9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
# 53: "kube-system/coredns:53"
# ingress_nginx_extra_args:
# - --default-ssl-certificate=default/foo-tls
# ingress_nginx_termination_grace_period_seconds: 300
# ingress_nginx_class: nginx
ingress_nginx_termination_grace_period_seconds: 300
ingress_nginx_class: nginx
# ingress_nginx_without_class: true
# ingress_nginx_default: false
@@ -136,23 +145,23 @@ ingress_alb_enabled: false
# alb_ingress_aws_debug: "false"
# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
# - key: node-role.kubernetes.io/control-plane
# effect: NoSchedule
# cert_manager_affinity:
# nodeAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 100
# preference:
# matchExpressions:
# - key: node-role.kubernetes.io/control-plane
# operator: In
# values:
# - ""
# cert_manager_nodeselector:
# kubernetes.io/os: "linux"
cert_manager_enabled: true
cert_manager_namespace: "cert-manager"
cert_manager_tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
cert_manager_affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: In
              values:
                - ""
cert_manager_nodeselector:
  kubernetes.io/os: "linux"
# cert_manager_trusted_internal_ca: |
# -----BEGIN CERTIFICATE-----
@@ -249,7 +258,7 @@ argocd_enabled: false
# argocd_admin_password: "password"
# The plugin manager for kubectl
krew_enabled: false
krew_enabled: true
krew_root_dir: "/usr/local/krew"
# Kube VIP

View File

@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.29.10
kube_version: v1.30.4
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@@ -73,12 +73,12 @@ kube_network_plugin: calico
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
kube_service_addresses: 172.24.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
kube_pods_subnet: 172.24.64.0/18
# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
@@ -157,7 +157,7 @@ kube_encrypt_secret_data: false
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
cluster_name: k8s.prod.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# dns_timeout: 2
@@ -169,7 +169,7 @@ ndots: 2
# Remove default cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
# remove_default_searchdomains: false
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
dns_mode: coredns_dual
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
@@ -180,26 +180,18 @@ nodelocaldns_health_port: 9254
nodelocaldns_second_health_port: 9256
nodelocaldns_bind_metrics_host_ip: false
nodelocaldns_secondary_skew_seconds: 5
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# - zones:
# - mydomain.tld
# nameservers:
# - 10.233.0.3
# cache: 5
# rewrite:
# - name website.tld website.namespace.svc.cluster.local
nodelocaldns_external_zones:
  - zones:
      - avroid.tech
      - avroid.team
      - avroid.cloud
      - adlinux.store
      - o2linux.org
    nameservers:
      - 10.2.4.10
      - 10.2.4.20
      - 10.3.0.101
    cache: 5
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
@@ -214,10 +206,23 @@ enable_coredns_k8s_endpoint_pod_names: false
# Forward extra domains to the coredns kubernetes plugin
# coredns_kubernetes_extra_domains: ''
coredns_external_zones:
  - zones:
      - avroid.tech
      - avroid.team
      - avroid.cloud
      - adlinux.store
      - o2linux.org
    nameservers:
      - 10.2.4.10
      - 10.2.4.20
      - 10.3.0.101
    cache: 5
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: host_resolvconf
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
deploy_netchecker: true
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(3) | ansible.utils.ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(4) | ansible.utils.ipaddr('address') }}"
@@ -248,7 +253,7 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# Use ansible_host as external api ip when copying over kubeconfig.
# kubeconfig_localhost_ansible_host: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
kubectl_localhost: false
# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
@@ -263,34 +268,34 @@ default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# kubelet_kubelet_cgroups_cgroupfs: "/system.slice/kubelet.service"
# Optionally reserve this space for kube daemons.
# kube_reserved: false
kube_reserved: true
## Uncomment to override default values
## The following two items need to be set when kube_reserved is true
# kube_reserved_cgroups_for_service_slice: kube.slice
# kube_reserved_cgroups: "/{{ kube_reserved_cgroups_for_service_slice }}"
# kube_memory_reserved: 256Mi
# kube_cpu_reserved: 100m
# kube_ephemeral_storage_reserved: 2Gi
kube_reserved_cgroups_for_service_slice: kube.slice
kube_reserved_cgroups: "/{{ kube_reserved_cgroups_for_service_slice }}"
kube_memory_reserved: 256Mi
kube_cpu_reserved: 100m
kube_ephemeral_storage_reserved: 2Gi
# kube_pid_reserved: "1000"
# Reservation for master hosts
# kube_master_memory_reserved: 512Mi
# kube_master_cpu_reserved: 200m
# kube_master_ephemeral_storage_reserved: 2Gi
kube_master_memory_reserved: 512Mi
kube_master_cpu_reserved: 200m
kube_master_ephemeral_storage_reserved: 2Gi
# kube_master_pid_reserved: "1000"
## Optionally reserve resources for OS system daemons.
# system_reserved: true
system_reserved: true
## Uncomment to override default values
## The following two items need to be set when system_reserved is true
# system_reserved_cgroups_for_service_slice: system.slice
# system_reserved_cgroups: "/{{ system_reserved_cgroups_for_service_slice }}"
# system_memory_reserved: 512Mi
# system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
system_reserved_cgroups_for_service_slice: system.slice
system_reserved_cgroups: "/{{ system_reserved_cgroups_for_service_slice }}"
system_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_ephemeral_storage_reserved: 2Gi
## Reservation for master hosts
# system_master_memory_reserved: 256Mi
# system_master_cpu_reserved: 250m
# system_master_ephemeral_storage_reserved: 2Gi
system_master_memory_reserved: 256Mi
system_master_cpu_reserved: 250m
system_master_ephemeral_storage_reserved: 2Gi
## Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
@@ -331,32 +336,15 @@ persistent_volumes_enabled: false
# nvidia_gpu_device_plugin_container: "registry.k8s.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"
## Support tls min version, Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
# tls_min_version: ""
tls_min_version: "VersionTLS12"
## Support tls cipher suites.
# tls_cipher_suites: {}
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
# - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
# - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
# - TLS_ECDHE_RSA_WITH_RC4_128_SHA
# - TLS_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA256
# - TLS_RSA_WITH_AES_128_GCM_SHA256
# - TLS_RSA_WITH_AES_256_CBC_SHA
# - TLS_RSA_WITH_AES_256_GCM_SHA384
# - TLS_RSA_WITH_RC4_128_SHA
tls_cipher_suites:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_RSA_WITH_AES_256_GCM_SHA384
## Amount of time to retain events. (default 1h0m0s)
event_ttl_duration: "1h0m0s"

View File

@@ -19,7 +19,7 @@ calico_cni_name: k8s-pod-network
# add default ippool name
# calico_pool_name: "default-pool"
# add default ippool blockSize
# add default ippool blockSize (defaults kube_network_node_prefix)
calico_pool_blocksize: 26
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)

View File

@@ -1,271 +0,0 @@
---
# cilium_version: "v1.15.4"
# Log-level
# cilium_debug: false
# cilium_mtu: ""
# cilium_enable_ipv4: true
# cilium_enable_ipv6: false
# Enable l2 announcement from cilium to replace Metallb Ref: https://docs.cilium.io/en/v1.14/network/l2-announcements/
cilium_l2announcements: false
# Cilium agent health port
# cilium_agent_health_port: "9879"
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# `kubectl get ciliumid`
# - "kvstore" stores identities in an etcd kvstore.
# - In order to support External Workloads, "crd" is required
# - Ref: https://docs.cilium.io/en/stable/gettingstarted/external-workloads/#setting-up-support-for-external-workloads-beta
# - KVStore operations are only required when cilium-operator is running with any of the below options:
# - --synchronize-k8s-services
# - --synchronize-k8s-nodes
# - --identity-allocation-mode=kvstore
# - Ref: https://docs.cilium.io/en/stable/internals/cilium_operator/#kvstore-operations
# cilium_identity_allocation_mode: kvstore
# Etcd SSL dirs
# cilium_cert_dir: /etc/cilium/certs
# kube_etcd_cacert_file: ca.pem
# kube_etcd_cert_file: cert.pem
# kube_etcd_key_file: cert-key.pem
# Limits for apps
# cilium_memory_limit: 500M
# cilium_cpu_limit: 500m
# cilium_memory_requests: 64M
# cilium_cpu_requests: 100m
# Overlay Network Mode
# cilium_tunnel_mode: vxlan
# LoadBalancer Mode (snat/dsr/hybrid) Ref: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/#dsr-mode
# cilium_loadbalancer_mode: snat
# Optional features
# cilium_enable_prometheus: false
# Enable if you want to make use of hostPort mappings
# cilium_enable_portmap: false
# Monitor aggregation level (none/low/medium/maximum)
# cilium_monitor_aggregation: medium
# The monitor aggregation flags determine which TCP flags which, upon the
# first observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
# cilium_monitor_aggregation_flags: "all"
# Kube Proxy Replacement mode (strict/partial)
# cilium_kube_proxy_replacement: partial
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:
# http://docs.cilium.io/en/stable/install/upgrade/#changes-that-may-require-action
# cilium_preallocate_bpf_maps: false
# `cilium_tofqdns_enable_poller` is deprecated in 1.8, removed in 1.9
# cilium_tofqdns_enable_poller: false
# `cilium_enable_legacy_services` is deprecated in 1.6, removed in 1.9
# cilium_enable_legacy_services: false
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 and 255. Only relevant when building a mesh of clusters.
# This value is not defined by default
# cilium_cluster_id:
# Deploy cilium even if kube_network_plugin is not cilium.
# This enables to deploy cilium alongside another CNI to replace kube-proxy.
# cilium_deploy_additionally: false
# Auto direct nodes routes can be used to advertise pods routes in your cluster
# without any tunneling (with `cilium_tunnel_mode` sets to `disabled`).
# This works only if you have a L2 connectivity between all your nodes.
# You wil also have to specify the variable `cilium_native_routing_cidr` to
# make this work. Please refer to the cilium documentation for more
# information about this kind of setups.
# cilium_auto_direct_node_routes: false
# Allows to explicitly specify the IPv4 CIDR for native routing.
# When specified, Cilium assumes networking for this CIDR is preconfigured and
# hands traffic destined for that range to the Linux network stack without
# applying any SNAT.
# Generally speaking, specifying a native routing CIDR implies that Cilium can
# depend on the underlying networking stack to route packets to their
# destination. To offer a concrete example, if Cilium is configured to use
# direct routing and the Kubernetes CIDR is included in the native routing CIDR,
# the user must configure the routes to reach pods, either manually or by
# setting the auto-direct-node-routes flag.
# cilium_native_routing_cidr: ""
# Allows to explicitly specify the IPv6 CIDR for native routing.
# cilium_native_routing_cidr_ipv6: ""
# Enable transparent network encryption.
# cilium_encryption_enabled: false
# Encryption method. Can be either ipsec or wireguard.
# Only effective when `cilium_encryption_enabled` is set to true.
# cilium_encryption_type: "ipsec"
# Enable encryption for pure node to node traffic.
# This option is only effective when `cilium_encryption_type` is set to `ipsec`.
# cilium_ipsec_node_encryption: false
# If your kernel or distribution does not support WireGuard, Cilium agent can be configured to fall back on the user-space implementation.
# When this flag is enabled and Cilium detects that the kernel has no native support for WireGuard,
# it will fallback on the wireguard-go user-space implementation of WireGuard.
# This option is only effective when `cilium_encryption_type` is set to `wireguard`.
# cilium_wireguard_userspace_fallback: false
# IP Masquerade Agent
# https://docs.cilium.io/en/stable/concepts/networking/masquerading/
# By default, all packets from a pod destined to an IP address outside of the cilium_native_routing_cidr range are masqueraded
# cilium_ip_masq_agent_enable: false
### A packet sent from a pod to a destination which belongs to any CIDR from the nonMasqueradeCIDRs is not going to be masqueraded
# cilium_non_masquerade_cidrs:
# - 10.0.0.0/8
# - 172.16.0.0/12
# - 192.168.0.0/16
# - 100.64.0.0/10
# - 192.0.0.0/24
# - 192.0.2.0/24
# - 192.88.99.0/24
# - 198.18.0.0/15
# - 198.51.100.0/24
# - 203.0.113.0/24
# - 240.0.0.0/4
### Indicates whether to masquerade traffic to the link local prefix.
### If the masqLinkLocal is not set or set to false, then 169.254.0.0/16 is appended to the non-masquerade CIDRs list.
# cilium_masq_link_local: false
### A time interval at which the agent attempts to reload config from disk
# cilium_ip_masq_resync_interval: 60s
# Hubble
### Enable Hubble without install
# cilium_enable_hubble: false
### Enable Hubble Metrics
# cilium_enable_hubble_metrics: false
### if cilium_enable_hubble_metrics: true
# cilium_hubble_metrics: {}
# - dns
# - drop
# - tcp
# - flow
# - icmp
# - http
### Enable Hubble install
# cilium_hubble_install: false
### Enable auto generate certs if cilium_hubble_install: true
# cilium_hubble_tls_generate: false
# IP address management mode for v1.9+.
# https://docs.cilium.io/en/v1.9/concepts/networking/ipam/
# cilium_ipam_mode: kubernetes
# Extra arguments for the Cilium agent
# cilium_agent_custom_args: []
# For adding and mounting extra volumes to the cilium agent
# cilium_agent_extra_volumes: []
# cilium_agent_extra_volume_mounts: []
# cilium_agent_extra_env_vars: []
# cilium_operator_replicas: 2
# The address at which the cillium operator bind health check api
# cilium_operator_api_serve_addr: "127.0.0.1:9234"
## A dictionary of extra config variables to add to cilium-config, formatted like:
## cilium_config_extra_vars:
## var1: "value1"
## var2: "value2"
# cilium_config_extra_vars: {}
# For adding and mounting extra volumes to the cilium operator
# cilium_operator_extra_volumes: []
# cilium_operator_extra_volume_mounts: []
# Extra arguments for the Cilium Operator
# cilium_operator_custom_args: []
# Name of the cluster. Only relevant when building a mesh of clusters.
# cilium_cluster_name: default
# Make Cilium take ownership over the `/etc/cni/net.d` directory on the node, renaming all non-Cilium CNI configurations to `*.cilium_bak`.
# This ensures no Pods can be scheduled using other CNI plugins during Cilium agent downtime.
# Available for Cilium v1.10 and up.
# cilium_cni_exclusive: true
# Configure the log file for CNI logging with retention policy of 7 days.
# Disable CNI file logging by setting this field to empty explicitly.
# Available for Cilium v1.12 and up.
# cilium_cni_log_file: "/var/run/cilium/cilium-cni.log"
# -- Configure cgroup related configuration
# -- Enable auto mount of cgroup2 filesystem.
# When `cilium_cgroup_auto_mount` is enabled, cgroup2 filesystem is mounted at
# `cilium_cgroup_host_root` path on the underlying host and inside the cilium agent pod.
# If users disable `cilium_cgroup_auto_mount`, it's expected that users have mounted
# cgroup2 filesystem at the specified `cilium_cgroup_auto_mount` volume, and then the
# volume will be mounted inside the cilium agent pod at the same path.
# Available for Cilium v1.11 and up
# cilium_cgroup_auto_mount: true
# -- Configure cgroup root where cgroup2 filesystem is mounted on the host
# cilium_cgroup_host_root: "/run/cilium/cgroupv2"
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
# cilium_bpf_map_dynamic_size_ratio: "0.0"
# -- Enables masquerading of IPv4 traffic leaving the node from endpoints.
# Available for Cilium v1.10 and up
# cilium_enable_ipv4_masquerade: true
# -- Enables masquerading of IPv6 traffic leaving the node from endpoints.
# Available for Cilium v1.10 and up
# cilium_enable_ipv6_masquerade: true
# -- Enable native IP masquerade support in eBPF
# cilium_enable_bpf_masquerade: false
# -- Configure whether direct routing mode should route traffic via
# host stack (true) or directly and more efficiently out of BPF (false) if
# the kernel supports it. The latter has the implication that it will also
# bypass netfilter in the host namespace.
# cilium_enable_host_legacy_routing: true
# -- Enable use of the remote node identity.
# ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity
# cilium_enable_remote_node_identity: true
# -- Enable the use of well-known identities.
# cilium_enable_well_known_identities: false
# cilium_enable_bpf_clock_probe: true
# -- Whether to enable CNP status updates.
# cilium_disable_cnp_status_updates: true
# A list of extra rules variables to add to clusterrole for cilium operator, formatted like:
# cilium_clusterrole_rules_operator_extra_vars:
# - apiGroups:
# - '""'
# resources:
# - pods
# verbs:
# - delete
# - apiGroups:
# - '""'
# resources:
# - nodes
# verbs:
# - list
# - watch
# resourceNames:
# - toto
# cilium_clusterrole_rules_operator_extra_vars: []

View File

@@ -1,51 +0,0 @@
---
# custom_cni network plugin configuration
# There are two deployment options to choose from, select one
## OPTION 1 - Static manifest files
## With this option, referred manifest file will be deployed
## as if the `kubectl apply -f` method was used with it.
#
## List of Kubernetes resource manifest files
## See tests/files/custom_cni/README.md for example
# custom_cni_manifests: []
## OPTION 1 EXAMPLE - Cilium static manifests in Kubespray tree
# custom_cni_manifests:
# - "{{ playbook_dir }}/../tests/files/custom_cni/cilium.yaml"
## OPTION 2 - Helm chart application
## This allows the CNI backend to be deployed to Kubespray cluster
## as common Helm application.
#
## Helm release name - how the local instance of deployed chart will be named
# custom_cni_chart_release_name: ""
#
## Kubernetes namespace to deploy into
# custom_cni_chart_namespace: "kube-system"
#
## Helm repository name - how the local record of Helm repository will be named
# custom_cni_chart_repository_name: ""
#
## Helm repository URL
# custom_cni_chart_repository_url: ""
#
## Helm chart reference - path to the chart in the repository
# custom_cni_chart_ref: ""
#
## Helm chart version
# custom_cni_chart_version: ""
#
## Custom Helm values to be used for deployment
# custom_cni_chart_values: {}
## OPTION 2 EXAMPLE - Cilium deployed from official public Helm chart
# custom_cni_chart_namespace: kube-system
# custom_cni_chart_release_name: cilium
# custom_cni_chart_repository_name: cilium
# custom_cni_chart_repository_url: https://helm.cilium.io
# custom_cni_chart_ref: cilium/cilium
# custom_cni_chart_version: 1.14.3
# custom_cni_chart_values:
# cluster:
# name: "cilium-demo"

View File

@@ -1,18 +0,0 @@
# see roles/network_plugin/flannel/defaults/main.yml
## interface that should be used for flannel operations
## This is actually an inventory cluster-level item
# flannel_interface:
## Select interface that should be used for flannel operations by regexp on Name or IP
## This is actually an inventory cluster-level item
## example: select interface with ip from net 10.0.0.0/23
## single quote and escape backslashes
# flannel_interface_regexp: '10\\.0\\.[0-2]\\.\\d{1,3}'
# You can choose what type of flannel backend to use: 'vxlan', 'host-gw' or 'wireguard'
# please refer to flannel's docs : https://github.com/coreos/flannel/blob/master/README.md
# flannel_backend_type: "vxlan"
# flannel_vxlan_vni: 1
# flannel_vxlan_port: 8472
# flannel_vxlan_direct_routing: false

View File

@@ -1,63 +0,0 @@
---
# geneve or vlan
kube_ovn_network_type: geneve
# geneve, vxlan or stt. ATTENTION: some networkpolicy cannot take effect when using vxlan and stt need custom compile ovs kernel module
kube_ovn_tunnel_type: geneve
## The nic to support container network can be a nic name or a group of regex separated by comma e.g: 'enp6s0f0,eth.*', if empty will use the nic that the default route use.
# kube_ovn_iface: eth1
## The MTU used by pod iface in overlay networks (default iface MTU - 100)
# kube_ovn_mtu: 1333
## Enable hw-offload, disable traffic mirror and set the iface to the physical port. Make sure that there is an IP address bind to the physical port.
kube_ovn_hw_offload: false
# traffic mirror
kube_ovn_traffic_mirror: false
# kube_ovn_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# kube_ovn_default_interface_name: eth0
kube_ovn_external_address: 8.8.8.8
kube_ovn_external_address_ipv6: 2400:3200::1
kube_ovn_external_dns: alauda.cn
# kube_ovn_default_gateway: 10.233.64.1,fd85:ee78:d8a6:8607::1:0
kube_ovn_default_gateway_check: true
kube_ovn_default_logical_gateway: false
# kube_ovn_default_exclude_ips: 10.16.0.1
kube_ovn_node_switch_cidr: 100.64.0.0/16
kube_ovn_node_switch_cidr_ipv6: fd00:100:64::/64
## vlan config, set default interface name and vlan id
# kube_ovn_default_interface_name: eth0
kube_ovn_default_vlan_id: 100
kube_ovn_vlan_name: product
## pod nic type, support: veth-pair or internal-port
kube_ovn_pod_nic_type: veth_pair
## Enable load balancer
kube_ovn_enable_lb: true
## Enable network policy support
kube_ovn_enable_np: true
## Enable external vpc support
kube_ovn_enable_external_vpc: true
## Enable checksum
kube_ovn_encap_checksum: true
## enable ssl
kube_ovn_enable_ssl: false
## dpdk
kube_ovn_dpdk_enabled: false
## enable interconnection to an existing IC database server.
kube_ovn_ic_enable: false
kube_ovn_ic_autoroute: true
kube_ovn_ic_dbhost: "127.0.0.1"
kube_ovn_ic_zone: "kubernetes"

View File

@@ -1,73 +0,0 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Kube router version
# Default to v2
# kube_router_version: "v2.0.0"
# Uncomment to use v1 (Deprecated)
# kube_router_version: "v1.6.0"
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
# Enables Network Policy -- sets up iptables to provide ingress firewall for pods
# kube_router_run_firewall: true
# Enables Service Proxy -- sets up IPVS for Kubernetes Services
# see docs/kube-router.md "Caveats" section
# kube_router_run_service_proxy: false
# Add Cluster IP of the service to the RIB so that it gets advertises to the BGP peers.
# kube_router_advertise_cluster_ip: false
# Add External IP of service to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_external_ip: false
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_loadbalancer_ip: false
# Enables BGP graceful restarts
# kube_router_bgp_graceful_restart: true
# Adjust manifest of kube-router daemonset template with DSR needed changes
# kube_router_enable_dsr: false
# Array of arbitrary extra arguments to kube-router, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md
# kube_router_extra_args: []
# ASN number of the cluster, used when communicating with external BGP routers
# kube_router_cluster_asn: ~
# ASN numbers of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr.
# kube_router_peer_router_asns: ~
# The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's.
# kube_router_peer_router_ips: ~
# The remote port of the external BGP to which all nodes will peer. If not set, default BGP port (179) will be used.
# kube_router_peer_router_ports: ~
# Setups node CNI to allow hairpin mode, requires node reboots, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md#hairpin-mode
# kube_router_support_hairpin_mode: false
# Select DNS Policy ClusterFirstWithHostNet, ClusterFirst, etc.
# kube_router_dns_policy: ClusterFirstWithHostNet
# Array of annotations for master
# kube_router_annotations_master: []
# Array of annotations for every node
# kube_router_annotations_node: []
# Array of common annotations for every node
# kube_router_annotations_all: []
# Enables scraping kube-router metrics with Prometheus
# kube_router_enable_metrics: false
# Path to serve Prometheus metrics on
# kube_router_metrics_path: /metrics
# Prometheus metrics port to use
# kube_router_metrics_port: 9255

View File

@@ -1,6 +0,0 @@
---
# private interface, on a l2-network
macvlan_interface: "eth1"
# Enable nat in default gateway network interface
enable_nat_default_gateway: true

View File

@@ -1,64 +0,0 @@
# see roles/network_plugin/weave/defaults/main.yml
# Weave's network password for encryption, if null then no network encryption.
# weave_password: ~
# If set to 1, disable checking for new Weave Net versions (default is blank,
# i.e. check is enabled)
# weave_checkpoint_disable: false
# Soft limit on the number of connections between peers. Defaults to 100.
# weave_conn_limit: 100
# Weave Net defaults to enabling hairpin on the bridge side of the veth pair
# for containers attached. If you need to disable hairpin, e.g. your kernel is
# one of those that can panic if hairpin is enabled, then you can disable it by
# setting `HAIRPIN_MODE=false`.
# weave_hairpin_mode: true
# The range of IP addresses used by Weave Net and the subnet they are placed in
# (CIDR format; default 10.32.0.0/12)
# weave_ipalloc_range: "{{ kube_pods_subnet }}"
# Set to 0 to disable Network Policy Controller (default is on)
# weave_expect_npc: "{{ enable_network_policy }}"
# List of addresses of peers in the Kubernetes cluster (default is to fetch the
# list from the api-server)
# weave_kube_peers: ~
# Set the initialization mode of the IP Address Manager (defaults to consensus
# amongst the KUBE_PEERS)
# weave_ipalloc_init: ~
# Set the IP address used as a gateway from the Weave network to the host
# network - this is useful if you are configuring the addon as a static pod.
# weave_expose_ip: ~
# Address and port that the Weave Net daemon will serve Prometheus-style
# metrics on (defaults to 0.0.0.0:6782)
# weave_metrics_addr: ~
# Address and port that the Weave Net daemon will serve status requests on
# (defaults to disabled)
# weave_status_addr: ~
# Weave Net defaults to 1376 bytes, but you can set a smaller size if your
# underlying network has a tighter limit, or set a larger size for better
# performance if your network supports jumbo frames (e.g. 8916)
# weave_mtu: 1376
# Set to 1 to preserve the client source IP address when accessing Service
# annotated with `service.spec.externalTrafficPolicy=Local`. The feature works
# only with Weave IPAM (default).
# weave_no_masq_local: true
# set to nft to use nftables backend for iptables (default is iptables)
# weave_iptables_backend: iptables
# Extra variables that passing to launch.sh, useful for enabling seed mode, see
# https://www.weave.works/docs/net/latest/tasks/ipam/ipam/
# weave_extra_args: ~
# Extra variables for weave_npc that passing to launch.sh, useful for change log level, ex --log-level=error
# weave_npc_extra_args: ~

View File

@@ -2,37 +2,49 @@
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
k8s-control-01 ansible_host=k8s-control-01.avroid.tech ip=10.2.20.31 etcd_member_name=etcd1
k8s-control-02 ansible_host=k8s-control-02.avroid.tech ip=10.2.20.32 etcd_member_name=etcd2
k8s-control-03 ansible_host=k8s-control-03.avroid.tech ip=10.2.20.33 etcd_member_name=etcd3
k8s-worker-01 ansible_host=k8s-worker-01.avroid.tech
k8s-worker-02 ansible_host=k8s-worker-02.avroid.tech
k8s-worker-03 ansible_host=k8s-worker-03.avroid.tech
k8s-build-01 ansible_host=k8s-build-01.avroid.tech
k8s-build-02 ansible_host=k8s-build-02.avroid.tech
k8s-build-03 ansible_host=k8s-build-03.avroid.tech
k8s-build-04 ansible_host=k8s-build-04.avroid.tech
k8s-build-05 ansible_host=k8s-build-05.avroid.tech
k8s-build-06 ansible_host=k8s-build-06.avroid.tech
k8s-build-07 ansible_host=k8s-build-07.avroid.tech
# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube_control_plane]
# node1
# node2
# node3
k8s-control-01
k8s-control-02
k8s-control-03
[etcd]
# node1
# node2
# node3
k8s-control-01
k8s-control-02
k8s-control-03
[kube_node]
# node2
# node3
# node4
# node5
# node6
k8s-worker-01
k8s-worker-02
k8s-worker-03
k8s-build-01
k8s-build-02
k8s-build-03
k8s-build-04
k8s-build-05
k8s-build-06
k8s-build-07
[calico_rr]
#[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
#calico_rr