Compare commits


10 Commits

Author SHA1 Message Date
Rustam Tagaev
bc6ef3d7a0 [DO-1716] set version for valkey and increase limits (!148)
[DO-1716]

Co-authored-by: Rustam Tagaev <rustam.tagaev@avroid.tech>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/148
2025-03-18 17:26:48 +03:00
Rustam Tagaev
960ad658a1 [DO-1716] add valkey for tavro-cloud-test (!147)
Co-authored-by: Rustam Tagaev <rustam.tagaev@avroid.tech>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/147
2025-03-18 17:07:42 +03:00
Dmitrij Prokov
2034b5f40f [DO-1493] Up cpu requests (!146)
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/146
Reviewed-by: Rustam Tagaev <rustam.tagaev@avroid.team>
Co-authored-by: Dmitrij Prokov <dmitrij.prokov@avroid.team>
Co-committed-by: Dmitrij Prokov <dmitrij.prokov@avroid.team>
2025-03-17 16:54:02 +03:00
Dmitrij Prokov
d421a147d4 [DO-1493] Up cpu limits (!145)
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/145
Reviewed-by: Rustam Tagaev <rustam.tagaev@avroid.team>
2025-03-17 14:51:38 +03:00
Denis Patrakeev
f58e21de72 [DO-1690] Add docs (!143)
[DO-1690]

Co-authored-by: denis.patrakeev <denis.patrakeev@avroid.tech>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/143
Reviewed-by: Rustam Tagaev <rustam.tagaev@avroid.team>
2025-03-14 15:18:29 +03:00
Boris Shestov
23896dc0f7 [hotfix] Change config (!142)
Co-authored-by: Boris Shestov <shestov1989@mail.ru>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/142
Reviewed-by: Denis Patrakeev <denis.patrakeev@avroid.team>
2025-03-13 18:46:31 +03:00
Boris Shestov
fec18d2ace [hotfix] Fix variable (!140)
Co-authored-by: Boris Shestov <shestov1989@mail.ru>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/140
Reviewed-by: Denis Patrakeev <denis.patrakeev@avroid.team>
2025-03-13 16:00:48 +03:00
Boris Shestov
b679668a2e [hotfix] Allow k8s repo (!139)
Co-authored-by: Boris Shestov <shestov1989@mail.ru>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/139
Reviewed-by: Denis Patrakeev <denis.patrakeev@avroid.team>
2025-03-13 12:42:29 +03:00
Boris Shestov
62d13eb792 [hotfix] Update version (!138)
Co-authored-by: Boris Shestov <shestov1989@mail.ru>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/138
Reviewed-by: Denis Patrakeev <denis.patrakeev@avroid.team>
2025-03-13 12:31:09 +03:00
Boris Shestov
1efba5374d [DO-1689] Add deploy with argocd (!135)
Co-authored-by: Boris Shestov <shestov1989@mail.ru>
Co-authored-by: Denis Patrakeev <denis.patrakeev@avroid.team>
Co-authored-by: Rustam Tagaev <rustam.tagaev@avroid.team>
Reviewed-on: https://git.avroid.tech/K8s/k8s-configs/pulls/135
Reviewed-by: Denis Patrakeev <denis.patrakeev@avroid.team>
Reviewed-by: Rustam Tagaev <rustam.tagaev@avroid.team>
2025-03-13 11:31:33 +03:00
9 changed files with 295 additions and 10 deletions


@@ -1,6 +1,59 @@
# k8s-configs
# Repository with configurations for the applications deployed to the K8s clusters
## Configuring external secrets
## Repository directory structure
```bash
├── clusters                                  # directory containing one subdirectory per cluster
│   ├── k8s-avroid-office.prod.local          # production cluster name
│   │   └── namespaces                        # directories for the namespaces created in the k8s cluster
│   │       ├── argocd                        # namespace for MANUAL deployment of ArgoCD
│   │       │   ├── argo-cd                   # ArgoCD
│   │       │   ├── argocd-apps               # ArgoCD Bootstrapping meta-applications
│   │       │   ├── argocd-namespace.yaml     # namespace manifest
│   │       │   └── README.md                 # ArgoCD deployment procedure
│   │       │
│   │       ├── example                       # example namespace
│   │       │   ├── <APPLICATION/SERVICE>
│   │       │   └── example.yaml              # namespace manifest
│   │       │
│   │       ├── <NAMESPACE>                   # namespace for manual or automatic deployment by ArgoCD
│   │       │   ├── .rbac                     # manifests for various policies or secrets
│   │       │   ├── app1                      # application 1
│   │       │   ├── app2                      # application 2
│   │       │   ├── ...
│   │       │   ├── appN                      # application N
│   │       │   └── <NAMESPACE>.yaml          # namespace manifest
│   │       │
│   │       ├── kube-prometheus-stack         # dedicated namespace with the kube-prometheus-stack monitoring the whole cluster
│   │       │
│   │       ├── jenkins-builds                # dedicated namespace for integration with the production Jenkins
│   │       │
│   │       ├── huawei-csi                    # dedicated namespace for Huawei CSI, integration with the storage arrays
│   │       │
│   │       └── vault-infra                   # dedicated namespace for Bank Vaults, integration with HashiCorp Vault
│   │
│   ├── ...                                   # production cluster name
│   │
│   └── <K8S_CLUSTER_N>                       # production cluster name
└── README.md                                 # this file
```
## ArgoCD. Automating service deployment
ArgoCD is configured to watch the `master` branch of this repository and, depending on the instance,
process only the files matching the following mask:
```bash
Processed files:
clusters/<K8S_CLUSTER>/namespaces/*/argocd-apps-*.yaml
Exclusions:
clusters/<K8S_CLUSTER>/namespaces/{argocd/*,example/*,vault-infra/*}
```
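The mask can be exercised locally with ordinary globbing. The sketch below is illustrative only: the cluster and namespace names are invented, and this script is not part of the repository — it just demonstrates which files the pattern and exclusions select.

```bash
# Build a throwaway tree, then list the files ArgoCD would pick up:
# clusters/<K8S_CLUSTER>/namespaces/*/argocd-apps-*.yaml minus the exclusions.
set -eu
tmp=$(mktemp -d)
ns="$tmp/clusters/k8s-demo.prod.local/namespaces"
mkdir -p "$ns/avroid-prod" "$ns/argocd" "$ns/example" "$ns/vault-infra"
touch "$ns/avroid-prod/argocd-apps-webhook.yaml" \
      "$ns/argocd/argocd-apps-root.yaml" \
      "$ns/example/argocd-apps-demo.yaml"
matched=$(find "$tmp/clusters" -path '*/namespaces/*/argocd-apps-*.yaml' \
  | grep -Ev '/namespaces/(argocd|example|vault-infra)/' || true)
echo "$matched"   # only the avroid-prod file survives the filter
rm -rf "$tmp"
```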
## Configuring external secrets from HashiCorp Vault using Bank Vaults
[Link to the official docs](https://bank-vaults.dev)
@@ -61,7 +114,7 @@ kubectl apply -f vault-service-account.yaml
A simple example for testing: you will see your secret in the logs
```yaml
```bash
kubectl apply -f - <<"EOF"
apiVersion: apps/v1
kind: Deployment
@@ -131,7 +184,6 @@ data:
EOH
destination = "/vault/secrets/config.yaml" # this is the final configuration file of your application
}
EOF
---
apiVersion: v1
kind: Secret
@@ -146,6 +198,7 @@ metadata:
type: Opaque
data:
  FOO_1: dmF1bHQ6c2FuZGJveC9kYXRhL2s4cy92YXVsdC10ZXN0I0ZPTw==
EOF
```
In the Secret, the string with the Bank Vaults secret address must be converted to base64; this is done as follows:
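The hunk is cut off at this point, but the step itself is a plain base64 encoding of the `vault:` reference; as a sketch, for the `FOO_1` value shown above it would look like this:

```bash
# Encode the Bank Vaults reference exactly as it must appear in the
# Secret's data field (this matches the FOO_1 value in the diff above):
printf '%s' 'vault:sandbox/data/k8s/vault-test#FOO' | base64
# → dmF1bHQ6c2FuZGJveC9kYXRhL2s4cy92YXVsdC10ZXN0I0ZPTw==
```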


@@ -19,10 +19,10 @@ metadata:
   app.kubernetes.io/managed-by: argocd
 spec:
   hard:
-    requests.cpu: "4"
+    requests.cpu: "8"
     requests.memory: "16Gi"
     requests.storage: "100Gi"
-    limits.cpu: "16"
+    limits.cpu: "22"
     limits.memory: 32Gi
     configmaps: "200"
     resourcequotas: "1"


@@ -0,0 +1,51 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: webhook-receiver
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: webhook-receiver
  destination:
    server: https://kubernetes.default.svc
    namespace: avroid-prod
  sources:
    - repoURL: https://git.avroid.tech/K8s/k8s-configs.git
      targetRevision: master
      ref: values
    - repoURL: https://nexus.avroid.tech/repository/devops-helm-proxy-helm/
      chart: "actual-devops/webhook-receiver"
      targetRevision: 0.3.0
      helm:
        valueFiles:
          - $values/clusters/k8s-avroid-office.prod.local/namespaces/avroid-prod/automations-tools/webhook-receiver/values-override.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - ApplyOutOfSyncOnly=true
      - CreateNamespace=true
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: webhook-receiver
  namespace: argocd
  # Finalizer that ensures the project is not deleted while it is still referenced by an application
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  sourceRepos:
    - https://git.avroid.tech/K8s/k8s-configs.git
    - https://nexus.avroid.tech/repository/devops-helm-proxy-helm/
  # Only permit applications to deploy to the avroid-prod namespace in the same cluster
  destinations:
    - namespace: avroid-prod
      server: https://kubernetes.default.svc
  # Deny all cluster-scoped resources from being created, except for Namespace
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace


@@ -0,0 +1,43 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webhook-receiver-in
  namespace: avroid-prod
  labels:
    app.kubernetes.io/managed-by: argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: webhook-receiver
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8081
          protocol: TCP
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webhook-receiver-out
  namespace: avroid-prod
  labels:
    app.kubernetes.io/managed-by: argocd
spec:
  podSelector: {}
  policyTypes:
    - Egress
  ingress: []
  egress:
    - to:
        - ipBlock:
            # office-balancer.avroid.tech
            cidr: 10.2.16.2/32
      ports:
        - port: 443
          protocol: TCP
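One detail worth noting in `webhook-receiver-in`: the two entries under `ingress` are independent rules, so the first admits traffic to port 8081 from any source, and the second admits traffic on all ports from the `ingress-nginx` namespace. If the intent was "port 8081 only from ingress-nginx", the selector and the port would have to live in a single rule — a sketch, assuming that intent:

```yaml
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - port: 8081
          protocol: TCP
```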


@@ -0,0 +1,67 @@
replicaCount: 2
image:
  repository: ghcr.io/actual-devops/webhook-receiver
  pullPolicy: IfNotPresent
  tag: "0.2.0-1"
serviceAccount:
  create: false
  automount: true
  annotations: {}
  name: "vault"
ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: webhook-receiver.avroid.tech
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 50m
    memory: 64Mi
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
volumeMounts: []
configMap:
  configPath: "/vault/secrets/config.yaml"
  configTemplate: |
    {{ with secret "team-devops/data/services/ci-cd/webhook-receiver" }}
    server_port: 8081
    jenkins:
      url: "https://jenkins.avroid.tech"
      user: {{ .Data.data.jenkins_user }}
      pass: {{ .Data.data.jenkins_pass }}
      token: {{ .Data.data.jenkins_token }}
    allowed_webhooks:
      - repo_name: 'DevOps/ansible'
        run_jobs:
          - job_path: 'job/Automation/job/DevOps/job/vault-policies-and-roles-update'
            parameterized_job: false
      - repo_name: 'DevOps/jenkins-pipelines'
        run_jobs:
          - job_path: 'job/jobs-dsl/job/jobs-dsl'
            parameterized_job: true
    {{ end }}
annotations:
  vault.security.banzaicloud.io/vault-addr: "https://vault.avroid.tech"
  vault.security.banzaicloud.io/vault-role: "avroid-prod"
  vault.security.banzaicloud.io/vault-skip-verify: "false"
  vault.security.banzaicloud.io/vault-path: "avroid-office"
  vault.security.banzaicloud.io/run-as-user: "100"
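For orientation, once the Bank Vaults webhook injects the secret, the template above renders into `/vault/secrets/config.yaml` roughly as follows — the credential values here are invented placeholders, not anything from the repository:

```yaml
server_port: 8081
jenkins:
  url: "https://jenkins.avroid.tech"
  user: jenkins-bot        # placeholder, comes from Vault
  pass: hunter2            # placeholder, comes from Vault
  token: 0123456789abcdef  # placeholder, comes from Vault
allowed_webhooks:
  - repo_name: 'DevOps/ansible'
    run_jobs:
      - job_path: 'job/Automation/job/DevOps/job/vault-policies-and-roles-update'
        parameterized_job: false
  - repo_name: 'DevOps/jenkins-pipelines'
    run_jobs:
      - job_path: 'job/jobs-dsl/job/jobs-dsl'
        parameterized_job: true
```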


@@ -1,4 +1,4 @@
-# helm upgrade -n tavro-cloud-dev -f values.yaml -i valkey oci://registry-1.docker.io/bitnamicharts/valkey
+# helm upgrade -n tavro-cloud-dev -f values.yaml -i valkey oci://registry-1.docker.io/bitnamicharts/valkey --version 2.2.3
 image:
   tag: 8.0.2-debian-12-r0


@@ -18,12 +18,17 @@ metadata:
   app.kubernetes.io/managed-by: manual
 spec:
   hard:
+    configmaps: "100"
+    limits.cpu: "5"
-    limits.memory: 5Gi
-    requests.storage: "100Mi"
+    limits.memory: 13Gi
+    persistentvolumeclaims: "1"
+    pods: "100"
+    requests.cpu: "3"
-    requests.memory: "2Gi"
+    requests.memory: "10Gi"
+    requests.storage: "2Gi"
     resourcequotas: "1"
     secrets: "100"
     services: "100"
 ---
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy


@@ -0,0 +1,17 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: valkey-in
  namespace: tavro-cloud-test
  labels:
    app.kubernetes.io/managed-by: manual
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: valkey
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 6379
          protocol: TCP


@@ -0,0 +1,49 @@
# helm upgrade -n tavro-cloud-test -f values.yaml -i valkey oci://registry-1.docker.io/bitnamicharts/valkey --version 2.2.3
image:
  tag: 8.0.2-debian-12-r0
primary:
  service:
    type: NodePort
    nodePorts:
      valkey: 32379
  podAnnotations:
    vault.security.banzaicloud.io/vault-addr: https://vault.avroid.tech
    vault.security.banzaicloud.io/vault-path: avroid-office
    vault.security.banzaicloud.io/vault-role: tavro-cloud-test
    vault.security.banzaicloud.io/vault-skip-verify: "true"
  resources:
    limits:
      cpu: 300m
      memory: 4Gi
    requests:
      cpu: 100m
      memory: 4Gi
  persistence:
    enabled: false
  automountServiceAccountToken: true
  customReadinessProbe:
    exec:
      command:
        - valkey-cli
        - -a
        - "$VALKEY_PASSWORD"
        - ping
    initialDelaySeconds: 10
    periodSeconds: 5
  customLivenessProbe:
    exec:
      command:
        - valkey-cli
        - -a
        - "$VALKEY_PASSWORD"
        - ping
    initialDelaySeconds: 10
    periodSeconds: 5
replica:
  replicaCount: 0
auth:
  password: "vault:prj-tavro-cloud-services/data/databases/valkey/k8s/avroid-office/ns-tavro-cloud-test/valkey#VALKEY_PASSWD"