initial git commit saving configs

This commit is contained in:
Adrien
2026-03-31 15:30:40 +00:00
commit 7770e9859c
64 changed files with 2866 additions and 0 deletions

124
README.md Normal file

@@ -0,0 +1,124 @@
# Kubernetes Cluster Configuration
A comprehensive Helm-based Kubernetes cluster setup with multiple applications and services organized by function.
## 📁 Project Structure
### Core Infrastructure
#### **Cluster**
- Storage class configuration for persistent volumes
#### **Traefik** (`traefik/`)
- Ingress controller and reverse proxy
- Routes external traffic to internal services
- Helm values configuration included
#### **Shared Database** (`shared-db/`)
- Centralized PostgreSQL database instance
- Shared across multiple applications
- Persistent volume and claim configuration
- NodePort service for external access
### Applications
#### **Bitwarden** (`bitwarden/`)
- Password manager and secrets vault
- Full Helm chart with templates and customizable values
- Persistent storage configuration
#### **Vaultwarden** (`vaultwarden/`)
- Open-source Bitwarden alternative
- Complete Helm chart with deployment templates
- Ingress, service, and persistence configuration
#### **Gitea** (`gitea/`)
- Git hosting service
- Persistent volume and PostgreSQL backed
- Values configuration for customization
#### **Nextcloud** (`nextcloud/`)
- File sync, sharing, and collaboration platform
- Separate persistent volumes for data and PostgreSQL
- Notification push service included
- Custom ingress configuration
#### **Immich** (`immich/`)
- Photo and video backup service
- Sub-chart for PostgreSQL database management
- Master node persistent volume
- PostgreSQL and application storage
#### **Linkwarden Stack** (`linkwarden-stack/`)
- Link management and bookmarking service
- Complete Helm chart with ConfigMap, deployment, and ingress
- Persistent storage configuration
#### **Mumble** (`mumble/`)
- Voice communication and VoIP service
- Helm values for configuration
#### **Letsencrypt** (`letsencrypt/`)
- Automated SSL certificate provisioning
- Integrations with ingress controllers
### Observability & Monitoring
#### **Observability Stack** (`observability/`)
##### **Prometheus** (`observability/prometheus/`)
- Metrics collection and time-series database
- Custom storage class for performance
- Persistent volume configuration
##### **Loki** (`observability/loki/`)
- Log aggregation system
- Companion to Prometheus
- Dedicated storage configuration
##### **Grafana** (`observability/grafana/`)
- Metrics and logs visualization
- Loki backend for log exploration
- Dashboard and alerting capabilities
##### **Alloy** (`observability/alloy/`)
- Telemetry collection agent
- Data collection for Prometheus and Loki
## 🚀 Deployment
Each service is configured as a Helm chart with:
- `values.yaml` - Configuration and customization
- `Chart.yaml` - Chart metadata (where applicable)
- `templates/` - Kubernetes resource templates
- Persistent volume (PV) and persistent volume claim (PVC) for stateful services
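That layout can be checked mechanically: every directory holding a `values.yaml` is a chart you can point `helm install` at. A minimal sketch (the `/tmp/cluster-demo` paths are illustrative stand-ins for the repo root, not real files from this commit):

```shell
# Build a stand-in for the repo layout described above (illustrative paths).
mkdir -p /tmp/cluster-demo/gitea /tmp/cluster-demo/bitwarden/templates
touch /tmp/cluster-demo/gitea/values.yaml /tmp/cluster-demo/bitwarden/values.yaml

# List each values.yaml -- one per deployable chart.
find /tmp/cluster-demo -maxdepth 2 -name values.yaml | sort
```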
### Quick Start
```bash
# Add Helm repositories as needed
helm repo add <repo-name> <repo-url>
helm repo update
# Deploy a service
helm install <release-name> <chart-path> -f <chart-path>/values.yaml -n <namespace>
```
## 📝 Storage Configuration
All persistent services include:
- **pv-\*.yaml** - PersistentVolume definitions
- **pvc-\*.yaml** - PersistentVolumeClaim definitions
- Reference storage class configurations
## 🔗 Ingress Routes
Traefik handles ingress routing with:
- `ingress.yaml` templates in major services
- SSL termination via Letsencrypt
- Pretty hostname routing (e.g., `bitwarden.example.com`)
## 📚 Additional Resources
- [backup.md](backup.md) - Backup and recovery procedures
- Individual service notes in each subdirectory (notes.md, NOTES.md)

66
backup.md Normal file

@@ -0,0 +1,66 @@
## 3⃣ Create the backup script on the master node
Create `/usr/local/bin/backup-storage.sh`:
```
sudo nano /usr/local/bin/backup-storage.sh
```
```
#!/bin/bash
set -e

SRC="/storage/"
DEST="debian@192.168.1.30:/backup/master-storage"
SSH_KEY="/home/adrien/.ssh/id_backup"
LOG="/var/log/storage-backup.log"

echo "=== Backup started at $(date) ===" >> "$LOG"

rsync -aHAX --numeric-ids --delete \
  --link-dest=/backup/master-storage/latest \
  -e "ssh -i $SSH_KEY" \
  "$SRC" "$DEST/$(date +%F)/" >> "$LOG" 2>&1

ssh -i "$SSH_KEY" debian@192.168.1.30 \
  "ln -sfn /backup/master-storage/$(date +%F) /backup/master-storage/latest"

echo "=== Backup finished at $(date) ===" >> "$LOG"
```
### What this gives you
- Daily folders (`2025-01-14/`)
- A `latest` symlink pointing at the newest snapshot
- Efficient incremental backups (`--link-dest` hardlinks files unchanged since the last run)
- Deletions mirrored cleanly (`--delete`)
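The dated-folder plus `latest` mechanics can be sketched locally without rsync or ssh; this only demonstrates the directory and symlink layout the script maintains on the backup host:

```shell
# Simulate one backup cycle of the layout above (local sketch, no rsync/ssh).
BACKUP_ROOT=$(mktemp -d)
TODAY=$(date +%F)                                    # e.g. 2025-01-14
mkdir -p "$BACKUP_ROOT/$TODAY"                       # rsync fills today's snapshot dir
ln -sfn "$BACKUP_ROOT/$TODAY" "$BACKUP_ROOT/latest"  # then `latest` is repointed at it
readlink "$BACKUP_ROOT/latest"
```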
## 4⃣ Schedule the script with cron
```
sudo crontab -e
```
Add an entry to run the backup daily at 02:00:
```
0 2 * * * /usr/local/bin/backup-storage.sh
```
## 5⃣ Verify backups on vm-nfs
```
ls -lah /backup/master-storage
```
You should see:
```
2025-01-14/
latest -> 2025-01-14
```
## 6⃣ Restore (very important)
To restore everything:
```
rsync -aHAX /backup/master-storage/latest/ /storage/
```
To restore a single folder:
```
rsync -aHAX /backup/master-storage/2025-01-10/prometheus/ /storage/prometheus/
```

6
bitwarden/Chart.lock Normal file

@@ -0,0 +1,6 @@
dependencies:
- name: postgresql
  repository: https://charts.bitnami.com/bitnami
  version: 15.5.29
digest: sha256:e02780f5fb6cf25d49477b43986ea907d96df3167f5a398a34eedad988c841e7
generated: "2025-12-21T17:14:41.412181861Z"

11
bitwarden/Chart.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: v2
name: bitwarden-lite
description: Bitwarden Lite with Bitnami PostgreSQL subchart
type: application
version: 0.1.0
appVersion: "1.32.0"
dependencies:
  - name: postgresql
    version: 15.5.29
    repository: https://charts.bitnami.com/bitnami

Binary file not shown.

30
bitwarden/notes.md Normal file

@@ -0,0 +1,30 @@
# Bitwarden lite
https://bitwarden.com/help/install-and-deploy-lite
```
# Add the Bitnami repo and build the chart's dependencies
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm dependency build

# Install / upgrade, or remove the release
helm upgrade --install bitwarden . -f values.yaml -n bitwarden
helm delete bitwarden -n bitwarden

# Restart after config changes
kubectl -n bitwarden rollout restart deploy/bitwarden-lite

# Database credentials (referenced by postgresql.auth.existingSecret in values.yaml)
kubectl -n bitwarden create secret generic bitwarden-postgresql-auth \
  --from-literal=postgres-password='pwdBitwardenSqlStorage' \
  --from-literal=password='pwdBitwardenStorage'

# SMTP settings, injected via envFrom in the deployment
kubectl -n bitwarden create secret generic bitwarden-smtp \
  --from-literal=globalSettings__mail__smtp__host='smtp.gmail.com' \
  --from-literal=globalSettings__mail__smtp__ssl='starttls' \
  --from-literal=globalSettings__mail__smtp__username='adrcpp@gmail.com' \
  --from-literal=globalSettings__mail__smtp__password='agkp arhk yapp rafi' \
  --from-literal=globalSettings__mail__replyToEmail='adrcpp@gmail.com'

kubectl -n bitwarden get pods
```

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-bitwarden-data
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: bitwarden-data
  local:
    path: /storage/bitwarden
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-bitwarden-data
  namespace: bitwarden
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: bitwarden-data

View File

@@ -0,0 +1,30 @@
{{- define "bitwarden-lite.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "bitwarden-lite.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s" (include "bitwarden-lite.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "bitwarden-lite.labels" -}}
app.kubernetes.io/name: {{ include "bitwarden-lite.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | quote }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "bitwarden-lite.selectorLabels" -}}
app.kubernetes.io/name: {{ include "bitwarden-lite.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

View File

@@ -0,0 +1,53 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "bitwarden-lite.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "bitwarden-lite.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ include "bitwarden-lite.fullname" . }}
    spec:
      containers:
        - name: bitwarden
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
          env:
            - name: BW_DB_SERVER
              value: {{ .Values.database.host | quote }}
            - name: BW_DB_USERNAME
              value: {{ .Values.database.user | quote }}
            - name: BW_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.postgresql.auth.existingSecret }}
                  key: {{ .Values.postgresql.auth.secretKeys.userPasswordKey | quote }}
            - name: BW_DB_DATABASE
              value: {{ .Values.database.name | quote }}
            - name: BW_DB_PROVIDER
              value: "postgresql"
            - name: BW_DOMAIN
              value: {{ .Values.bitwarden.domain | quote }}
            - name: globalSettings__hibpApiKey
              value: {{ .Values.hibp.apiKey | quote }}
            - name: BW_INSTALLATION_ID
              value: {{ .Values.bitwarden.installation.id | quote }}
            - name: BW_INSTALLATION_KEY
              value: {{ .Values.bitwarden.installation.key | quote }}
          envFrom:
            - secretRef:
                name: bitwarden-smtp
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: {{ default (printf "%s-data" (include "bitwarden-lite.fullname" .)) .Values.persistence.existingClaim }}

View File

@@ -0,0 +1,55 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "bitwarden-lite.fullname" . }}
  labels:
    {{- include "bitwarden-lite.labels" . | nindent 4 }}
  {{- if .Values.ingress.annotations }}
  annotations:
    {{- toYaml .Values.ingress.annotations | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    - host: {{ .Values.bitwarden.domain | quote }}
      http:
        paths:
          - path: /
            {{- if semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion }}
            pathType: Prefix
            {{- end }}
            backend:
              service:
                name: {{ include "bitwarden-lite.fullname" . }}
                port:
                  number: {{ .Values.service.port }}
    {{- range .Values.ingress.extraHosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              service:
                name: {{ include "bitwarden-lite.fullname" . }}
                port:
                  number: {{ .Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "bitwarden-lite.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ include "bitwarden-lite.fullname" . }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: 8080

81
bitwarden/values.yaml Normal file

@@ -0,0 +1,81 @@
image:
  repository: ghcr.io/bitwarden/lite
  tag: "2025.12.0"
  pullPolicy: IfNotPresent

replicaCount: 1

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: traefik  # the ingress template reads ingress.className
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  hosts:
    - host: bitwarden.immich-ad.ovh
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: bitwarden-tls
      hosts:
        - bitwarden.immich-ad.ovh

# Persist bitwarden data (attachments, icon cache, etc.)
persistence:
  enabled: true
  existingClaim: pvc-bitwarden-data

bitwarden:
  # REQUIRED for secure cookies, web vault, etc.
  domain: "bitwarden.immich-ad.ovh"
  disableUserRegistration: false
  installation:
    id: "bca307eb-c177-4eb7-b6a6-b3ba0129ff3d"
    key: "x4FBfkK4f1wDCuXWQdX9"

# SMTP optional
smtp:
  enabled: false
  host: ""
  port: 587
  username: ""
  password:
    existingSecret: ""
    key: "SMTP_PASSWORD"
  from: ""

hibp:
  apiKey: ""

# Database config
database:
  name: bitwarden
  user: bitwarden
  # Service of the Bitnami subchart; assumes the release is named "bitwarden"
  # (fullname <release>-postgresql) -- adjust if you install under another name
  host: bitwarden-postgresql

# Bitnami PostgreSQL subchart values
postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: latest
  auth:
    username: bitwarden
    database: bitwarden
    # Upgrade-safe: point to an existing secret you create once
    existingSecret: bitwarden-postgresql-auth
    secretKeys:
      adminPasswordKey: postgres-password
      userPasswordKey: password
  primary:
    persistence:
      enabled: true
      existingClaim: pvc-bitwarden-data # bind to precreated PVC if you want; NOTE: same PVC as the app data volume above

View File

@@ -0,0 +1,9 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  # Optional: make it the default StorageClass for the cluster
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

34
gitea/NOTES.MD Normal file

@@ -0,0 +1,34 @@
## Install
```
kubectl create namespace gitea
helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update
helm upgrade --install gitea gitea-charts/gitea \
--namespace gitea \
-f values.yaml
```
## PV / PVC
```
kubectl create -f ./pv-gitea.yaml
kubectl create -f ./pvc-gitea.yaml
```
## Check
```
kubectl -n gitea get pods,pvc,ingress
kubectl -n gitea rollout status deploy/gitea
kubectl -n gitea get secret
```
## Show chart values - useful for overrides in values.yaml
```
helm show values gitea-charts/gitea | grep -A20 -B5 -i claim
helm show values gitea-charts/gitea | grep -A20 -B5 -i persistence
helm show values gitea-charts/gitea | grep -A20 -B5 -i storage
```
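The `-A`/`-B` flags print context lines around each match, so whole blocks like `persistence:` stay visible. A sketch of that workflow against a tiny stand-in values file (the file below is a made-up excerpt, not the real chart defaults):

```shell
# Fake excerpt of chart values, just to demonstrate the grep workflow.
cat > /tmp/gitea-values-demo.yaml <<'EOF'
persistence:
  enabled: true
  create: false
  claimName: pvc-gitea-data
  size: 20Gi
EOF

# -A4 prints 4 lines after each match, revealing the whole block.
grep -A4 -i persistence /tmp/gitea-values-demo.yaml
```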

22
gitea/pv-gitea.yaml Normal file

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gitea-data
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gitea-data
  local:
    path: /storage/gitea
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

12
gitea/pvc-gitea.yaml Normal file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gitea-data
  namespace: gitea
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: gitea-data

92
gitea/values.yaml Normal file

@@ -0,0 +1,92 @@
# gitea-values.yaml
image:
  rootless: true
strategy:
  type: Recreate
postgresql:
  enabled: false
postgresql-ha:
  enabled: false
valkey-cluster:
  enabled: false
redis-cluster:
  enabled: false
persistence:
  enabled: true
  create: false
  claimName: pvc-gitea-data
  size: 20Gi
  accessModes:
    - ReadWriteOnce
gitea:
  admin:
    username: giteaadmin
    password: "51aad51@#zé"
    email: "admin@immich-ad.ovh"
  config:
    server:
      DOMAIN: git.immich-ad.ovh
      ROOT_URL: https://git.immich-ad.ovh/
      SSH_DOMAIN: git.immich-ad.ovh
      PROTOCOL: http
      START_SSH_SERVER: false
    database:
      DB_TYPE: sqlite3
    service:
      DISABLE_REGISTRATION: true
      REQUIRE_SIGNIN_VIEW: false
      REGISTER_MANUAL_CONFIRM: false
    session:
      PROVIDER: memory
    cache:
      ADAPTER: memory
    queue:
      TYPE: level
service:
  http:
    type: ClusterIP
    port: 3000
  ssh:
    enabled: false
ingress:
  enabled: true
  className: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  hosts:
    - host: git.immich-ad.ovh
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: gitea-tls
      hosts:
        - git.immich-ad.ovh
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 1Gi
test:
  enabled: false

View File

@@ -0,0 +1,6 @@
apiVersion: v2
name: immich-postgres
description: CloudNativePG Cluster for Immich with VectorChord
type: application
version: 0.1.0
appVersion: "16"

View File

@@ -0,0 +1,43 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: {{ .Values.cluster.name }}
spec:
  instances: {{ .Values.cluster.instances }}
  storage:
    pvcTemplate:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: postgres-storage
      volumeMode: Filesystem
  imageName: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
  postgresql:
    shared_preload_libraries:
      - "vchord.so"
  # Optional: you can tweak resources, monitoring, etc. here.
  # resources:
  #   requests:
  #     cpu: 100m
  #     memory: 512Mi
  #   limits:
  #     cpu: 2
  #     memory: 2Gi
  bootstrap:
    initdb:
      database: {{ .Values.database.name }}
      owner: {{ .Values.database.user }}
      dataChecksums: true
      secret:
        name: {{ ternary .Values.database.existingSecret (printf "%s-app" .Values.cluster.name) (ne .Values.database.existingSecret "") }}
      postInitApplicationSQL:
        - ALTER USER {{ .Values.database.user }} WITH SUPERUSER;
        - CREATE EXTENSION vchord CASCADE;
        - CREATE EXTENSION earthdistance CASCADE;

View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.cluster.name }}-app
type: kubernetes.io/basic-auth
stringData:
  username: {{ .Values.database.user | quote }}
  password: {{ .Values.database.password | quote }}
  dbname: {{ .Values.database.name | quote }} # handy for Immich env, CNPG ignores this

View File

@@ -0,0 +1,16 @@
cluster:
  name: immich-postgres # also used for the services: immich-postgres-rw, -ro, ...
  instances: 1
storage:
  size: 10Gi
image:
  repository: ghcr.io/tensorchord/cloudnative-vectorchord
  tag: "16.9-0.4.3"
database:
  name: immich
  user: immich
  password: "change-me-immich" # for dev; in prod override via --set or external secret
  existingSecret: "" # leave empty to use the <cluster.name>-app secret rendered by this chart

47
immich/notes.md Normal file

@@ -0,0 +1,47 @@
## immich-postgres
A chart that deploys a CloudNativePG cluster specifically for Immich.
Namespace: `immich`
### Helm
```
helm install immich-postgres ./immich-postgres -n immich
helm delete immich-postgres -n immich
helm upgrade --install immich immich/immich -n immich -f values-immich.yaml
```
## PV:
```
kubectl get pvc -n immich
kubectl get pv
```
## Logs:
```
kubectl -n immich logs <pod> --prefix
```
## Monitoring:
```
kubectl -n immich get svc
kubectl -n immich get pods
kubectl -n immich describe pod <pod>
```
## Traefik ingress
https://doc.traefik.io/traefik/getting-started/kubernetes/
## cert manager in the cluster
https://www.slingacademy.com/article/how-to-set-up-ssl-with-lets-encrypt-in-kubernetes/
## Certificate:
```
kubectl -n immich get certificate
kubectl -n immich describe certificate immich-tls
kubectl -n immich get challenges
```

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-master-node
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /storage/immich-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

22
immich/pv-postgres.yaml Normal file

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgres
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: postgres-storage
  local:
    path: /storage/immich-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

12
immich/pvc-immich.yaml Normal file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-immich
  namespace: immich
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: local-storage

131
immich/values-immich.yaml Normal file

@@ -0,0 +1,131 @@
## This chart relies on the common library chart from bjw-s
## You can find it at https://github.com/bjw-s-labs/helm-charts/tree/common-4.3.0/charts/library/common
## Refer there for more detail about the supported values
controllers:
  main:
    containers:
      main:
        image:
          tag: v2.6.3
        env:
          REDIS_HOSTNAME: '{{ printf "%s-valkey" .Release.Name }}'
          IMMICH_MACHINE_LEARNING_URL: '{{ printf "http://%s-machine-learning:3003" .Release.Name }}'
          DB_HOSTNAME: "immich-postgres-rw"
          DB_PORT: "5432"
          # Database name matches what we set in the CNPG cluster
          DB_DATABASE_NAME: "immich"
          # Credentials: reuse the CNPG bootstrap secret
          DB_USERNAME:
            valueFrom:
              secretKeyRef:
                name: immich-postgres-app
                key: username
          DB_PASSWORD:
            valueFrom:
              secretKeyRef:
                name: immich-postgres-app
                key: password

immich:
  metrics:
    # Enabling this will create the service monitors needed to monitor immich with the prometheus operator
    enabled: false
  persistence:
    # Main data store for all photos shared between different components.
    library:
      # Automatically creating the library volume is not supported by this chart
      # You have to specify an existing PVC to use
      existingClaim: pvc-immich
  # configuration is immich-config.json converted to yaml
  # ref: https://immich.app/docs/install/config-file/
  #
  configuration:
    # trash:
    #   enabled: false
    #   days: 30
    storageTemplate:
      enabled: true
      template: "{{y}}/{{y}}-{{MM}}/{(unknown)}"

# Dependencies
valkey:
  enabled: true
  controllers:
    main:
      containers:
        main:
          image:
            repository: docker.io/valkey/valkey
            tag: 9.0-alpine@sha256:b4ee67d73e00393e712accc72cfd7003b87d0fcd63f0eba798b23251bfc9c394
            pullPolicy: IfNotPresent
  persistence:
    data:
      enabled: true
      size: 1Gi
      # Optional: Set this to persistentVolumeClaim to keep job queues persistent
      type: emptyDir
      accessMode: ReadWriteOnce
      storageClass: local-storage

# Immich components
server:
  enabled: true
  controllers:
    main:
      containers:
        main:
          image:
            repository: ghcr.io/immich-app/immich-server
            pullPolicy: IfNotPresent
  ingress:
    main:
      enabled: true
      ingressClassName: traefik
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
        traefik.ingress.kubernetes.io/router.entrypoints: websecure
        traefik.ingress.kubernetes.io/proxy-body-size: "0"
      hosts:
        - host: immich-ad.ovh
          paths:
            - path: /
              pathType: Prefix
      tls:
        - hosts:
            - immich-ad.ovh
          secretName: immich-tls
  service:
    main:
      type: ClusterIP
      ports:
        http:
          port: 2283
          targetPort: 2283

machine-learning:
  enabled: true
  controllers:
    main:
      containers:
        main:
          image:
            repository: ghcr.io/immich-app/immich-machine-learning
            pullPolicy: IfNotPresent
          env:
            TRANSFORMERS_CACHE: /cache
            HF_XET_CACHE: /cache/huggingface-xet
            MPLCONFIGDIR: /cache/matplotlib-config
  persistence:
    cache:
      enabled: true
      size: 10Gi
      # Optional: Set this to persistentVolumeClaim to avoid downloading the ML models every start.
      type: emptyDir
      accessMode: ReadWriteMany
      # storageClass: your-class

15
letsencrypt/values.yaml Normal file

@@ -0,0 +1,15 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: adrcpp@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: traefik

18
mumble/value.yaml Normal file

@@ -0,0 +1,18 @@
# https://artifacthub.io/packages/helm/syntaxerror404/mumble
persistence:
  enabled: false
config:
  welcometext: "Welcome to our Mumble server!"
  registerName: "Apex Legend forever bronze"
  users: "10"
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    memory: 128Mi
service:
  type: NodePort

57
nextcloud/NOTES.md Normal file

@@ -0,0 +1,57 @@
## Config
https://github.com/nextcloud/helm/blob/main/charts/nextcloud/README.md
```
kubectl top pods --all-namespaces
helm repo add nextcloud https://nextcloud.github.io/helm/
helm install nextcloud nextcloud/nextcloud -f values.yaml -n nextcloud
helm upgrade --install nextcloud nextcloud/nextcloud -f values.yaml -n nextcloud
helm delete nextcloud -n nextcloud
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- \
php occ maintenance:mode --on
kubectl exec -it -n nextcloud deploy/nextcloud -c nextcloud -- bash
```
Instance URL: `nextcloud.immich-ad.ovh/`
## PV / PVC
```
kubectl create -f ./pv-postgres.yaml
kubectl create -f ./pvc-nextcloud.yaml
```
## Service
```
kubectl -n nextcloud get svc
kubectl -n nextcloud get pods
```
## Certificates
```
kubectl -n nextcloud get certificate
kubectl -n nextcloud describe certificate nextcloud-tls
kubectl -n nextcloud get challenges
```
## Updates:
```
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ status
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ maintenance:mode
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ upgrade
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ maintenance:repair
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ db:add-missing-indices
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ db:add-missing-columns
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ db:add-missing-primary-keys
kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ maintenance:mode --off
```
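Those occ steps always run in the same order, so wrapping them in a small function can help. A sketch, dry-run by default (`nc_upgrade` is our name, not Nextcloud's; set `OCC` to the `kubectl exec … php occ` prefix above to execute for real):

```shell
# Dry-run by default: OCC just echoes the occ invocations.
# For real: OCC='kubectl exec -n nextcloud deploy/nextcloud -c nextcloud -- php occ'
OCC="${OCC:-echo php occ}"

nc_upgrade() {
  $OCC maintenance:mode --on
  $OCC upgrade
  $OCC maintenance:repair
  $OCC db:add-missing-indices
  $OCC db:add-missing-columns
  $OCC db:add-missing-primary-keys
  $OCC maintenance:mode --off
}

nc_upgrade
```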

View File

@@ -0,0 +1,24 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: notify-push
  namespace: nextcloud
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - nextcloud.immich-ad.ovh
      secretName: nextcloud-tls
  rules:
    - host: nextcloud.immich-ad.ovh
      http:
        paths:
          - path: /push
            pathType: Prefix
            backend:
              service:
                name: notify-push
                port:
                  number: 7867

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: notify-push
  namespace: nextcloud
spec:
  selector:
    app: notify-push
  ports:
    - name: http
      port: 7867
      targetPort: 7867

View File

@@ -0,0 +1,92 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notify-push
  namespace: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: notify-push
  template:
    metadata:
      labels:
        app: notify-push
    spec:
      initContainers:
        - name: fetch-notify-push
          image: alpine:3.21
          command: ["sh", "-lc"]
          args:
            - |
              set -eu
              apk add --no-cache curl
              VER="1.3.0"
              URL="https://github.com/nextcloud/notify_push/releases/download/v${VER}/notify_push-aarch64-unknown-linux-musl"
              echo "Downloading $URL"
              curl -fsSL "$URL" -o /shared/notify_push
              chmod +x /shared/notify_push
              /shared/notify_push --help | head -n 5
          volumeMounts:
            - name: shared
              mountPath: /shared
      containers:
        - name: notify-push
          image: alpine:3.21
          command: ["/shared/notify_push"]
          args:
            - "--port"
            - "7867"
          ports:
            - name: http
              containerPort: 7867
          env:
            # Nextcloud
            - name: NEXTCLOUD_URL
              value: "https://nextcloud.immich-ad.ovh"
          envFrom:
            - secretRef:
                name: notify-push-db
            - secretRef:
                name: notify-push-redis
          # # Redis
          # - name: REDIS_HOST
          #   value: "nextcloud-redis-master"
          # - name: REDIS_PASSWORD
          #   valueFrom:
          #     secretKeyRef:
          #       name: nextcloud-redis
          #       key: redis-password
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: 7867
          #   initialDelaySeconds: 10
          #   periodSeconds: 10
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: 7867
          #   initialDelaySeconds: 30
          #   periodSeconds: 20
          volumeMounts:
            - name: shared
              mountPath: /shared
            - name: nextcloud-data
              mountPath: /nextcloud
              readOnly: true
      volumes:
        - name: shared
          emptyDir: {}
        - name: nextcloud-data
          persistentVolumeClaim:
            claimName: pvc-nextcloud-data

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nextcloud-data
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nextcloud-data
  local:
    path: /storage/nextcloud
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nextcloud-postgres
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nextcloud-postgres-storage
  local:
    path: /storage/nextcloud-postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nextcloud-data
  namespace: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: nextcloud-data

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nextcloud-postgres
namespace: nextcloud
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: nextcloud-postgres-storage

868
nextcloud/values.yaml Normal file

@@ -0,0 +1,868 @@
global:
  image:
    # -- if set it will overwrite all registry entries
    registry:
  security:
    # required for bitnamilegacy repos
    allowInsecureImages: true

## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
  registry: docker.io
  repository: library/nextcloud
  flavor: apache
  # default is generated by flavor and appVersion
  tag: 33.0.1-apache
  pullPolicy: IfNotPresent
  # pullSecrets:
  #   - myRegistrKeySecretName

nameOverride: ""
fullnameOverride: ""
podAnnotations: {}
podLabels: {}
deploymentAnnotations: {}
deploymentLabels: {}

# Number of replicas to be deployed
replicaCount: 1

## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  className: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/proxy-body-size: "0"
    # HSTS
    traefik.ingress.kubernetes.io/headers.customResponseHeaders.Strict-Transport-Security: "max-age=15552000; includeSubDomains; preload"
  hosts:
    - host: nextcloud.immich-ad.ovh
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - nextcloud.immich-ad.ovh
      secretName: nextcloud-tls
  labels: {}
  path: /
  pathType: Prefix

# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# lifecycle:
#   postStartCommand: []
#   preStopCommand: []

phpClientHttpsFix:
  enabled: false
  protocol: https

nextcloud:
  host: nextcloud.immich-ad.ovh
  username: admin
  password: changeme
  ## Use an existing secret
  existingSecret:
    enabled: false
    # secretName: nameofsecret
    usernameKey: nextcloud-username
    passwordKey: nextcloud-password
    tokenKey: ""
    smtpUsernameKey: smtp-username
    smtpPasswordKey: smtp-password
    smtpHostKey: smtp-host
  update: 0
  # If web server is not binding default port, you can define it
  containerPort: 80
  datadir: /var/www/html/data
  persistence:
    subPath:
  # if set, we'll template this list to the NEXTCLOUD_TRUSTED_DOMAINS env var
  trustedDomains: ["nextcloud.nextcloud.svc.cluster.local", "nextcloud.immich-ad.ovh", "nextcloud", "localhost"]
  ## SMTP configuration
  mail:
    enabled: true
    # the user we send email as
    fromAddress: admin
    # the domain we send email from
    domain: immich-ad.ovh
    smtp:
      host: ssl0.ovh.net
      secure: starttls
      port: 587
      authtype: LOGIN
      name: 'admin@immich-ad.ovh'
      password: ',3FV\]Knv_AqC'
  ## PHP Configuration files
  # Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
  phpConfigs:
    zzz-memory.ini: |
      memory_limit = 1024M
      max_execution_time = 360
      upload_max_filesize = 2G
      post_max_size = 2G
    opcache.ini: |
      opcache.enable=1
      opcache.memory_consumption=256
      opcache.interned_strings_buffer=32
      opcache.max_accelerated_files=20000
      opcache.revalidate_freq=60
      opcache.save_comments=1
      opcache.fast_shutdown=1
  ## Default config files that utilize environment variables:
  # see: https://github.com/nextcloud/docker/tree/master#auto-configuration-via-environment-variables
  # IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
  # Default configurations can be found here: https://github.com/nextcloud/docker/tree/master/.config
  defaultConfigs:
    # To protect /var/www/html/config
    .htaccess: true
    # Apache configuration for rewrite urls
    apache-pretty-urls.config.php: true
    # Define APCu as local cache
    apcu.config.php: true
    # Apps directory configs
    apps.config.php: true
    # Used for auto configure database
    autoconfig.php: true
    # Redis default configuration
    redis.config.php: |-
      <?php
      $CONFIG = [
        'memcache.locking' => '\OC\Memcache\Redis',
        'memcache.local' => '\OC\Memcache\APCu',
        'redis' => [
          'host' => 'nextcloud-redis-master',
          'port' => 6379,
          'password' => 'StrongRedisPass',
          'timeout' => 1.5,
        ],
      ];
    # Reverse proxy default configuration
    reverse-proxy.config.php: true
    # S3 Object Storage as primary storage
    s3.config.php: true
    # SMTP default configuration via environment variables
    smtp.config.php: true
    # Swift Object Storage as primary storage
    swift.config.php: true
    # disables the web based updater as the default nextcloud docker image does not support it
    upgrade-disable-web.config.php: true
    # -- imaginary support config
    imaginary.config.php: false
  # Extra config files created in /var/www/html/config/
  # ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
  configs:
    audit.config.php: |-
      <?php
      $CONFIG = array (
        'log_type_audit' => 'syslog',
        'syslog_tag_audit' => 'Nextcloud',
        'logfile_audit' => '',
      );
  # For example, to enable image and text file previews:
  # previews.config.php: |-
  #   <?php
  #   $CONFIG = array (
  #     'enable_previews' => true,
  #     'enabledPreviewProviders' => array (
  #       'OC\Preview\Movie',
  #       'OC\Preview\PNG',
  #       'OC\Preview\JPEG',
  #       'OC\Preview\GIF',
  #       'OC\Preview\BMP',
  #       'OC\Preview\XBitmap',
  #       'OC\Preview\MP3',
  #       'OC\Preview\MP4',
  #       'OC\Preview\TXT',
  #       'OC\Preview\MarkDown',
  #       'OC\Preview\PDF'
  #     ),
  #   );
  # Hooks for auto configuration
  # Here you could write small scripts which are placed in `/docker-entrypoint-hooks.d/<hook-name>/helm.sh`
  # ref: https://github.com/nextcloud/docker?tab=readme-ov-file#auto-configuration-via-hook-folders
  hooks:
    pre-installation:
    post-installation:
    pre-upgrade:
    post-upgrade:
    before-starting:
  ## Strategy used to replace old pods
## IMPORTANT: use with care, it is suggested to leave it as-is for upgrade purposes
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
type: Recreate
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
##
## Extra environment variables
extraEnv:
- name: OVERWRITEPROTOCOL
value: https
- name: OVERWRITECLIURL
value: https://nextcloud.immich-ad.ovh
- name: TRUSTED_PROXIES
value: "10.244.0.0/16"
# Extra init containers that run before the pod starts.
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# Extra sidecar containers.
extraSidecarContainers: []
# - name: nextcloud-logger
# image: busybox
# command: [/bin/sh, -c, 'while ! test -f "/run/nextcloud/data/nextcloud.log"; do sleep 1; done; tail -n+1 -f /run/nextcloud/data/nextcloud.log']
# volumeMounts:
# - name: nextcloud-data
# mountPath: /run/nextcloud/data
# Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
# to NextCloud pods in Kubernetes. This can then be configured in External Storage
extraVolumes:
# - name: nfs
# nfs:
# server: "10.0.0.1"
# path: "/nextcloud_data"
# readOnly: false
extraVolumeMounts:
# - name: nfs
# mountPath: "/legacy_data"
# Set securityContext parameters for the nextcloud CONTAINER only (will not affect nginx container).
# For example, you may need to define runAsNonRoot directive
securityContext: {}
# runAsUser: 33
# runAsGroup: 33
# runAsNonRoot: true
# readOnlyRootFilesystem: false
# Set securityContext parameters for the entire pod. For example, you may need to define runAsNonRoot directive
podSecurityContext: {}
# runAsUser: 33
# runAsGroup: 33
# runAsNonRoot: true
# readOnlyRootFilesystem: false
# Settings for the MariaDB init container
mariaDbInitContainer:
resources: {}
# Set mariadb initContainer securityContext parameters. For example, you may need to define runAsNonRoot directive
securityContext: {}
# Settings for the PostgreSQL init container
postgreSqlInitContainer:
resources: {}
# Set postgresql initContainer securityContext parameters. For example, you may need to define runAsNonRoot directive
securityContext: {}
# -- priority class for nextcloud.
# Overrides .Values.priorityClassName
priorityClassName: ""
##
## External database configuration
##
externalDatabase:
enabled: true
type: postgresql
host: nextcloud-postgresql # service name of subchart (default)
#user: nextcloud
#database: nextcloud
#password: "MyStrongPass123"
existingSecret:
enabled: true
secretName: nextcloud-db
passwordKey: password
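# For reference, the secret referenced above can be created ahead of time like
# this (namespace and password value are assumptions, adjust to your setup):
#   kubectl -n nextcloud create secret generic nextcloud-db \
#     --from-literal=password='<your-db-password>'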
##
## PostgreSQL chart configuration
## for more options see https://github.com/bitnami/charts/tree/main/bitnami/postgresql
##
postgresql:
enabled: true
image:
registry: docker.io
repository: bitnamilegacy/postgresql
global:
postgresql:
# global.postgresql.auth overrides postgresql.auth
#auth:
# username: nextcloud
# password: "MyStrongPass123"
# database: nextcloud
auth:
#username: nextcloud
#database: nextcloud
existingSecret: nextcloud-postgresql
primary:
resources:
requests:
memory: 512Mi
limits:
memory: 1Gi
persistence:
enabled: true
# Use an existing Persistent Volume Claim (must be created ahead of time)
existingClaim: pvc-nextcloud-postgres
storageClass: nextcloud-postgres-storage
##
## Collabora chart configuration
## for more options see https://github.com/CollaboraOnline/online/tree/master/kubernetes/helm/collabora-online
##
collabora:
enabled: true
# url in admin should be: https://collabora.immich-ad.ovh
collabora:
## HTTPS nextcloud domain, if needed
aliasgroups:
- host: https://nextcloud.immich-ad.ovh:443
securityContext:
privileged: true
env:
# We terminate TLS at Traefik, so Collabora must not try to do HTTPS itself
- name: DONT_GEN_SSL_CERT
value: "true"
# Tell Collabora which Nextcloud URL is allowed to use it
- name: aliasgroup1
value: https://nextcloud.immich-ad.ovh:443
# set extra parameters for collabora
# you may need to add --o:ssl.termination=true
extra_params: >
--o:ssl.enable=false
--o:ssl.termination=true
## Specify server_name when the hostname is not directly reachable, for
# example behind a reverse proxy. Example: collabora.domain
server_name: null
existingSecret:
# set to true to get collabora admin credentials from an existing secret
# if set, ignores collabora.collabora.username and password
enabled: false
# name of existing Kubernetes Secret with collabora admin credentials
secretName: ""
usernameKey: "username"
passwordKey: "password"
# setup admin login credentials, these are ignored if
# collabora.collabora.existingSecret.enabled=true
password: examplepass
username: admin
# setup ingress
ingress:
enabled: true
className: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/proxy-body-size: "0"
traefik.ingress.kubernetes.io/router.tls: "true"
hosts:
- host: collabora.immich-ad.ovh
paths:
- path: /
pathType: Prefix
tls:
- hosts:
- collabora.immich-ad.ovh
secretName: collabora-tls
# see collabora helm README.md for recommended values
resources: {}
readinessProbe:
enabled: true
path: /hosting/discovery
port: 9980
scheme: HTTP
initialDelaySeconds: 40
periodSeconds: 20
timeoutSeconds: 5
failureThreshold: 6
livenessProbe:
enabled: true
path: /hosting/discovery
port: 9980
scheme: HTTP
initialDelaySeconds: 60
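# Once deployed, the WOPI discovery endpoint (the same path the probes use)
# can be checked through the ingress (hostname as configured above):
#   curl -fsS https://collabora.immich-ad.ovh/hosting/discovery | head -n 5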
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#cron
##
cronjob:
enabled: true
# Either 'sidecar' or 'cronjob'
type: sidecar
# Runs crond as a sidecar container in the Nextcloud pod
# Note: crond requires root
sidecar:
## Cronjob sidecar resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
##
resources: {}
# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# lifecycle:
# postStartCommand: []
# preStopCommand: []
# Set securityContext parameters. For example, you may need to define runAsNonRoot directive
securityContext: {}
# runAsUser: 33
# runAsGroup: 33
# runAsNonRoot: true
# readOnlyRootFilesystem: true
# The command the cronjob container executes.
command:
- /cron.sh
# Uses a Kubernetes CronJob to execute the Nextcloud cron tasks
# Note: can run as non-root user. Should run as same user as the Nextcloud pod.
cronjob:
# Use a CronJob instead of crond sidecar container
# crond does not work when not running as root user
# Note: requires `persistence.enabled=true`
schedule: "*/5 * * * *"
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 5
# -- Additional labels for cronjob
labels: {}
# -- Additional labels for cronjob pod
podLabels: {}
annotations: {}
backoffLimit: 1
affinity: {}
# RWO volumes are common, but the CronJob pod needs access to the same volume as the Nextcloud pod.
# Depending on your provider, two pods on the same node can still access the same volume.
# The following config ensures that the CronJob pod is scheduled on the same node as the Nextcloud pod.
# affinity:
# podAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: app.kubernetes.io/name
# operator: In
# values:
# - nextcloud
# - key: app.kubernetes.io/component
# operator: In
# values:
# - app
# topologyKey: kubernetes.io/hostname
## Resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
##
resources: {}
# -- priority class for the cron job.
# Overrides .Values.priorityClassName
priorityClassName: ""
# Set securityContext parameters. For example, you may need to define runAsNonRoot directive
securityContext: {}
# runAsUser: 33
# runAsGroup: 33
# runAsNonRoot: true
# readOnlyRootFilesystem: true
# The command to run in the cronjob container
# Example to increase the memory limit: php -d memory_limit=2G ...
command:
- php
- -f
- /var/www/html/cron.php
- --
- --verbose
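# The same task can be run manually for debugging (the deployment name is an
# assumption based on the release name):
#   kubectl -n nextcloud exec deploy/nextcloud -- php -f /var/www/html/cron.php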
service:
type: ClusterIP
port: 8080
loadBalancerIP: ""
nodePort:
# -- use additional annotation on service for nextcloud
annotations: {}
# -- Set this to "ClientIP" to make sure that connections from the same client
# are passed to the same Nextcloud pod each time.
sessionAffinity: ""
sessionAffinityConfig: {}
## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
existingClaim: pvc-nextcloud-data
storageClass: nextcloud-data
## Use an additional pvc for the data directory rather than a subpath of the default PVC
## Useful to store data on a different storageClass (e.g. on slower disks)
nextcloudData:
enabled: false
subPath:
labels: {}
annotations: {}
# storageClass: "-"
# existingClaim:
accessMode: ReadWriteOnce
size: 8Gi
redis:
enabled: true
architecture: standalone
auth:
enabled: true
password: "StrongRedisPass"
master:
persistence:
enabled: false
size: 1Gi
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# resources:
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- Priority class for pods. This is the _default_
# priority class for pods created by this deployment - it may be
# overridden by more specific instances of priorityClassName -
# e.g. cronjob.cronjob.priorityClassName
priorityClassName: ""
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 20
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 50
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 30
successThreshold: 1
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
cputhreshold: 60
minPods: 1
maxPods: 10
nodeSelector: {}
tolerations: []
# -- Nextcloud pod topologySpreadConstraints
topologySpreadConstraints: []
affinity: {}
dnsConfig: {}
# Custom dns config for Nextcloud containers.
# You can for example configure ndots. This may be needed in some clusters with alpine images.
# options:
# - name: ndots
# value: "1"
imaginary:
# -- Start Imaginary
enabled: false
# -- Number of imaginary pod replicas to deploy
replicaCount: 1
image:
# -- Imaginary image registry
registry: docker.io
# -- Imaginary image name
repository: h2non/imaginary
# -- Imaginary image tag
tag: 1.2.4
# -- Imaginary image pull policy
pullPolicy: IfNotPresent
# -- Imaginary image pull secrets
pullSecrets: []
# -- Additional annotations for imaginary
podAnnotations: {}
# -- Additional labels for imaginary
podLabels: {}
# -- Imaginary pod nodeSelector
nodeSelector: {}
# -- Imaginary pod tolerations
tolerations: []
# -- Imaginary pod topologySpreadConstraints
topologySpreadConstraints: []
# -- imaginary resources
resources: {}
# -- priority class for imaginary.
# Overrides .Values.priorityClassName
priorityClassName: ""
# -- Optional security context for the Imaginary container
securityContext:
runAsUser: 1000
runAsNonRoot: true
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# -- Optional security context for the Imaginary pod (applies to all containers in the pod)
podSecurityContext: {}
# runAsNonRoot: true
# seccompProfile:
# type: RuntimeDefault
readinessProbe:
enabled: true
failureThreshold: 3
successThreshold: 1
periodSeconds: 10
timeoutSeconds: 1
livenessProbe:
enabled: true
failureThreshold: 3
successThreshold: 1
periodSeconds: 10
timeoutSeconds: 1
service:
# -- Imaginary: Kubernetes Service type
type: ClusterIP
# -- Imaginary: LoadBalancerIp for service type LoadBalancer
loadBalancerIP:
# -- Imaginary: NodePort for service type NodePort
nodePort:
# -- Additional annotations for service imaginary
annotations: {}
# -- Additional labels for service imaginary
labels: {}
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
replicaCount: 1
# Optional: becomes NEXTCLOUD_SERVER env var in the nextcloud-exporter container.
# Without it, we will use the full name of the nextcloud service
server: ""
# The metrics exporter needs to know whether you serve Nextcloud over http or https
https: false
# Use API token if set, otherwise fall back to password authentication
# https://github.com/xperimental/nextcloud-exporter#token-authentication
# Currently you still need to set the token manually in your nextcloud install
token: ""
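# The token can be set inside Nextcloud with occ (deployment name is an
# assumption; you may need to run occ as the www-data user):
#   kubectl -n nextcloud exec deploy/nextcloud -- \
#     php occ config:app:set serverinfo token --value "<your-token>"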
timeout: 5s
# if set to true, the exporter skips certificate verification of the Nextcloud server.
tlsSkipVerify: false
info:
# Optional: becomes NEXTCLOUD_INFO_APPS env var in the nextcloud-exporter container.
# Enables gathering of apps-related metrics. Defaults to false
apps: false
update: false
image:
registry: docker.io
repository: xperimental/nextcloud-exporter
tag: 0.8.0
pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistryKeySecretName
## Metrics exporter resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
##
resources: {}
# -- Metrics exporter pod Annotation
podAnnotations: {}
# -- Metrics exporter pod Labels
podLabels: {}
# -- Metrics exporter pod nodeSelector
nodeSelector: {}
# -- Metrics exporter pod tolerations
tolerations: []
# -- Metrics exporter pod affinity
affinity: {}
service:
type: ClusterIP
# Use serviceLoadBalancerIP to request a specific static IP,
# otherwise leave blank
loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9205"
labels: {}
# -- security context for the metrics CONTAINER in the pod
securityContext:
runAsUser: 1000
runAsNonRoot: true
# allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# -- security context for the metrics POD
podSecurityContext: {}
# runAsNonRoot: true
# seccompProfile:
# type: RuntimeDefault
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
## @param metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator
##
enabled: false
## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
##
namespace: ""
## @param metrics.serviceMonitor.namespaceSelector The selector of the namespace where the target service is located (defaults to the release namespace)
namespaceSelector:
## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus.
##
jobLabel: ""
## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped
# ref: https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.Endpoint
##
interval: 30s
## @param metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
# ref: https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.Endpoint
##
scrapeTimeout: ""
## @param metrics.serviceMonitor.labels Extra labels for the ServiceMonitor
##
labels: {}
rules:
# -- Deploy Prometheus Rules (Alerts) for the exporter
# @section -- Metrics
enabled: false
# -- Label on Prometheus Rules CRD Manifest
# @section -- Metrics
labels: {}
defaults:
# -- Add Default Rules
# @section -- Metrics
enabled: true
# -- Label on the rules (the severity is already set)
# @section -- Metrics
labels: {}
# -- Filter on metrics on alerts (default just for this helm-chart)
# @section -- Metrics
filter: ""
# -- Add own Rules to Prometheus Rules
# @section -- Metrics
additionalRules: []
# -- Allows users to inject additional Kubernetes manifests (YAML) to be rendered with the release.
# Can be either a list or a map.
# If a map, each key is the name of a manifest.
# If a list, each item is a manifest, either a string (YAML block) or a YAML object. Example:
# extraManifests:
# - |
# apiVersion: traefik.containo.us/v1alpha1
# kind: Middleware
# metadata:
# name: my-middleware
# spec:
# ...
# - |
# apiVersion: traefik.containo.us/v1alpha1
# kind: IngressRoute
# metadata:
# name: my-ingressroute
# spec:
# ...
# Or as a map:
# extraManifests:
# my-middleware:
# apiVersion: traefik.containo.us/v1alpha1
# kind: Middleware
# metadata:
# name: my-middleware
# spec:
# ...
# my-ingressroute:
# apiVersion: traefik.containo.us/v1alpha1
# kind: IngressRoute
# metadata:
# name: my-ingressroute
# spec:
# ...
extraManifests: []

---
controller:
type: daemonset
alloy:
configMap:
create: true
content: |
logging {
level = "info"
}
loki.write "default" {
endpoint {
url = "http://loki.observability.svc.cluster.local:3100/loki/api/v1/push"
}
}
// discovery.kubernetes allows you to find scrape targets from Kubernetes resources.
// It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.
discovery.kubernetes "pod" {
role = "pod"
// Restrict to pods on this node to reduce CPU & memory usage
selectors {
role = "pod"
field = "spec.nodeName=" + coalesce(sys.env("HOSTNAME"), constants.hostname)
}
}
// discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.
// If no rules are defined, then the input targets are exported as-is.
discovery.relabel "pod_logs" {
targets = discovery.kubernetes.pod.targets
// Label creation - "namespace" field from "__meta_kubernetes_namespace"
rule {
source_labels = ["__meta_kubernetes_namespace"]
action = "replace"
target_label = "namespace"
}
// Label creation - "pod" field from "__meta_kubernetes_pod_name"
rule {
source_labels = ["__meta_kubernetes_pod_name"]
action = "replace"
target_label = "pod"
}
// Label creation - "container" field from "__meta_kubernetes_pod_container_name"
rule {
source_labels = ["__meta_kubernetes_pod_container_name"]
action = "replace"
target_label = "container"
}
// Label creation - "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
rule {
source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
action = "replace"
target_label = "app"
}
// Label creation - "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
// Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name
rule {
source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
action = "replace"
target_label = "job"
separator = "/"
replacement = "$1"
}
// Label creation - "__path__" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
// Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log
rule {
source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
action = "replace"
target_label = "__path__"
separator = "/"
replacement = "/var/log/pods/*$1/*.log"
}
// Label creation - "container_runtime" field from "__meta_kubernetes_pod_container_id"
// e.g. a "containerd://..." ID yields container_runtime="containerd"
rule {
source_labels = ["__meta_kubernetes_pod_container_id"]
action = "replace"
target_label = "container_runtime"
regex = "^(\\S+):\\/\\/.+$"
replacement = "$1"
}
}
// loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.
loki.source.kubernetes "pod_logs" {
targets = discovery.relabel.pod_logs.output
forward_to = [loki.process.pod_logs.receiver]
}
// loki.process receives log entries from other Loki components, applies one or more processing stages,
// and forwards the results to the list of receivers in the component's arguments.
loki.process "pod_logs" {
stage.static_labels {
values = {
cluster = "master",
}
}
forward_to = [loki.write.default.receiver]
}
extraVolumes:
- name: varlog
hostPath:
path: /var/log
extraVolumeMounts:
- name: varlog
mountPath: /var/log
readOnly: true
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 300m
memory: 256Mi

---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-loki-data
spec:
capacity:
storage: 20Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: loki-data
local:
path: /storage/loki
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master
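# This local PV expects the directory to exist on the "master" node before the
# pod is scheduled, e.g. (run on that node):
#   mkdir -p /storage/loki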

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-loki-data
namespace: observability
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: loki-data

---
adminUser: admin
adminPassword: "admin" # or use an existingSecret
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 300m
memory: 512Mi
persistence:
enabled: true
storageClassName: loki-data
existingClaim: pvc-loki-data
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://loki.observability.svc.cluster.local:3100
isDefault: true
ingress:
enabled: true
ingressClassName: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
traefik.ingress.kubernetes.io/router.entrypoints: websecure
hosts:
- grafana.immich-ad.ovh
tls:
- secretName: grafana-tls
hosts:
- grafana.immich-ad.ovh

---
apiVersion: v1
kind: PersistentVolume
metadata:
name: storage-loki-0
spec:
capacity:
storage: 20Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
local:
path: /storage/loki-data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master

---
# Simplest storage for a homelab: filesystem + PVC (works well for small volumes)
loki:
auth_enabled: false
commonConfig:
replication_factor: 1
storage:
type: filesystem
schemaConfig:
configs:
- from: "2024-01-01"
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: loki_index_
period: 24h
limits_config:
retention_period: 14d
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 768Mi
persistence:
enabled: true
size: 10Gi
deploymentMode: SingleBinary
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
singleBinary:
replicas: 1
promtail:
enabled: false
prometheus:
enabled: false
canary:
enabled: false
gateway:
enabled: false
results_cache:
enabled: false
chunks_cache:
enabled: false
memcached:
enabled: false
memberlist:
service:
enabled: false

# observability/notes.md
```
helm upgrade --install grafana grafana/grafana -n observability -f values.yaml
helm delete grafana -n observability
helm upgrade --install loki grafana/loki -n observability -f values.yaml
helm delete loki -n observability
helm upgrade --install alloy grafana/alloy -n observability -f values.yaml
helm delete alloy -n observability
helm upgrade --install kps prometheus-community/kube-prometheus-stack \
-n observability -f values.yaml
helm delete kps -n observability
kubectl get pods -n observability
kubectl -n observability describe pod loki-0
kubectl logs -n observability loki-0 --tail=200
```
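The Alloy config derives a `container_runtime` label from the container ID with the regex `^(\S+)://.+$`; the same extraction can be sanity-checked locally (container ID is illustrative):

```shell
# Mimic the Alloy relabel rule: capture everything before "://" in a container ID.
echo 'containerd://0123abcdef' | sed -E 's|^([^[:space:]]+)://.+$|\1|'
# prints: containerd
```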

---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-prometheus-data
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: prometheus-data
local:
path: /storage/prometheus
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: prometheus-data
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

---
grafana:
enabled: false # you already run Grafana
alertmanager:
enabled: false # keep it light (enable later if you want)
prometheus:
prometheusSpec:
replicas: 1
retention: 7d
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 1Gi
storageSpec:
volumeClaimTemplate:
spec:
storageClassName: prometheus-data
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi

---
apiVersion: v1
kind: Service
metadata:
name: shared-postgres-nodeport
namespace: db
spec:
type: NodePort
selector:
cnpg.io/cluster: shared-postgres
role: primary
ports:
- name: postgres
port: 5432
targetPort: 5432
nodePort: 30432
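# From outside the cluster the primary is then reachable on any node, e.g.
# (node IP and psql client availability are assumptions):
#   psql "host=<node-ip> port=30432 user=admin dbname=dbtest"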

# shared-db/postgres.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: shared-postgres
namespace: db
spec:
instances: 1
storage:
pvcTemplate:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
storageClassName: pvc-shared-postgres
volumeMode: Filesystem
imageName: ghcr.io/cloudnative-pg/postgresql:16
bootstrap:
initdb:
database: dbtest
owner: admin
secret:
name: shared-postgres-app
postInitApplicationSQL:
- ALTER USER admin WITH SUPERUSER;
- CREATE EXTENSION vector;

---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-shared-postgres
spec:
capacity:
storage: 100Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: shared-postgres
local:
path: /storage/shared-postgres
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-shared-postgres
namespace: db
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
storageClassName: shared-postgres

# traefik/values.yaml
ingressRoute:
dashboard:
enabled: true
matchRule: Host(`dashboard.localhost`)
entryPoints:
- web
providers:
kubernetesGateway:
enabled: true
gateway:
listeners:
web:
namespacePolicy:
from: All

# vaultwarden/Chart.lock
dependencies:
- name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 15.5.29
digest: sha256:e02780f5fb6cf25d49477b43986ea907d96df3167f5a398a34eedad988c841e7
generated: "2025-12-21T17:14:41.412181861Z"

# vaultwarden/Chart.yaml
apiVersion: v2
name: vaultwarden
description: Vaultwarden with Bitnami PostgreSQL subchart
type: application
version: 0.1.0
appVersion: "1.32.0"
dependencies:
- name: postgresql
version: 15.5.29
repository: https://charts.bitnami.com/bitnami

# vaultwarden/notes.md
# vaultwarden lite
https://vaultwarden.com/help/install-and-deploy
https://github.com/dani-garcia/vaultwarden/wiki/Using-the-PostgreSQL-Backend
```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm dependency build
helm upgrade --install vaultwarden . -f values.yaml -n vaultwarden
helm delete vaultwarden -n vaultwarden
kubectl -n vaultwarden rollout restart deploy/vaultwarden
kubectl -n vaultwarden create secret generic vaultwarden-postgresql-auth \
--from-literal=postgres-password='pwdvaultwardenSqlStorage' \
--from-literal=password='pwdvaultwardenStorage'
kubectl -n vaultwarden create secret generic vaultwarden-db-url \
--from-literal=DATABASE_URL='postgresql://vaultwarden:pwdvaultwardenStorage@vaultwarden-postgresql:5432/vaultwarden'
kubectl -n vaultwarden create secret generic vaultwarden-smtp \
--from-literal=SMTP_HOST='ssl0.ovh.net' \
--from-literal=SMTP_PORT='587' \
--from-literal=SMTP_SECURITY='starttls' \
--from-literal=SMTP_USERNAME='admin@immich-ad.ovh' \
--from-literal=SMTP_PASSWORD=',3FV\]Knv_AqC' \
--from-literal=SMTP_FROM='admin@immich-ad.ovh'
kubectl -n vaultwarden get pods
```
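The `DATABASE_URL` above is just the Postgres credentials assembled into a connection string; if the password ever contains URL-special characters it must be percent-encoded first. A quick sketch of the composition (values taken from the commands above):

```shell
# Rebuild the connection string from its parts; compare with the literal above.
user=vaultwarden
pass=pwdvaultwardenStorage   # must be percent-encoded if it contains : / @ etc.
host=vaultwarden-postgresql
db=vaultwarden
echo "postgresql://${user}:${pass}@${host}:5432/${db}"
# prints: postgresql://vaultwarden:pwdvaultwardenStorage@vaultwarden-postgresql:5432/vaultwarden
```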

---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-vaultwarden-data
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: vaultwarden-data
local:
path: /storage/vaultwarden
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-vaultwarden-data
namespace: vaultwarden
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: vaultwarden-data

---
{{- define "vaultwarden.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "vaultwarden.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s" (include "vaultwarden.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "vaultwarden.labels" -}}
app.kubernetes.io/name: {{ include "vaultwarden.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | quote }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "vaultwarden.selectorLabels" -}}
app.kubernetes.io/name: {{ include "vaultwarden.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "vaultwarden.fullname" . }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "vaultwarden.fullname" . }}
template:
metadata:
labels:
app: {{ include "vaultwarden.fullname" . }}
spec:
containers:
- name: vaultwarden
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
env:
- name: ADMIN_TOKEN
value: {{ .Values.vaultwarden.adminToken | quote }}
- name: SIGNUPS_ALLOWED
value: {{ .Values.vaultwarden.signupAllowed | quote }}
envFrom:
- secretRef:
name: vaultwarden-smtp # SMTP secret
- secretRef:
name: vaultwarden-db-url # Database URL secret
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ default (printf "%s-data" (include "vaultwarden.fullname" .)) .Values.persistence.existingClaim }}

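The `claimName` line uses Helm's `default`: the chart-generated `<fullname>-data` name is used only when `persistence.existingClaim` is empty. A shell sketch of that precedence, using `${var:-fallback}` as the analogue:

```shell
# Mirrors Helm's: default (printf "%s-data" fullname) .Values.persistence.existingClaim
fullname="vaultwarden"
existing_claim=""                      # as if unset in values.yaml
printf '%s\n' "${existing_claim:-${fullname}-data}"   # vaultwarden-data
existing_claim="pvc-vaultwarden-data"  # the value actually set in values.yaml
printf '%s\n' "${existing_claim:-${fullname}-data}"   # pvc-vaultwarden-data
```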
#### **Ingress** template

```yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "vaultwarden.fullname" . }}
  labels:
    {{- include "vaultwarden.labels" . | nindent 4 }}
  {{- if .Values.ingress.annotations }}
  annotations:
    {{- toYaml .Values.ingress.annotations | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    - host: {{ .Values.vaultwarden.domain | quote }}
      http:
        paths:
          - path: /
            {{- if semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion }}
            pathType: Prefix
            {{- end }}
            backend:
              service:
                name: {{ include "vaultwarden.fullname" . }}
                port:
                  number: {{ .Values.service.port }}
    {{- range .Values.ingress.extraHosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              service:
                name: {{ include "vaultwarden.fullname" . }}
                port:
                  number: {{ .Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}
```

#### **Service** template

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "vaultwarden.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ include "vaultwarden.fullname" . }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: http  # route to the container's named port rather than a hardcoded number
```

#### **Values** (`vaultwarden/values.yaml`)

```yaml
image:
  repository: docker.io/vaultwarden/server
  tag: 1.35.3
  pullPolicy: IfNotPresent

replicaCount: 1

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: traefik  # the ingress template reads .Values.ingress.className
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  hosts:
    - host: vaultwarden.immich-ad.ovh
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: vaultwarden-tls
      hosts:
        - vaultwarden.immich-ad.ovh

# Persist vaultwarden data (attachments, icon cache, etc.)
persistence:
  enabled: true
  existingClaim: pvc-vaultwarden-data

vaultwarden:
  # REQUIRED for secure cookies, web vault, etc.
  domain: "vaultwarden.immich-ad.ovh"
  signupAllowed: false
  adminToken: "x4FBfkK4f1wDCuXWQdX9"  # consider sourcing this from a Secret instead of plaintext values

  # Database config
  database:
    name: vaultwarden
    user: vaultwarden

# Bitnami PostgreSQL subchart values
postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: latest
  auth:
    username: vaultwarden
    database: vaultwarden
    # Upgrade-safe: point to an existing secret you create once
    existingSecret: vaultwarden-postgresql-auth
    secretKeys:
      adminPasswordKey: postgres-password
      userPasswordKey: password
  primary:
    persistence:
      enabled: true
      # NOTE: this is the same ReadWriteOnce claim as the app data volume above;
      # give PostgreSQL its own dedicated PVC so two pods don't contend for one volume
      existingClaim: pvc-vaultwarden-data
```
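One caveat with the values above: `tag: latest` on the PostgreSQL subchart can silently jump to a new major version when a node re-pulls the image, and a data directory initialized by an older major is then unreadable. A sketch of a pinned override (the version number here is illustrative, not from the original config; pin whichever major your data directory was initialized with):

```yaml
postgresql:
  image:
    # Pin the major version; "latest" can jump majors on a re-pull.
    tag: "16"
```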