Compare commits

...

21 Commits

Author | SHA1 | Message | CI status | Date
Evrard Van Espen | 6e417e4ba4 | Update firewalld commands | Build and deploy / Build (push): successful in 4m35s | 2025-11-27 11:03:03 +00:00
Evrard Van Espen | 301d8a2f07 | Add post about my homelab | Build and deploy / Build (push): successful in 2m41s | 2025-11-23 11:57:54 +00:00
Evrard Van Espen | 2feb0d6747 | Add ability to handle svg files | (no checks) | 2025-11-23 11:57:41 +00:00
(author not shown) | 397159563d | Merge pull request 'Add CI' (#1) from testing-ci into main (Reviewed-on: #1) | Build and deploy / Build (push): successful in 2m14s | 2025-11-20 20:28:20 +00:00
Evrard Van Espen | dd1b3accb4 | DONE | Build and deploy / Build (push): successful in 2m19s | 2025-11-20 20:24:33 +00:00
Evrard Van Espen | 7968ceed33 | DONE | Build and deploy / Build (push): successful in 2m20s | 2025-11-20 20:20:44 +00:00
Evrard Van Espen | 54ffd24610 | DONE | Build and deploy / Build (push): successful in 1m52s | 2025-11-20 20:08:45 +00:00
Evrard Van Espen | ad55d7018d | DONE | Build and deploy / Build (push): failing after 2m12s | 2025-11-20 20:06:28 +00:00
Evrard Van Espen | c696efbac3 | DONE | Build and deploy / Build (push): successful in 2m7s | 2025-11-20 19:56:09 +00:00
Evrard Van Espen | e2eeafacc8 | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 2m13s | 2025-11-20 19:50:39 +00:00
Evrard Van Espen | 604633fe7f | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 2m6s | 2025-11-20 19:40:15 +00:00
Evrard Van Espen | 00da78dd87 | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 1m50s | 2025-11-20 19:34:46 +00:00
Evrard Van Espen | bea4784dc6 | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 2m6s | 2025-11-20 19:30:46 +00:00
Evrard Van Espen | ce09964bc1 | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 2m5s | 2025-11-20 19:11:04 +00:00
Evrard Van Espen | 374909497a | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 2m7s | 2025-11-20 19:03:43 +00:00
Evrard Van Espen | 1e1f25f3ff | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 32s | 2025-11-20 18:57:39 +00:00
Evrard Van Espen | d05ba9903c | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 27s | 2025-11-20 18:56:52 +00:00
Evrard Van Espen | 4f49fcfa48 | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): successful in 27s | 2025-11-20 18:56:03 +00:00
Evrard Van Espen | e86e7287be | Testing | Gitea Actions Demo / Explore-Gitea-Actions (push): failing after 28s | 2025-11-20 18:55:18 +00:00
Evrard Van Espen | 5fb92608d2 | Testing | (no checks) | 2025-11-20 18:55:08 +00:00
gitea | 1ff1987e63 | Validating repository write permission | (no checks) | 2025-11-20 18:08:16 +00:00
10 changed files with 799 additions and 54 deletions


@@ -1,35 +1,36 @@
-name: Gitea Actions Demo
+name: Build and deploy
-run-name: ${{ gitea.actor }} is testing out Gitea Actions 🚀
+run-name: 🚀
 on: [push]
 jobs:
-  Explore-Gitea-Actions:
+  Build:
     runs-on: ubuntu-latest
     steps:
-      # - name: Login to container registry
-      #   uses: https://github.com/docker/login-action@v3
-      #   with:
-      #     registry: https://git.vanespen.dev
-      #     username: ${{ secrets.USERNAME }}
-      #     password: ${{ secrets.PASSWORD }}
+      - name: Login to container registry
+        uses: https://github.com/docker/login-action@v3
+        with:
+          registry: https://git.vanespen.dev
+          username: ${{ secrets.USERNAME }}
+          password: ${{ secrets.PASSWORD }}
       - name: Check out repository code
         uses: actions/checkout@v4
-      # - name: Set up Docker Buildx
-      #   uses: https://github.com/docker/setup-buildx-action@v3
-      # - name: Build and push
-      #   uses: https://github.com/docker/build-push-action@v6
-      #   with:
-      #     context: .
-      #     push: true
-      #     pull: true
-      #     no-cache: true
-      #     tags: "git.vanespen.dev/evanespen/blog:latest"
+      - name: Set up Docker Buildx
+        uses: https://github.com/docker/setup-buildx-action@v3
+      - name: Build and push
+        uses: https://github.com/docker/build-push-action@v6
+        with:
+          context: .
+          push: true
+          pull: true
+          no-cache: true
+          tags: "git.vanespen.dev/evanespen/blog:latest"
       - name: Setup Kubectl
         run: |
-          env
           mkdir ~/.kube
-          echo "$KUBECONFIG" > ~/.kube/config
-          cat ~/.kube/config
-          # curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
-          # install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
-          # /usr/local/bin/kubectl apply --validate=false -f argo.yaml
+          echo '${{ secrets.KUBECONFIG }}' > ~/.kube/config
+          export COMMIT_REF=$(git rev-parse HEAD)
+          echo $COMMIT_REF
+          curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+          install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
+          sed -i "s/COMMIT_REF/$COMMIT_REF/g" argo.template.yaml
+          /usr/local/bin/kubectl apply --validate=false -f argo.template.yaml

argo.template.yaml (new file, 43 lines)

@@ -0,0 +1,43 @@
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: blog
  namespace: argocd
spec:
  description: Project for the blog application
  sourceRepos:
    - https://git.vanespen.dev/evanespen/blog
  destinations:
    - namespace: blog
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"
  syncWindows: []
  roles: []
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-argo
  namespace: argocd
spec:
  project: blog
  source:
    repoURL: "https://git.vanespen.dev/evanespen/blog"
    targetRevision: COMMIT_REF
    path: "k8s"
  destination:
    server: "https://kubernetes.default.svc"
    namespace: blog
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true


@@ -1,18 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-argo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: "https://git.vanespen.dev/evanespen/blog"
    path: "k8s"
    targetRevision: developer
  destination:
    server: "https://kubernetes.default.svc"
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

k8s/deploy.yaml (new file, 47 lines)

@@ -0,0 +1,47 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-pod
  namespace: blog
  labels:
    app: blog-pod
spec:
  containers:
    - name: blog-container
      image: git.vanespen.dev/evanespen/blog:latest
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: blog-service
  namespace: blog
spec:
  selector:
    app: blog-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: blog-ingressroute
  namespace: blog
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`vanespen.dev`)
      kind: Rule
      services:
        - name: blog-service
          port: 80
  tls:
    certResolver: letsencrypt_dns


@@ -1,10 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: blog-pod
spec:
  containers:
    - name: blog-container
      image: git.vanespen.dev/evanespen/blog:latest
      ports:
        - containerPort: 80


@@ -21,7 +21,7 @@ func copyMedias() error {
 	}
 	filepath.WalkDir("posts/", func(s string, d fs.DirEntry, err error) error {
-		if filepath.Ext(s) == ".jpg" || filepath.Ext(s) == ".jpeg" || filepath.Ext(s) == ".png" || filepath.Ext(s) == ".mp4" {
+		if filepath.Ext(s) == ".jpg" || filepath.Ext(s) == ".jpeg" || filepath.Ext(s) == ".png" || filepath.Ext(s) == ".mp4" || filepath.Ext(s) == ".svg" {
 			newPath := strings.ReplaceAll(s, "posts/", "build/medias/")
 			if _, err := os.Stat(newPath); err == nil {

posts/homelab.jpg (new binary file, 1.5 MiB)

posts/homelab.org (new file, 674 lines)

@@ -0,0 +1,674 @@
#+TITLE: My homelab
#+DATE: 2025-11-23T00:00:00Z
#+DRAFT: false
#+AUTHOR: Evrard Van Espen
#+DESCRIPTION: Documentation about my homelab (Kubernetes cluster)
#+SLUG: homelab
#+TAGS: system, linux, homelab, kubernetes
#+HERO: homelab.jpg
* Role of the /homelab/
Whether it is for trying out new technologies, automating deployments or mastering /DevOps/ tools, a /homelab/ is an ideal playground.
My /homelab/ lets me experiment freely, without fear of breaking a production environment.
It is a learning space where every mistake becomes a lesson and every success another skill.
For system administrators and /DevOps/ enthusiasts, having such a lab at home is a concrete way to improve, to innovate and to stay on top of current /IT/ practices.
Here is how mine is organised and what it brings me day to day.
* The machine
My /homelab/ consists of a single /Fedora/ machine with:
- a /Ryzen 5 1600X/ (6 physical cores, 12 threads);
- 64 GB of RAM;
- a 500 GB /SSD/ for the system;
- an 8 TB /RAID/ 10 array for everything else.
* Architecture
To give myself as much freedom as possible, /Incus/ is installed on the /Fedora/ machine; it lets me create virtual machines and containers so that experiments never run directly on the host itself.
Three of these virtual machines matter most: they host the /Kubernetes/ /cluster/.
#+ATTR_HTML: :style width: 50%
[[machine.drawio.svg]]
** Ancillary services
An /NFS/ server also runs on the host machine to provide storage to /Kubernetes/; we will come back to this later.
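For reference, the host side is just a standard /NFS/ export. A minimal sketch (the export path and subnet below are illustrative placeholders, not my actual values):
#+BEGIN_SRC
# /etc/exports on the host -- illustrative path and subnet
/srv/nfs/kubernetes 10.1.1.0/24(rw,sync,no_subtree_check,no_root_squash)
#+END_SRC
After editing the file, =exportfs -ra= reloads the exports.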
* Setting up the /K8s/ /cluster/
** Creating the virtual machines (by hand)
Create a new /Incus/ project for /Kubernetes/
#+BEGIN_SRC
incus project create kubernetes
incus project switch kubernetes
#+END_SRC
Create a new profile for the /Kubernetes/ nodes
#+BEGIN_SRC
incus profile create kubenode
#+END_SRC
#+BEGIN_SRC yaml
name: kubenode
description: Profile for kubernetes cluster node
project: kubernetes
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  security.nesting: "true"
  security.privileged: "true"
  limits.cpu: "4"
  limits.memory: "6GiB"
  cloud-init.vendor-data: |
    #cloud-config
    users:
      - name: kubeadmin
        gecos: kubeadmin
        sudo: ALL=(ALL) NOPASSWD:ALL
        groups: wheel, root
        lock_passwd: false
        ssh_authorized_keys:
          - ssh-ed25519 ... evrardve@hostname
        passwd: "<linux password hash>"
    packages:
      - openssh-server
    runcmd:
      - systemctl enable --now sshd
      - systemctl restart sshd
#+END_SRC
This profile factors out configuration shared by the virtual machines that will make up the /K8s/ /cluster/, such as the amount of RAM, the number of /CPUs/ and a /cloud-init/ block.
The /cloud-init/ block configures the VM's admin user and installs the /ssh/ server.
#+BEGIN_WARNING
Do not forget the =#cloud-config= comment at the top, otherwise /cloud-init/ will ignore the configuration!
#+END_WARNING
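One way to load this YAML into the profile (assuming it is saved locally as =kubenode.yaml=, a hypothetical file name) is to pipe it into =incus profile edit=:
#+BEGIN_SRC
incus profile edit kubenode < kubenode.yaml
#+END_SRC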
Then create the three virtual machines
#+BEGIN_SRC
incus launch images:fedora/43/cloud kube-main \
--vm \
--profile kubenode \
--project kubernetes \
--device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.100
incus launch images:fedora/43/cloud kube-worker1 \
--vm \
--profile kubenode \
--project kubernetes \
--device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.101
incus launch images:fedora/43/cloud kube-worker2 \
--vm \
--profile kubenode \
--project kubernetes \
--device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.102
incus start kube-main
incus start kube-worker1
incus start kube-worker2
#+END_SRC
** Creating the virtual machines (with /Open Tofu/)
[[https://git.vanespen.dev/evanespen/infra-k8s/src/branch/main/tofu/main.tf][Source on /git/]]
#+BEGIN_SRC tf
terraform {
  required_providers {
    incus = {
      source  = "lxc/incus"
      version = "0.3.1"
    }
  }
}

provider "incus" {
}

resource "incus_project" "kubernetes" {
  name        = "kubernetes"
  description = "Kubernetes project"
  config = {
    "features.storage.volumes" = false
    "features.images"          = false
    "features.profiles"        = false
    "features.storage.buckets" = false
  }
}

locals {
  ssh_public_key = trimspace(file("~/.ssh/id_ed25519.pub"))
}

locals {
  kubeadmin_password_hash = trimspace(file("./kubeadmin_password_hash"))
}

data "template_file" "cloud_init" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    ssh_public_key = local.ssh_public_key
  }
}

resource "incus_profile" "kubenode" {
  name        = "kubenode"
  project     = "kubernetes"
  description = "Kubernetes lab node"
  depends_on = [
    incus_project.kubernetes
  ]
  config = {
    "security.nesting"    = "true"
    "security.privileged" = "true"
    "limits.cpu"          = "4"
    "limits.memory"       = "6GiB"
    "limits.memory.swap"  = "false"
    "boot.autostart"      = "true"
    "cloud-init.vendor-data" = templatefile(
      "${path.module}/files/cloud-init.yaml",
      { ssh_public_key = local.ssh_public_key, kubeadmin_password_hash = local.kubeadmin_password_hash }
    )
  }
  device {
    name = "eth0"
    type = "nic"
    properties = {
      network = "incusbr0"
      name    = "eth0"
    }
  }
  device {
    name = "root"
    type = "disk"
    properties = {
      pool = "default"
      path = "/"
    }
  }
}

resource "incus_instance" "kube-main" {
  name     = "kube-main"
  type     = "virtual-machine"
  image    = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name
  depends_on = [
    incus_profile.kubenode
  ]
  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.100"
    }
  }
}

resource "incus_instance" "kube-worker1" {
  name     = "kube-worker1"
  type     = "virtual-machine"
  image    = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name
  depends_on = [
    incus_profile.kubenode
  ]
  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.101"
    }
  }
}

resource "incus_instance" "kube-worker2" {
  name     = "kube-worker2"
  type     = "virtual-machine"
  image    = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name
  depends_on = [
    incus_profile.kubenode
  ]
  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.102"
    }
  }
}
#+END_SRC
* Installing /Kubernetes/
I installed /Kubernetes/ with an /Ansible/ /playbook/.
#+BEGIN_WARNING
/SELinux/ must be disabled on the virtual machines so that /K8s/ can manage their /IPTables/ rules.
#+END_WARNING
#+BEGIN_WARNING
/SELinux/ must be disabled on the host machine so that /K8s/ can create volumes using the /NFS/ /storage class/.
#+END_WARNING
** Base installation
[[https://git.vanespen.dev/evanespen/infra-k8s/src/branch/main/ansible/01_install.yaml][Source on /git/]]
#+BEGIN_SRC yaml
- name: Install kubernetes
  become: true
  hosts: incus-k8s-nodes
  tasks:
    - name: Disable SELinux
      ansible.posix.selinux:
        state: disabled
    - name: Install nfs-utils
      ansible.builtin.dnf:
        name: nfs-utils
        state: present
        update_cache: true
    - name: Check if firewalld is installed
      ansible.builtin.command:
        cmd: rpm -q firewalld
      failed_when: false
      changed_when: false
      register: firewalld_check
    - name: Disable firewall
      ansible.builtin.systemd_service:
        name: firewalld
        state: stopped
        enabled: false
        masked: true
      when: firewalld_check.rc == 0
    - name: Install iptables and iproute-tc
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
        update_cache: true
      loop:
        - iptables
        - iproute-tc
    - name: Configure network
      block:
        - name: Configure kernel modules
          ansible.builtin.copy:
            src: files/etc_modules-load.d_k8s.conf
            dest: /etc/modules-load.d/k8s.conf
            owner: root
            group: root
            mode: "0644"
        - name: Enable overlay and br_netfilter module
          community.general.modprobe:
            name: "{{ item }}"
            state: present
          loop:
            - overlay
            - br_netfilter
        - name: Configure sysctl
          ansible.posix.sysctl:
            name: "{{ item.key }}"
            value: "{{ item.value }}"
            state: present
            reload: true
          loop:
            - { key: net.bridge.bridge-nf-call-iptables, value: 1 }
            - { key: net.bridge.bridge-nf-call-ip6tables, value: 1 }
            - { key: net.ipv4.ip_forward, value: 1 }
    - name: Install kubernetes
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
      loop:
        - cri-o1.34
        - kubernetes1.34
        - kubernetes1.34-kubeadm
        - kubernetes1.34-client
    - name: Start and enable cri-o
      ansible.builtin.systemd_service:
        name: crio
        state: started
        enabled: true
    - name: Start and enable kubelet
      ansible.builtin.systemd_service:
        name: kubelet
        state: started
        enabled: true
    - name: Check if kubeadm_init_result.txt exists on kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.stat:
        path: /root/kubeadm_init_result.txt
      register: kubeadm_init_file_check
      failed_when: false
    - name: Run init command
      when: inventory_hostname == "kube-main" and kubeadm_init_file_check.stat.exists == false
      ansible.builtin.shell:
        cmd: "kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock > /root/kubeadm_init_result.txt"
      register: kubeadm_init_result
      changed_when: kubeadm_init_result.rc == 0
      failed_when: kubeadm_init_result.rc != 0
    - name: AFTER INIT -- Check if kubeadm_init_result.txt exists on kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.stat:
        path: /root/kubeadm_init_result.txt
      register: kubeadm_init_file_check
    - name: Read init result file content
      when: inventory_hostname == "kube-main" and kubeadm_init_file_check.stat.exists == true
      ansible.builtin.command:
        cmd: cat /root/kubeadm_init_result.txt
      register: kubeadm_init_file_content
    - name: Retrieve kubeadm_init_file_content for other tasks
      ansible.builtin.set_fact:
        kubeadm_init_file_content: "{{ kubeadm_init_file_content }}"
      run_once: true
      delegate_to: localhost
    - name: Set join command from file content
      ansible.builtin.set_fact:
        join_command: >-
          {{
            (kubeadm_init_file_content.stdout_lines[-2] +
             kubeadm_init_file_content.stdout_lines[-1])
            | to_json()
            | replace("\\", '')
            | replace("\t", '')
            | replace('"', '')
          }}
    - name: Display join command on worker nodes
      when: inventory_hostname in ["kube-worker1", "kube-worker2"]
      ansible.builtin.debug:
        var: join_command
    - name: Check if kubeadm join was already run
      when: inventory_hostname in ["kube-worker1", "kube-worker2"]
      ansible.builtin.stat:
        path: /var/log/kubeadm_join.log
      register: kubeadm_join_file_check
    - name: Join worker nodes to the cluster
      when: inventory_hostname in ["kube-worker1", "kube-worker2"] and kubeadm_join_file_check.stat.exists == false
      ansible.builtin.command:
        cmd: "{{ join_command }} >> /var/log/kubeadm_join.log"
      register: kubeadm_join_result
      changed_when: kubeadm_join_result.rc == 0
      failed_when: kubeadm_join_result.rc != 0
    - name: Create .kube directory on localhost
      ansible.builtin.file:
        path: ~/.kube
        state: directory
        mode: "0755"
    - name: Fetch admin.conf from kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.fetch:
        src: /etc/kubernetes/admin.conf
        dest: ~/.kube/config
        flat: true
#+END_SRC
** Installing the network overlay and /NFS/ storage
[[https://git.vanespen.dev/evanespen/infra-k8s/src/branch/main/ansible/02_post_install.yaml][Source on /git/]]
#+BEGIN_SRC yaml
- name: Post install
  hosts: localhost
  vars_files:
    - config/config_vars.yaml
  tasks:
    - name: Apply network overlay
      delegate_to: localhost
      kubernetes.core.k8s:
        state: present
        src: https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
    - name: Add CSI driver helm repo
      delegate_to: localhost
      kubernetes.core.helm_repository:
        name: nfs-subdir-external-provisioner
        repo_url: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    - name: Install CSI driver
      delegate_to: localhost
      kubernetes.core.helm:
        name: nfs-subdir-external-provisioner
        chart_ref: nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
        update_repo_cache: true
        create_namespace: false
        release_namespace: kube-system
        values:
          storageClass:
            name: nfs-csi
            defaultClass: true
          nfs:
            server: "{{ nfs.server }}"
            path: "{{ nfs.path }}"
#+END_SRC
** Installing /Traefik/
[[https://git.vanespen.dev/evanespen/infra-k8s/src/branch/main/ansible/03_setup_traefik.yaml][Source on /git/]]
This step installs /Traefik/.
It is a /reverse proxy/ that supports /HTTP(S)/ and /TCP/, with automatic /SSL/ certificate generation.
I chose to use the /letsencrypt/ "/DNS/" challenge.
#+BEGIN_SRC yaml
# traefik_ovh_secrets.template.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: ovh-api-credentials
  namespace: traefik
type: Opaque
data:
  OVH_ENDPOINT: "{{ ovh_creds.ovh_endpoint | b64encode }}"
  OVH_APPLICATION_KEY: "{{ ovh_creds.ovh_application_key | b64encode }}"
  OVH_APPLICATION_SECRET: "{{ ovh_creds.ovh_application_secret | b64encode }}"
  OVH_CONSUMER_KEY: "{{ ovh_creds.ovh_consumer_key | b64encode }}"
#+END_SRC
#+BEGIN_SRC yaml
# traefik.values.yaml
---
persistence:
  enabled: true
  size: 1G
ports:
  web:
    exposedPort: 80
    nodePort: 30080
  websecure:
    exposedPort: 443
    nodePort: 30443
    tls:
      enabled: true
  ssh:
    port: 2222
    expose:
      default: true
    exposedPort: 2222
    nodePort: 30022
    protocol: TCP
service:
  type: NodePort
ingressRoute:
  dashboard:
    enabled: true
    matchRule: Host(`traefik.kube-main.lab`)
    entryPoints:
      - web
providers:
  kubernetesCRD:
    allowExternalNameServices: true
  kubernetesGateway:
    enabled: true
gateway:
  listeners:
    web:
      namespacePolicy:
        from: All
certificatesResolvers:
  letsencrypt_dns_stag:
    acme:
      email: "{{ email }}"
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      storage: "/data/acme_dns_stag.json"
      dnsChallenge:
        provider: ovh
        delayBeforeCheck: 0
  letsencrypt_dns:
    acme:
      email: "{{ email }}"
      storage: "/data/acme_dns.json"
      dnsChallenge:
        provider: ovh
        delayBeforeCheck: 0
env:
  - name: OVH_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_ENDPOINT
  - name: OVH_APPLICATION_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_APPLICATION_KEY
  - name: OVH_APPLICATION_SECRET
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_APPLICATION_SECRET
  - name: OVH_CONSUMER_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_CONSUMER_KEY
logs:
  general:
    level: INFO
#+END_SRC
#+BEGIN_SRC yaml
# playbook.yaml
- name: Setup Traefik
  vars_files:
    - secrets/traefik_secrets.yaml
  hosts:
    - localhost
  tasks:
    - name: Create Traefik namespace
      delegate_to: localhost
      kubernetes.core.k8s:
        name: traefik
        api_version: v1
        kind: Namespace
        state: present
    - name: Add Traefik chart repo
      delegate_to: localhost
      kubernetes.core.helm_repository:
        name: traefik
        repo_url: "https://traefik.github.io/charts"
    - name: Setup Traefik config map for OVH DNS
      delegate_to: localhost
      kubernetes.core.k8s:
        template: files/traefik_ovh_secret.template.yaml
        state: present
    - name: Setup Traefik
      delegate_to: localhost
      kubernetes.core.helm:
        name: traefik
        chart_ref: traefik/traefik
        update_repo_cache: true
        create_namespace: true
        release_namespace: traefik
        values: "{{ lookup('template', 'files/traefik_values.template.yaml') | from_yaml }}"
#+END_SRC
This /playbook/ installs /Traefik/ with /HTTP/, /HTTPS/ and /TCP/ entry points.
The /HTTP/ and /HTTPS/ entry points will expose the /web/ services deployed in the /cluster/.
The /TCP/ entry point will be used by the /git/ instance that will be deployed in the /cluster/ (for /git/ over /SSH/).
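As an illustration of how that /TCP/ entry point would later be consumed (this is not part of the playbook above; the service name and namespace are hypothetical), an =IngressRouteTCP= attached to the =ssh= entry point would look roughly like this:
#+BEGIN_SRC yaml
# Hypothetical example -- not deployed by the playbook above
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: git-ssh
  namespace: git
spec:
  entryPoints:
    - ssh
  routes:
    - match: HostSNI(`*`)
      services:
        - name: gitea-ssh
          port: 22
#+END_SRC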
* Network forwarding
The network now has to be configured so that the services deployed in the /cluster/ are reachable from outside.
/Traefik/ is configured to expose ports =30080=, =30443= and =30022= on the /cluster/ machines.
However, my virtual machines are not directly reachable from my local network, so traffic has to go through the host machine before reaching the virtual machine.
[[machines_reseau.drawio.svg]]
To do this I used the following commands:
#+BEGIN_SRC bash
firewall-cmd --zone=trusted --add-forward-port=port=8080:proto=tcp:toport=30080:toaddr=10.1.1.100 --permanent
firewall-cmd --zone=trusted --add-forward-port=port=8443:proto=tcp:toport=30443:toaddr=10.1.1.100 --permanent
firewall-cmd --zone=trusted --add-forward-port=port=30022:proto=tcp:toport=30022:toaddr=10.1.1.100 --permanent
firewall-cmd --reload
firewall-cmd --zone=FedoraServer --add-forward-port=port=30080:proto=tcp:toport=30080:toaddr=10.1.1.100 --permanent
firewall-cmd --zone=FedoraServer --add-forward-port=port=30443:proto=tcp:toport=30443:toaddr=10.1.1.100 --permanent
firewall-cmd --zone=FedoraServer --add-forward-port=port=30022:proto=tcp:toport=30022:toaddr=10.1.1.100 --permanent
firewall-cmd --reload
#+END_SRC
The /IP/ address =10.1.1.100= is the =kube-main= virtual machine.
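The resulting forwards can be listed per zone to double-check them:
#+BEGIN_SRC bash
firewall-cmd --zone=trusted --list-forward-ports
firewall-cmd --zone=FedoraServer --list-forward-ports
#+END_SRC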
On my router I configured the following forwards:
- port =80= -> =homelab:8080=
- port =443= -> =homelab:443=
- port =22= -> =homelab:30022=
* What's next
A future article will detail the installation of the /storage class/ that provides data persistence for the /K8s/ /pods/.
* Sources
- [[https://linuxcontainers.org/incus/docs/main/cloud-init/][/cloud-init/ documentation in /Incus/]]
- [[https://cloudinit.readthedocs.io/en/latest/][/cloud-init/ documentation]]
- [[https://search.opentofu.org/provider/lxc/incus/latest][/Incus/ with /Open Tofu/]]
- [[https://docs.fedoraproject.org/en-US/quick-docs/using-kubernetes-kubeadm/][Installing /K8s/ on /Fedora/]]
- [[https://doc.traefik.io/traefik/setup/kubernetes/][/Traefik/ documentation for /K8s/]]

posts/machine.drawio.svg (new file, 4 lines, 9.5 KiB; diff suppressed because one or more lines are too long)

(additional image file, 16 KiB; diff suppressed because one or more lines are too long)