Home Infrastructure Overview

Welcome to a comprehensive overview of my home infrastructure. In this post, I’ll cover the physical hardware, the software running on each server, the virtual machines, and the Kubernetes clusters.

Physical Things

Location and Hosting

  • Basement: All physical servers are located in the basement for optimal cooling and accessibility.
  • Hetzner Cloud: Remote services hosted on Hetzner’s cloud infrastructure.

Physical Servers

| Server Name | Brand/Type | CPU | RAM | Disk |
| --- | --- | --- | --- | --- |
| R620 | Dell PowerEdge R620 | 2x Xeon E5-2695 v2 (24C/48T total) | 256 GB DDR3 ECC | N/A |
| R730xd | Dell PowerEdge R730xd | 2x Xeon E5-2640 v4 (20C/40T total) | 256 GB DDR4 ECC | 2x 480GB Samsung Enterprise SSD (boot), 6x 960GB Kingston DC500 Mixed use SSD, 4x Seagate Exos X16 16TB HDD, 4x 1TB Sabrent gen 3 NVMe SSD |
| R320 | Dell PowerEdge R320 | 1x Xeon something (10C/20T total) | 96GB DDR3 ECC | 1x 120GB Kingston DC500 Mixed use SSD, 4x 4TB HDD |
| MD3200 | Dell PowerVault MD3200 | N/A | N/A | 12x 3TB HGST/Hitachi/etc HDD |
| d1 | Dell Optiplex 3050 | 1x i5-6500T (4C/4T total) | 8GB DDR3 | 256 GB NVMe SSD |
| d2 | Dell Optiplex 3050 | 1x i3-6100T (2C/4T total) | 8GB DDR3 | 256 GB NVMe SSD |
| raspi-1 | Raspberry Pi 5 | Broadcom BCM2712 (4C/4T total) | 8GB LPDDR3 | 256GB NVMe SSD |
| kpi-1 | Raspberry Pi 5 | Broadcom BCM2712 (4C/4T total) | 8GB LPDDR3 | 256GB NVMe SSD |
| kpi-2 | Raspberry Pi 5 | Broadcom BCM2712 (4C/4T total) | 8GB LPDDR3 | 256GB NVMe SSD |
| kpi-3 | Raspberry Pi 5 | Broadcom BCM2712 (4C/4T total) | 8GB LPDDR3 | 256GB NVMe SSD |

Network Stack

  • Primary Router / Firewall / DHCP: Mikrotik RB4011iGS+
  • Switches: Dell PowerConnect 5548, Mikrotik CRS309-1G-8S+, Mikrotik CRS112-8P-4S
  • Wi-Fi Access Points: 2x TP-Link Omada EAP660 HD (managed by a TP-Link OC200)
  • DNS/DHCP: 2x Technitium DNS + 1x Pi-Hole as upstream
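
Since everything on the network depends on that resolver chain (clients querying Technitium, with the Pi-Hole as upstream), I like being able to sanity-check both Technitium instances directly. Here’s a minimal sketch using the dnspython package; the IP addresses are placeholders rather than my real ones.

```python
# Quick sanity check that both Technitium resolvers answer queries.
# Requires the third-party "dnspython" package; the IP addresses below
# are placeholders, not my real ones.
import dns.resolver

RESOLVERS = {
    "ns1": "192.168.10.53",  # primary Technitium (placeholder)
    "ns2": "192.168.10.54",  # secondary Technitium (placeholder)
}

def check(name: str, server: str, qname: str = "example.com") -> None:
    resolver = dns.resolver.Resolver(configure=False)  # ignore the host's resolv.conf
    resolver.nameservers = [server]
    resolver.lifetime = 2.0  # fail fast if a resolver is down
    try:
        answer = resolver.resolve(qname, "A")
        print(f"{name} ({server}): {[rr.address for rr in answer]}")
    except Exception as exc:
        print(f"{name} ({server}): FAILED - {exc}")

for name, server in RESOLVERS.items():
    check(name, server)
```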

Power

  • UPS: Dell-branded APC UPS, 1920W, ~30-60 min runtime under current load (100+ minutes if the R730xd shuts down on power-loss detection)
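
Those runtime numbers are observed rather than calculated, but a rough back-of-the-envelope estimate is easy to sketch. The battery capacity and load figures below are illustrative assumptions, not measured values:

```python
# Rough UPS runtime estimate: usable battery energy divided by the load.
# Every number below is an illustrative assumption, not a measurement.
BATTERY_WH = 600           # assumed usable battery capacity (watt-hours)
INVERTER_EFFICIENCY = 0.9  # assumed inverter efficiency
LOAD_FULL_W = 700          # assumed load with everything (incl. the R730xd) running
LOAD_REDUCED_W = 300       # assumed load after the R730xd shuts itself down

def runtime_minutes(load_w: float) -> float:
    """Minutes of runtime at a constant load."""
    usable_wh = BATTERY_WH * INVERTER_EFFICIENCY
    return usable_wh / load_w * 60

print(f"full load:    ~{runtime_minutes(LOAD_FULL_W):.0f} min")     # ~46 min
print(f"reduced load: ~{runtime_minutes(LOAD_REDUCED_W):.0f} min")  # ~108 min
```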

Software Layer

What’s Running on Which Physical Server

  • R620: Proxmox VE, offline for now
  • R730xd: Proxmox VE, running the VMs for “Basement vCPU Cluster” (bcc)
  • R320: Proxmox Backup Server, offline for now
  • MD3200: External SAS storage array, offline for now, only turned on for occasional backups
  • d1: Proxmox VE, clustered together with d2, running the primary DNS server
  • d2: Proxmox VE, clustered together with d1, running the secondary DNS server and an XP VM
  • raspi-1: Raspbian, running Home Assistant, Grafana, InfluxDB v2, Prometheus
  • kpi-1..3: Raspbian, used to run a k3s cluster, offline for now (until Talos Linux fully supports the Pi 5)
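
Several of the hosts above run Proxmox VE, so the “what runs where” picture can also be pulled straight from the Proxmox API instead of being maintained by hand. A minimal sketch using the proxmoxer package; the hostname and API token are placeholders:

```python
# List every VM on every node of a Proxmox VE cluster via its HTTP API.
# Requires the third-party "proxmoxer" and "requests" packages; the host
# and the API token below are placeholders for my real credentials.
from proxmoxer import ProxmoxAPI

prox = ProxmoxAPI(
    "pve.home.example",     # placeholder cluster hostname
    user="root@pam",
    token_name="readonly",  # placeholder API token name
    token_value="00000000-0000-0000-0000-000000000000",  # placeholder secret
    verify_ssl=False,
)

for node in prox.nodes.get():
    for vm in prox.nodes(node["node"]).qemu.get():
        print(f'{node["node"]}: {vm["vmid"]} {vm["name"]} ({vm["status"]})')
```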

Virtual Machines (VMs)

R730xd

| VM Name | Purpose | OS | Software Overview | Resources |
| --- | --- | --- | --- | --- |
| g-runner | GitLab Runner (for infra only) | openSUSE Leap 15.6 | Runs gitlab-runner with the Docker executor to handle infrastructure pipelines (e.g. OpenTofu) | 4 vCPU, 4GB RAM, 32GB disk |
| harbor | Harbor OCI Registry | openSUSE Leap 15.6 | Runs Harbor for proxying/caching container image pulls for the local k8s cluster | 4 vCPU, 8GB RAM, 120GB disk |
| bcc-ctrl-1 | K8s control plane | Talos Linux | K8s control plane | 4 vCPU, 8GB RAM, 64GB disk |
| bcc-ctrl-2 | K8s control plane | Talos Linux | K8s control plane | 4 vCPU, 8GB RAM, 64GB disk |
| bcc-ctrl-3 | K8s control plane | Talos Linux | K8s control plane | 4 vCPU, 8GB RAM, 64GB disk |
| bcc-gpu-1 | K8s worker w/ GPU | Talos Linux | K8s worker node with a GPU | 8 vCPU, 32GB RAM, 128GB disk, NVIDIA RTX A2000 12GB |
| bcc-gpu-2 | K8s worker w/ GPU | Talos Linux | K8s worker node with a GPU | 8 vCPU, 32GB RAM, 128GB disk, NVIDIA Quadro P1000 |
| bcc-worker-1 | K8s worker | Talos Linux | K8s worker node | 8 vCPU, 32GB RAM, 128GB disk |
| bcc-worker-2 | K8s worker | Talos Linux | K8s worker node | 8 vCPU, 32GB RAM, 128GB disk |
| bcc-worker-3 | K8s worker | Talos Linux | K8s worker node | 8 vCPU, 32GB RAM, 128GB disk |
| bcc-worker-4 | K8s worker | Talos Linux | K8s worker node | 8 vCPU, 32GB RAM, 128GB disk |
| bcc-longhorn-1 | K8s storage | Talos Linux | K8s storage node for Longhorn | 4 vCPU, 8GB RAM, 128GB disk, 1TB NVMe passthrough |
| bcc-longhorn-2 | K8s storage | Talos Linux | K8s storage node for Longhorn | 4 vCPU, 8GB RAM, 128GB disk, 1TB NVMe passthrough |
| bcc-longhorn-3 | K8s storage | Talos Linux | K8s storage node for Longhorn | 4 vCPU, 8GB RAM, 128GB disk, 1TB NVMe passthrough |
| bcc-longhorn-4 | K8s storage | Talos Linux | K8s storage node for Longhorn | 4 vCPU, 8GB RAM, 128GB disk, 1TB NVMe passthrough |

d1 & d2

| VM Name | Purpose | OS | Software Overview | Resources |
| --- | --- | --- | --- | --- |
| ns1 | Primary DNS | openSUSE Leap 15.6 | Runs Technitium DNS in Docker | 2 vCPU, 2GB RAM, 10GB disk |
| ns2 | Secondary DNS | openSUSE Leap 15.6 | Runs Technitium DNS in Docker | 2 vCPU, 2GB RAM, 10GB disk |
| xp | Old software | Windows XP SP3 | VM converted from an old, failing Pentium 3 PC; runs an old version of Vivid WorkshopData | |

Hetzner

| VM Name | Purpose | OS | Software Overview | Resources |
| --- | --- | --- | --- | --- |
| omni | Sidero Labs Omni | openSUSE Leap 15.6 | Runs Sidero Labs Omni in Docker | CAX11, 2 vCPU, 4GB RAM, 40GB disk |
| gitlab | GitLab | openSUSE Leap 15.6 | Runs the Omnibus version of GitLab CE | CPX31, 4 vCPU, 8GB RAM, 160GB disk |
| hcc-1 | K8s node | Talos Linux | K8s node (control plane, worker, and storage all in one) | CX32, 4 vCPU, 8GB RAM, 80GB disk |
| hcc-2 | K8s node | Talos Linux | K8s node (control plane, worker, and storage all in one) | CX32, 4 vCPU, 8GB RAM, 80GB disk |
| hcc-3 | K8s node | Talos Linux | K8s node (control plane, worker, and storage all in one) | CX32, 4 vCPU, 8GB RAM, 80GB disk |
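
Since each hcc node plays every role, I occasionally want to confirm that the control-plane nodes really are schedulable (no lingering NoSchedule taints). A quick sketch with the official Kubernetes Python client; the kubeconfig context name is a placeholder:

```python
# Print the role labels and taints of every node in the hcc cluster, to
# confirm the control-plane nodes also accept regular workloads.
# Requires the "kubernetes" package; the context name is a placeholder.
from kubernetes import client, config

config.load_kube_config(context="admin@hcc")  # placeholder kubeconfig context

for node in client.CoreV1Api().list_node().items:
    roles = [
        key.split("/", 1)[1]
        for key in (node.metadata.labels or {})
        if key.startswith("node-role.kubernetes.io/")
    ]
    taints = [t.key for t in (node.spec.taints or [])]
    print(f"{node.metadata.name}: roles={roles or ['worker']} taints={taints or 'none'}")
```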

Other Hetzner resources

  • Load Balancer for the hcc cluster
  • Object Storage: buckets for Longhorn backups, Loki logs, and Omni/k8s etcd backups
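
The Object Storage buckets are S3-compatible, so verifying that backups actually land there only takes a generic S3 client. A minimal sketch with boto3; the endpoint, bucket name, and credentials are placeholders:

```python
# List the newest objects in the Longhorn backup bucket to verify that
# backups are actually arriving in Hetzner Object Storage (S3-compatible).
# Requires the "boto3" package; endpoint, bucket and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://fsn1.your-objectstorage.com",  # placeholder endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

resp = s3.list_objects_v2(Bucket="longhorn-backups", MaxKeys=1000)  # placeholder bucket
newest = sorted(resp.get("Contents", []), key=lambda o: o["LastModified"], reverse=True)
for obj in newest[:10]:
    print(obj["LastModified"], obj["Size"], obj["Key"])
```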

Kubernetes Clusters

Home Cluster (BCC)
Hetzner Cluster (HCC)

Conclusion

This overview gives a snapshot of my home infrastructure, covering both physical and software aspects. Future posts will delve deeper into specific components like Kubernetes configurations and VM setups.

Feel free to reach out if you have any questions or want more details on any part of the setup!