And I’m not even talking about other CMS platforms. Those who know me better are aware of my liking for self-hosted services like Nextcloud (ever since ownCloud became unusable for the average user). But lately, I discovered a whole world beyond WordPress.
How it started
It all started with the purchase of an EcoFlow power station. We already had our office infrastructure, like modems and the NAS, backed by a UPS. Unfortunately, UPS batteries died too quickly in our coastal setup. Whether it’s the humidity, the salty air, or the heat, I don’t know, but the average UPS battery lasted about 6 to 12 months. And at best, it would cover a quarter of an hour of outage. If it triggered the shutdown signal to the NAS via USB, at least the modems could survive for almost half an hour. Still far from ideal, given that KPLC (Kenya Power) is capable of knocking you offline for a whole day. The EcoFlow River 2 powers the modems for up to 10 hours, so a normal office day is easily covered. But it doesn’t offer the shutdown signaling a UPS does.
Unless you connect it to Home Assistant. At this point, my learning curve steepened. I had already watched several videos on YouTube, especially from the “Hardware Heaven” channel, where a guy tinkers with old hardware to create home servers. I used to enjoy doing the same myself. Decades ago, I turned an old Igel thin client into a PBX, getting an appliance that usually cost around 1,200 € for less than a hundred euros. Same software, same functionality, just cheaper, on sufficient hardware. I had also picked up a really cheap Dell thin client that I always wanted to use for Pi-hole. That attempt failed and the project was abandoned. Now, with the additional need for Home Assistant, why not go all in and virtualize both on my old Mac mini 2012, which had been idle since being replaced by a MacBook Air?
Make use of what you have
The mini’s specs were encouraging too: a quad-core Intel i7 (eight threads), 16 GB of RAM, and a Fusion Drive that had to be broken up into its components, a 512 GB SSD and a 2 TB HDD. That would make a reasonable base for a home server. I never considered using it for Pi-hole alone, as that seemed overkill. It’s not the most energy-efficient device, drawing about 30 W on average, but it would get the job done. Proxmox was the only virtualization environment I had heard of so far. Nonetheless, with some help from AI to structure my ideas and guide me through the correct steps, I managed to set up this home server.
The last step on macOS was to break up the Fusion Drive. All my data had already been migrated to the MacBook and backed up to the QNAP NAS, so there was nothing to lose. Once the drives were separated, I created a bootable USB stick with Debian 12 on my MacBook, booted the mini from it, and ran a basic, command-line-only install of Debian 12. With that base in place, Proxmox could be installed on top, replacing the Debian kernel with its own. Running Debian first instead of installing Proxmox directly on bare metal was necessary because of Apple’s specific hardware and boot mechanisms.
Learn what you don’t know
Proxmox now hosts various virtual machines, all running on shared hardware, but each receiving dedicated resources. It’s like stacking different boxes with various specs in your rack — only virtually, and without spending lots of money on soon-to-be-outdated hardware. Some preliminary tasks were required before setting up virtual machines in Proxmox. To best use the large HDD, I dedicated it to backups, container templates, and ISO images for VMs. A small mail-forwarder ensured that status emails could be sent out.
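That storage dedication is a one-time setup you would normally click together in the web GUI. As a sketch of what it amounts to, here is the equivalent call through the Proxmox API using the third-party proxmoxer Python library; the host, credentials, storage ID, and mount path are placeholders, and the HDD is assumed to be mounted already.

```python
#!/usr/bin/env python3
"""Sketch: register the HDD as Proxmox storage for backups, templates and ISOs.

Assumes the disk is already mounted at /mnt/hdd; host, credentials and
names are placeholders (pip install proxmoxer requests)."""
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="secret", verify_ssl=False)

# Directory storage on the 2 TB disk, restricted to the "bulky" content types
proxmox.storage.post(
    storage="hdd2tb",               # placeholder storage ID
    type="dir",
    path="/mnt/hdd",
    content="backup,iso,vztmpl",    # backups, ISO images, container templates
)
```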
I quickly learned how to upgrade my machines. Watching their metrics, I saw that one needed more RAM while another ran out of disk space. Just stop them, adjust the settings, restart, done. Sure, I’m simplifying a bit, but only a bit. It’s still easier than opening up boxes and swapping physical components.
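The same goes for those upgrades. A minimal sketch with the same proxmoxer library, assuming a hypothetical node “pve” and VM 101: stop the VM, raise its memory, start it again.

```python
#!/usr/bin/env python3
"""Sketch: grow a VM's RAM via the Proxmox API (pip install proxmoxer requests).

Host, node name, VM ID and credentials below are placeholders."""
import time
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="secret", verify_ssl=False)
node, vmid = "pve", 101  # hypothetical node name and VM ID

# Stop the VM and wait until it is actually down
proxmox.nodes(node).qemu(vmid).status.stop.post()
while proxmox.nodes(node).qemu(vmid).status.current.get()["status"] != "stopped":
    time.sleep(2)

# Adjust the setting (memory is given in MiB), then start the VM again
proxmox.nodes(node).qemu(vmid).config.put(memory=4096)
proxmox.nodes(node).qemu(vmid).status.start.post()
```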
Here’s what I have
After one week, I now have five virtual machines up and running:
Pi-hole
A container (the “lighter” form of Proxmox virtualization) running Pi-hole. What an improvement for surfing! Pi-hole filters ads at the network level, reducing bandwidth usage and significantly cleaning up websites. Just load any random news site to see how cluttered it is; not so with Pi-hole. It acts as the DNS server on the internal network. Its IP is handed out via DHCP by my router as the primary DNS, while Pi-hole in turn uses the router (which gets DNS from the provider) and OpenDNS as fallback. Unfortunately, my router allows only one DNS entry in its DHCP settings, which means that if Pi-hole goes down, DNS resolution fails.
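To see the filtering in action, you can query the Pi-hole directly. A small sketch using the dnspython library, assuming Pi-hole’s default blocking mode where blocked names resolve to 0.0.0.0; the IP and test domains are placeholders.

```python
#!/usr/bin/env python3
"""Check Pi-hole blocking from any client (pip install dnspython).

Assumes the default NULL blocking mode, where blocked domains resolve
to 0.0.0.0. The IP below is a placeholder."""
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.1.53"]  # placeholder Pi-hole IP

for name in ("doubleclick.net", "example.com"):
    try:
        answer = resolver.resolve(name, "A")
        ips = ", ".join(rr.address for rr in answer)
    except dns.resolver.NXDOMAIN:
        ips = "NXDOMAIN"  # some blocking modes answer this instead
    print(f"{name} -> {ips}")  # blocked domains come back as 0.0.0.0
```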
DNS-Relay
So: container two runs a very simple DNS relay for failover. As long as Pi-hole is online, it’s the main DNS. If it fails, the router takes over. The DHCP DNS entry was updated to point to the relay instead of Pi-hole. This setup works. A second task for this container might be internal name mapping, as remembering all those IPs and ports is a pain.
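The container runs ready-made relay software, but the failover idea itself fits into a few lines. A toy sketch with placeholder IPs: forward each raw DNS packet to the Pi-hole first, and only fall back to the router when the Pi-hole does not answer in time.

```python
#!/usr/bin/env python3
"""Toy UDP DNS relay with failover (placeholder IPs, not my real network).

Needs root (or CAP_NET_BIND_SERVICE) to bind port 53."""
import socket

LISTEN = ("0.0.0.0", 53)
PRIMARY = ("192.168.1.53", 53)   # Pi-hole (placeholder)
FALLBACK = ("192.168.1.1", 53)   # router (placeholder)
TIMEOUT = 2.0                    # seconds before falling back

def forward(query, upstream):
    """Send the raw DNS packet to one upstream; None on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(TIMEOUT)
        try:
            s.sendto(query, upstream)
            reply, _ = s.recvfrom(4096)
            return reply
        except socket.timeout:
            return None

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(LISTEN)
    while True:
        query, client = srv.recvfrom(4096)
        reply = forward(query, PRIMARY) or forward(query, FALLBACK)
        if reply:
            srv.sendto(reply, client)

if __name__ == "__main__":
    main()
```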
Update 24.6.2025: The idea of a simple nameserver for the internal network didn’t work out with the DNS relay, but it did with Pi-hole. It even offers the setting right in the GUI: once your domain name is configured and hostname expansion is enabled in the DNS settings, you simply add the various records under Local DNS. The next challenge is getting the same to work over the VPN.
Home Assistant
The one that started it all: Home Assistant. It’s a dashboard to monitor (almost) all smart devices and read various metrics. Going further, it can build automations. If the EcoFlow’s incoming AC drops to zero (KPLC failed), Home Assistant can shut down the QNAP NAS to preserve battery life for the modems and send me a notification about it. I realized I had more smart devices than I thought. The router shows up, the door lock reports battery and status, weather info can be included in rules, and even my proximity to the office can be tracked via the iPhone’s location. So far, I’m just scratching the surface: maybe auto-unlock the office when I approach, with no need for a code or RFID card. Eventually, I might link the smart lock to WooCommerce to automate the order process for the coworking space.
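The automation itself is clicked together in Home Assistant, but the logic behind it is simple enough to sketch against its REST API. The URL, token, sensor entity, and script name below are made up for illustration; any real setup would use its own entity IDs.

```python
#!/usr/bin/env python3
"""Sketch: poll Home Assistant for the EcoFlow's AC input and react.

URL, token, entity and script names are placeholders (pip install requests);
the real automation lives inside Home Assistant itself."""
import time
import requests

HA_URL = "http://homeassistant.local:8123"   # placeholder address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created in the HA user profile
SENSOR = "sensor.ecoflow_ac_in_power"        # hypothetical entity ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def ac_input_watts():
    """Read the sensor state; None if HA reports it as unavailable."""
    r = requests.get(f"{HA_URL}/api/states/{SENSOR}", headers=HEADERS, timeout=5)
    r.raise_for_status()
    state = r.json()["state"]
    return None if state in ("unknown", "unavailable") else float(state)

def shutdown_nas():
    """Trigger a hypothetical HA script that powers the QNAP down."""
    requests.post(f"{HA_URL}/api/services/script/turn_on", headers=HEADERS,
                  json={"entity_id": "script.shutdown_qnap"}, timeout=5)

if __name__ == "__main__":
    while True:
        if ac_input_watts() == 0:   # grid is out, EcoFlow runs on battery
            shutdown_nas()
            break
        time.sleep(60)
```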
Plex Server
The QNAP has several roles. It backs up our Macs and runs a Plex server. Sadly, the TS-669L is outdated. It no longer receives firmware updates, and Plex updates won’t install on it either. So, how to move Plex to a VM while still using the QNAP’s storage and RAID 5 redundancy? A hybrid setup: the data remains on the QNAP, attached via NFS to a VM running Plex. Logic in the VM unmounts the volume and shuts down Plex when the QNAP powers off, which happens nightly. I don’t watch movies after midnight, and the NAS is back up for backups at 9 a.m. when we return to the office, so why should it consume power in between? The same logic covers power outages: when the QNAP shuts down to save EcoFlow battery, the Plex VM loses its media supply and unmounts the share accordingly.
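Roughly, the logic on the Plex VM looks like this; a sketch with placeholder IP, export path, and mount point, assuming Plex runs under the usual plexmediaserver systemd unit and the script is triggered regularly as root (cron or a systemd timer).

```python
#!/usr/bin/env python3
"""Watchdog sketch for the Plex VM (IPs and paths are placeholders).

If the QNAP stops answering pings, stop Plex and unmount the NFS share;
when it comes back, remount and start Plex again."""
import subprocess

QNAP = "192.168.1.20"          # placeholder IP of the NAS
MOUNTPOINT = "/mnt/media"      # NFS mount Plex reads from
NFS_EXPORT = f"{QNAP}:/Multimedia"

def qnap_is_up():
    return subprocess.run(["ping", "-c", "1", "-W", "2", QNAP],
                          capture_output=True).returncode == 0

def is_mounted():
    return subprocess.run(["mountpoint", "-q", MOUNTPOINT],
                          capture_output=True).returncode == 0

if __name__ == "__main__":
    if qnap_is_up() and not is_mounted():
        # NAS is back: remount the media and bring Plex up
        subprocess.run(["mount", "-t", "nfs", NFS_EXPORT, MOUNTPOINT], check=True)
        subprocess.run(["systemctl", "start", "plexmediaserver"], check=True)
    elif not qnap_is_up() and is_mounted():
        # NAS is gone: stop Plex first, then lazy-unmount the stale share
        subprocess.run(["systemctl", "stop", "plexmediaserver"], check=True)
        subprocess.run(["umount", "-l", MOUNTPOINT], check=True)
```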
Tailscale VPN
As external access is currently limited, I need a VPN. Tailscale, a VPN service with a free tier, should solve that. So far, it only lets me access the VM it runs on, not the other services. Still a work in progress.
Update 24.6.2025: Just a day later, everything is up and running. It’s really simple and convinces me further that Tailscale was the right choice: just make the VM a subnet router, and all other machines become accessible by their internal IP addresses. That means I can monitor and maintain my services even from home, access my Plex movie library, and get the same smooth surfing experience with filtered ads as in the office. Neat!
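For reference, the subnet-router part boils down to two commands on the VM, wrapped here into a tiny Python sketch with a placeholder subnet. The advertised route still has to be approved in the Tailscale admin console afterwards.

```python
#!/usr/bin/env python3
"""Sketch: turn the Tailscale VM into a subnet router (run as root).

The subnet is a placeholder; the route must be approved in the
Tailscale admin console before clients can use it."""
import subprocess

LAN = "192.168.1.0/24"  # placeholder for the real office subnet

# Let the VM forward packets between the tailnet and the LAN
subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
# Re-run tailscale up and advertise the LAN as a route
subprocess.run(["tailscale", "up", f"--advertise-routes={LAN}"], check=True)
```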
Reverse Proxy
The next work in progress is a reverse proxy that forwards the domain names configured in Pi-hole to the respective ports and/or directories. Once completed (some mappings already work), you just type plex.tld instead of x.x.x.x:32400/web/index to connect straight to your Plex server (note: .tld and x.x.x.x stand in for my real settings).
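The real thing will be nginx, Caddy, or similar rather than hand-rolled code, but the principle fits into a toy sketch: the Host header of the incoming request decides which internal backend answers. Hostnames, IPs, and ports are placeholders.

```python
#!/usr/bin/env python3
"""Toy GET-only reverse proxy to illustrate the idea (not for production).

Hostnames, IPs and ports are placeholders; binding port 80 needs root."""
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Names resolved by Pi-hole's local DNS -> internal service behind them
BACKENDS = {
    "plex.tld": "http://192.168.1.30:32400",
    "pihole.tld": "http://192.168.1.53:80",
}

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        backend = BACKENDS.get(host)
        if backend is None:
            self.send_error(502, "Unknown host")
            return
        try:
            with urlopen(Request(backend + self.path)) as upstream:
                body = upstream.read()
                status = upstream.status
                ctype = upstream.getheader("Content-Type", "text/html")
        except HTTPError as err:  # relay upstream error codes as-is
            body, status, ctype = err.read(), err.code, "text/html"
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), Proxy).serve_forever()
```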
Conclusion
The system is healthy and mostly sits just above idle. CPU peaks at 5% and averages under 2%. About 7 GB of the 16 GB of RAM are in use, and both the SSD and the HDD are filled to less than 5% of their capacity. Plenty of resources left to run more services and stretch my learning curve further.