I have something similar, in the same case. I have beefier specs because I use it as a daily workstation in addition to running all my stuff.
* nginx with letsencrypt wildcard so I have lots of subdomains
* No tailscale, just pure wireguard between a few family houses and for remote access
* Jellyfin for movies and TV, serving to my Samsung TV via the Tizen jellyfin app
* Mopidy holding my music collection, serving to my home stereo and numerous other speakers around the house via snapcast (raspberry pi 3 as the client)
* Just using ubuntu as the os with ZFS mirroring for NAS, serving over samba and NFS
* Home assistant for home automation, with Zigbee and Z-wave dongles
* Frigate as my NVR, recording from my security cams, doing local object detection, and sending out alerts via Home Assistant
* Forgejo for my personal repository host
* tar1090 hooked to an SDR for local airplane tracking (antenna in the attic)
This all pairs nicely with my two OpenWrt routers, one acting as the main router and the other as a dumb AP, connected via a hardwired trunk line carrying a bunch of VLANs.
Other things in the house include an iotawatt whole-house energy monitor, a bunch of ESPs running holiday light strips, indoor and outdoor homebrew weather stations with laser particulate sensors and CO2 monitors (alongside the usual sensors), a water-main cutoff (zwave), smart bulbs, door sensors, motion sensors, sirens/doorbells, and a thing that listens for my fire alarm and sends alerts. Oh and I just flashed the pura scent diffuser my wife bought and lobotomized it so it can't talk to the cloud anymore, but I can still automate it.
I love it and have tons of fun fiddling with things.
I'll admit I've stuck with the original FreeBSD-based TrueNAS, and I'm still kind of bummed they swapped it out. So it's interesting to see a direct example of someone for whom the new Linux-based version is clearly superior. I've long since moved toward the "self-hosted" end of the spectrum rather than "homelab," and in turn have ended up splitting my roles back out into separate boxes instead of all-in-one. My NAS is just a NAS, my virtualization is done via Proxmox on separate hardware with storage backed by the NAS over iSCSI, and I've got a third box running OPNsense to handle routing. When I first compared, the new TrueNAS was slower (presumably it's at parity or better now?) and missing certain things from the old one, but it already made it much easier to run Synology- or Docker-style "apps" all-in-one. That didn't interest me because I didn't want my NAS to have any duty but being a NAS, but I can see how it'd be far friendlier to someone just getting going, or to many small-business setups. A sort of truly open and supported "open Synology" (as opposed to the xpenology project).
Clearly it's worked for them here, and I'm happy to see it. Maybe the bug will truly bite them; there's so much incredibly capable hardware available for a song now, and it's great to see anyone new experimenting with bringing stuff back out of centralized providers in an appropriately judicious way.
Edit: I'll add as well that this is one of those happy things that builds on itself. As you develop infrastructure, the marginal cost of doing new things drops. If you already have a cheap managed switch and your own router set up, whatever it is, then when you do something like the author describes you can give all your services IPs and DNS, reverse proxy them, put different things on their own VLANs and start doing network isolation that way, etc., for "free." The bar for giving something new a shot drops. So I don't think there's any wrong way to get into it; it's all helpful. And if you don't have prior ops or old sysadmin experience, the various snags you solve along the way build the knowledge and skills to solve new problems as they arise.
One thing to consider before doing the same: a computer built specifically for homelab duty has much lower power consumption.
The setup mentioned in the article averages 600 kWh/year, as opposed to a pretty solid HP EliteDesk (my own homelab), which uses 100 kWh/year. Sure, you don't get a GPU, but for what it's used for you might as well use a laptop for that.
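To put those two numbers in perspective, a quick sketch converting annual kWh into average draw and yearly cost (the $0.30/kWh electricity price is an assumption; plug in your local rate):

```python
# Compare the article's setup (~600 kWh/yr) with a low-power
# mini PC (~100 kWh/yr). Price per kWh is an assumed figure.

HOURS_PER_YEAR = 365 * 24  # 8760

def avg_watts(kwh_per_year: float) -> float:
    """Convert annual energy use into average continuous draw in watts."""
    return kwh_per_year * 1000 / HOURS_PER_YEAR

def annual_cost(kwh_per_year: float, price_per_kwh: float = 0.30) -> float:
    """Yearly electricity cost at the given price per kWh."""
    return kwh_per_year * price_per_kwh

print(f"article setup: ~{avg_watts(600):.0f} W avg, ${annual_cost(600):.0f}/yr")
print(f"mini PC:       ~{avg_watts(100):.0f} W avg, ${annual_cost(100):.0f}/yr")
```

So the difference is roughly 68 W vs 11 W of continuous draw, which adds up over years of 24/7 operation.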
The author uses Restic + Backblaze B2 storage. I was recently setting up backups for my home base as well, and went with Restic + BorgBase [0]. Not affiliated, just wanted to share that I think they have a nice service with a straightforward pricing model. They are the company behind the excellent PikaPods [1], which may be interesting to the homelab crowd.
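For anyone curious what that looks like in practice, a typical Restic workflow is only a few commands; the repository URL and paths below are placeholders, and BorgBase-style SFTP is just one of the backends Restic supports (B2 works similarly):

```shell
# one-time: create the encrypted repository (prompts for a passphrase)
restic -r sftp:user@user.repo.borgbase.com:repo init

# back up a directory; restic deduplicates and encrypts client-side
restic -r sftp:user@user.repo.borgbase.com:repo backup /srv/data

# periodically verify the repo and prune old snapshots
restic -r sftp:user@user.repo.borgbase.com:repo check
restic -r sftp:user@user.repo.borgbase.com:repo forget \
  --keep-daily 7 --keep-weekly 4 --prune
```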
With AI/LLM assistants the barrier to setting up and running a homelab is so much lower - in the past 6 months I've had Claude help me completely reconfigure the (now) 5 RPis that were sitting around severely underutilized, I have 3 running Docker, some split between home stuff, production testing and a separate management layer (along with backups that were just in the too hard basket previously). Not to forget all the documentation that goes with it. Fun times!
*Most* homelab setups don't have much load, so it's mostly a matter of available RAM and then power consumption.
Many people with a setup like this probably need only a low-powered 4-core machine with idle consumption around 5-10 W.
Clean setup. It's interesting how much attention people give to cable management and layout in tech setups.
In architectural lighting projects we often think in a similar way about fixture placement, wiring access and maintenance because poor planning becomes very visible once a space is finished.
I never understood using a NAS OS and hosting non-NAS services on it; it feels upside down. I would rather have a general-purpose server OS running NAS services. The same applies to Proxmox.
I've started building a kubernetes cluster (Talos Linux) across town with wireguard between various houses. ZFS boxes for persistent volumes (democratic-csi) in each "zone" with cross-site snapshot replication and Gateway (Traefik) running at each site behind the ISP. CrunchyPGO allows separate StorageClasses to easily split the leader/followers up.
A lot of people are talking about their backup storage solutions in here, but it's mostly about corporate cloud providers. I'm curious if anyone is going more rogue with their solution and using off-prem storage at a friend's house.
Which is to say: hardware is cheap, software is open, and privacy is very hard to come by. Thus I've been thinking I'd like to skip cloud providers and just keep a duplicate system at a friend's house, and then of course return the favor. This adds a lot of privacy and quite a bit of redundancy. With the rise of WireGuard (and Tailscale, I suppose), keeping things connected and private has never been easier.
I know that leaning on social relationships is never a hot trend in tech circles but is anyone else considering doing this? Anyone done it? I've never seen it talked about around here.
Neat!
> Right now, accessing my apps requires typing in the IP address of my machine (or Tailscale address) together with the app’s port number.
You might try running Nginx as an application and configuring it as a reverse proxy to the other apps. In your router config you can set up foo.home and bar.home to point to the Nginx IP address, and then the Nginx config tells it to proxy foo.home to IP:8080 and bar.home to IP:9090. That's not a thorough explanation, but I'm sure you can plug this into an LLM and it'll spell it out for you.
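A minimal sketch of that reverse-proxy config, assuming the apps live at 192.168.1.50 (the hostnames, IP, and ports are placeholders matching the example above):

```nginx
# foo.home -> the app on port 8080
server {
    listen 80;
    server_name foo.home;

    location / {
        proxy_pass http://192.168.1.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# bar.home -> the app on port 9090
server {
    listen 80;
    server_name bar.home;

    location / {
        proxy_pass http://192.168.1.50:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

One `server` block per hostname, and the port numbers disappear from your URLs.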
I did the exact same thing, except with a virtualized OPNsense router and bare-metal Kubernetes on one host. The Kubernetes broke and I downgraded from 32GB of RAM to 16GB. I actually may revisit the setup, since OPNsense with FRR and Cilium BGP to peer your cluster and home LAN is actually a really seamless way to self-host things in Kubernetes. Maybe there are other ways, maybe there is something simpler, but a homelab is about fun more than pure function.
You can use https://nginxproxymanager.com/ to manage the various services on your homelab. It works flawlessly with Tailscale: I can connect to my tailnet and simply type http://service.mylocaldomain to open the service. You will also need AdGuard, with an AdGuard DNS rewrite so that *.mylocaldomain forwards to the NPM instance, and the NPM instance holds all the information about which IP:PORT serves which service. Tailscale DNS should also be configured to use AdGuard; you can turn off the ad-blocking features if they interfere with any of your stuff.
I would also suggest running two AdGuard instances (one as a backup) and two NPM instances.
TrueNAS works perfectly as a VM, e.g. on Proxmox, with a SATA controller passed through from the motherboard. It may not always work with bad IOMMU groups, but I have this running on an old Xeon Precision Tower 3420 and a not-so-old Asus Z690 motherboard. NVMe passthrough should be straightforward as well. No need for LSI HBAs or cheap PCI-to-SATA cards if the number of existing physical slots is enough, and as far as TrueNAS is concerned, it's bare-metal disk access. Even the latest TrueNAS is not in the same league as Proxmox for managing VMs/containers, not even close.
Use Cloudflare and Cloudflare Tunnels to expose your apps over the internet via custom domains. It's free of cost. Tailscale's free tier only allows 3 users, I believe. If you have more devices that need to connect, then Cloudflare is the best option.
This is a lot like my setup, hardware-wise. I just repurposed a PC I was barely using for Windows anyway. I would like to move that to a Framework Desktop mounted in my mini rack at some point, though.
I ended up making my own dashboard app, not as detailed as Scrutiny; I just wanted a central place that linked to all my internal apps so I didn't have to remember them all, plus a simple status check. I wrote mine in Go because the main ones I found were NodeJS and were huge resource hogs.
I'm using a refurbished M4 Mac mini connected to a UniFi NAS Pro 8; super fun and straightforward. It feels like I only have to do the tinkering I want to do.
Why are you using restic, when TrueNAS offers native solutions to backup your data elsewhere?
Have a look at Headscale to avoid the cost of Tailscale for small home setups.
Get yourself a custom domain and just use subdomains. Nothing says a public DNS server has to return public IPs. Bonus: you can get HTTPS certs with certbot and a DNS challenge.
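As a concrete sketch, with one of certbot's DNS plugins (Cloudflare here, purely as an example; the domain and credentials path are placeholders) a wildcard cert is a single command:

```shell
# DNS-01 challenge: certbot creates a TXT record via the provider's API,
# so the hosts never need to be reachable from the public internet
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d example.com -d '*.example.com'
```

Renewal then runs unattended, since the challenge never touches your LAN.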
I learned about Mealie.io, thanks.
This is extremely light. Not a bad setup, but I mean, it's like 1% of a typical homelab.
Hard pass whenever you host long-term storage without ECC memory.
>Because all of my services share the same IP address, my password manager has trouble distinguishing which login to use for each one.
In Bitwarden you can configure the URI match detection, and switching from the default to "Starts with" is what I do when I find it matching the wrong entries. So for this case, just make sure the URL for each service includes the port number and switch all the items that are mismatching to "Starts with". Though it does pop up a big scary "you probably didn't mean to do this" warning when you switch to "Starts with"; it would be nice to be able to turn that off.