Week in Review: Jan 26 - Feb 1, 2026
This is the first weekly recap for GriswoldLabs, and it feels right that the first week was about building the foundation. Everything that happens from here — the automations, the apps, the AI experiments — sits on top of what went into place this week. So let’s walk through it.
The Site Goes Live
The website you’re reading this on launched this week. It’s built with Astro 5 and Tailwind 4, deployed to Cloudflare Pages. Static site generation, no server-side rendering, no database — just markdown files that compile into fast, clean HTML.
I went with Cloudflare Pages over self-hosting for a simple reason: I don’t want my public-facing site to depend on my homelab’s uptime. The homelab is for experimentation. The website should just work. Cloudflare’s edge network handles caching and delivery, and the deploy pipeline is straightforward — build locally, push with Wrangler, purge the cache. Done.
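That three-step pipeline is short enough to live in one script. A minimal sketch — the project name, zone ID, and token variables are placeholders, not my actual setup:

```shell
#!/usr/bin/env sh
set -eu

# 1. Build the static site locally (Astro outputs to ./dist by default)
npm run build

# 2. Push the build output to Cloudflare Pages with Wrangler
npx wrangler pages deploy ./dist --project-name=griswoldlabs

# 3. Purge Cloudflare's edge cache so the new build is served immediately
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/purge_cache" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```

The cache purge matters more than it looks: without it, the edge can keep serving the previous build for a while after a deploy.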
The first blog posts went up covering some of the technical work from the week: Home Assistant automation patterns, the V100 GPU setup, local AI inference with Ollama, and the multi-worker Claude Code configuration. More on all of those below.
64GB of VRAM on a Budget
The headline hardware addition this week was two Tesla V100-PCIE-32GB GPUs going into the Unraid server. That’s 64GB of VRAM total, which opens up serious local AI inference without paying per-token to a cloud API.
The V100 is a 2017 datacenter card. You can pick them up used for a fraction of what modern consumer GPUs cost, and for LLM inference — where you need VRAM more than you need bleeding-edge tensor cores — they’re hard to beat on price-per-gigabyte. The tradeoff is no NVENC (so no hardware video encoding) and higher power draw than something like an RTX 4060. But for running large language models locally, the math works out.
With Ollama deployed on top of the V100s, I can run models locally for development, automation, and experimentation. No API keys, no rate limits, no data leaving the network. The initial setup went smoothly — Ollama handles multi-GPU inference well, and having 64GB of VRAM means I can load substantial models without aggressive quantization.
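To make "no API keys, no rate limits" concrete, here's what talking to a local Ollama instance looks like — a minimal Python sketch against Ollama's `/api/generate` endpoint, assuming the default port; the model name is whatever you've pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama instance and return the completion."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

No auth header, no usage meter — the only limit is how fast the V100s can push tokens.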
Home Assistant Takes Shape
Home Assistant has been running on my network for a while, but this week I put real structure around the configuration. The key decisions:
YAML-based automations over the UI editor. The HA visual editor is fine for quick experiments, but for anything I want to version control, review, or share across workers, YAML files in a git repo are the way to go. I split automations, scripts, switches, and sensors into their own directories. Each file is a self-contained unit that can be reviewed in isolation.
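Home Assistant supports this split natively through include directives in `configuration.yaml`. A sketch of the layout (the directory names are my own choices):

```yaml
# configuration.yaml — each domain loads from its own directory
automation: !include_dir_merge_list automations/
script: !include_dir_merge_named scripts/
switch: !include_dir_merge_list switches/
sensor: !include_dir_merge_list sensors/
```

`!include_dir_merge_list` merges files that each contain a YAML list (automations, switches, sensors), while `!include_dir_merge_named` merges dictionaries keyed by name, which is how scripts are defined.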
Wake-on-LAN switches were one of the first practical additions. I have machines on the network that don’t need to run 24/7, but I want to be able to bring them up from HA without walking to a keyboard. WoL switches let me treat power state like any other toggle in a dashboard.
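A WoL switch is a few lines of YAML using Home Assistant's built-in `wake_on_lan` platform — the MAC, IP, and shutdown command here are placeholders:

```yaml
# switches/workstation.yaml — MAC and host are placeholders
- platform: wake_on_lan
  name: "Workstation"
  mac: "AA:BB:CC:DD:EE:FF"
  host: "192.168.1.50"  # optional: pinged to report on/off state
  turn_off:             # optional: WoL can only power ON; off needs its own action
    service: shell_command.workstation_shutdown  # hypothetical shell_command
```

Without the optional `host`, the switch is send-only; with it, HA pings the machine so the toggle reflects actual power state.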
UniFi Protect integration got fixed and configured. The UniFi cameras were already on the network, but the HA integration needed some attention to pull in feeds and motion events reliably. That’s working now, which means camera status and motion detection can participate in automations.
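Once the integration exposes motion as binary sensors, camera events become ordinary automation triggers. A sketch — the entity IDs are hypothetical stand-ins for whatever names the integration assigns:

```yaml
# automations/driveway_motion.yaml — entity IDs are hypothetical
- alias: "Lights on driveway motion at night"
  trigger:
    - platform: state
      entity_id: binary_sensor.driveway_motion
      to: "on"
  condition:
    - condition: sun
      after: sunset
  action:
    - service: light.turn_on
      target:
        entity_id: light.front_porch
```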
The first dashboard went up too — nothing fancy, just the essentials organized into a single view. Dashboards tend to evolve over time as you figure out what you actually check versus what you thought you’d check.
Infrastructure Plumbing
A few infrastructure pieces came together this week that aren’t glamorous but matter a lot:
Self-hosted Gitea is now running on Unraid. Every project, configuration file, and script lives in a git repo. Having a local git server means I’m not dependent on GitHub for day-to-day development, and sensitive configurations (HA automations with network details, deployment scripts) stay on my network.
Cloudflare Tunnel replaced the traditional reverse proxy approach for external access. Instead of opening ports and managing certificates, the tunnel creates an outbound connection from my network to Cloudflare’s edge. No inbound firewall rules, no dynamic DNS, no certificate renewal headaches. It’s the cleanest way I’ve found to expose specific services externally while keeping everything else locked down.
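The "expose these, lock down everything else" behavior comes from the tunnel's ingress rules. A sketch of a `cloudflared` config — hostnames, tunnel name, and which services get exposed are placeholders:

```yaml
# config.yml for cloudflared — names and hostnames are placeholders
tunnel: homelab
credentials-file: /etc/cloudflared/homelab.json
ingress:
  - hostname: git.example.com
    service: http://localhost:3000   # Gitea's default port
  - hostname: home.example.com
    service: http://localhost:8123   # Home Assistant's default port
  - service: http_status:404         # catch-all: anything else gets a 404
```

The final catch-all rule is required, and it's also the security model in miniature: a service is reachable only if you wrote a rule for it.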
Claude Code multi-worker setup was an interesting engineering challenge. I have multiple Claude Code sessions running inside a single Docker container, coordinated through a file-based locking system. Each worker claims a project before editing it, checks for conflicts, and releases the lock when done. It’s simple — just markdown files that workers read and write — but it prevents the chaos of two AI agents editing the same codebase simultaneously.
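The claim/release protocol can be sketched in a few lines of Python. This is my illustration of the idea, not the actual implementation — the lock directory and file format are invented for the example:

```python
import os

LOCK_DIR = "locks"  # shared directory visible to every worker

def claim(project: str, worker: str) -> bool:
    """Atomically claim a project by creating its lock file.

    os.O_CREAT | os.O_EXCL makes creation atomic: if two workers
    race for the same project, exactly one wins.
    """
    os.makedirs(LOCK_DIR, exist_ok=True)
    path = os.path.join(LOCK_DIR, f"{project}.md")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another worker already holds the lock
    with os.fdopen(fd, "w") as f:
        f.write(f"# Lock\n\nheld by: {worker}\n")
    return True

def release(project: str) -> None:
    """Release a claimed project so other workers can edit it."""
    os.remove(os.path.join(LOCK_DIR, f"{project}.md"))
```

The atomicity lives entirely in the filesystem: no daemon, no database, and the lock files are human-readable markdown you can inspect when something looks stuck.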
The App Development Philosophy
I’ve had WiFi Share on the Google Play Store for a while now (currently at v1.2.2). This week I formalized the development philosophy that guided that app and will guide everything going forward: privacy-first, offline-first, local storage, no cloud dependency.
Every app I build should work without an internet connection. User data stays on the device. No accounts, no sign-ups, no analytics SDKs phoning home. If an app needs network access for its core function (like sharing a QR code), that’s fine — but the default state is offline and functional.
This isn’t just an ideological stance. It’s practical. Apps that don’t depend on a backend don’t break when a server goes down. Apps that don’t collect data don’t need privacy policies longer than the app itself. Apps that work offline work everywhere — on planes, in basements, in rural areas with spotty coverage.
What’s Next
Week one was about laying groundwork. The server has GPUs. The AI inference stack is running. Home Assistant has structure. The website is live. Version control is local. External access is secure.
Next week I want to push into more Home Assistant automations — the kind that actually make daily life easier rather than just proving the system works. I also want to start exploring what’s possible with local AI inference beyond just chat — things like voice processing, automated summaries, and integration with the home automation layer.
The foundation is solid. Time to build on it.