Week in Review: Feb 9 - Feb 15, 2026
This was one of those weeks where every project touched something else. What started as “let me automate blog posting” turned into an IP audit, which turned into scrubbing old posts, which reminded me I still had Node-RED running for no good reason. Pull one thread and the whole sweater unravels. Here’s the rundown.
The Blog Pipeline
I’ve been meaning to automate the blog for a while. Writing recaps by hand is fine, but if I’m already logging everything I do in a changelog, why not let the machines do the heavy lifting?
The pipeline works like this: a bash script pulls recent changelog entries, sanitizes any sensitive data (more on that in a second), and feeds them to a lightweight local model that decides whether there’s enough interesting content to justify a post. If the answer is yes, it hands the material off to Llama 3.3 70B running on my local Ollama instance for the actual writing. My personal assistant bot orchestrates the whole thing and sends me a notification when the draft is ready.
It’s not fully hands-off — I still review and edit before publishing — but it cuts the “staring at a blank page” problem down to zero. The hardest part was getting the prompt engineering right so the output doesn’t sound like a corporate press release.
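In sketch form, the “is there enough content?” gate in front of the writing model looks something like this. The file layout, date format, and threshold are made up for illustration; the real script differs.

```shell
#!/usr/bin/env bash
# Sketch of the pipeline's content gate. Assumes a line-oriented changelog
# where each entry starts with an ISO date; MIN_ENTRIES is a placeholder.
set -euo pipefail

MIN_ENTRIES=5   # hypothetical gate: below this, skip the week

# Count entries on or after a cutoff date. A plain string compare works
# because ISO dates sort chronologically.
count_recent() {
  awk -v cutoff="$1" '$1 >= cutoff' "$2" | grep -c . || true
}

# Demo with a throwaway changelog.
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
2026-02-10 migrated the last Node-RED flows to HA
2026-02-12 scrubbed internal IPs from old posts
2026-02-14 rebuilt the Plex database
2026-01-03 ancient entry, outside the window
EOF

n="$(count_recent 2026-02-09 "$tmp")"
echo "recent entries: $n"
# If n >= MIN_ENTRIES, the real script pipes the sanitized entries to
# `ollama run llama3.3:70b` with a drafting prompt, then notifies me.
```

The gate is deliberately dumb: a line count, not a quality judgment. The lightweight model does the actual “is this interesting?” call; this just avoids waking it up for an empty week.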
The IP Problem
Building the blog pipeline forced me to confront something I’d been ignoring: real internal IP addresses scattered across existing blog posts. A quick grep turned up 15 instances of actual internal IPs — subnet addresses, specific host IPs, port numbers. Not catastrophic, but not great either.
I scrubbed all of them down to generic examples like 10.0.x.x and built the sanitization directly into the pipeline. Now any changelog entry that references a real internal address gets automatically cleaned before it ever hits the writing model. One less thing to worry about.
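A minimal version of that sanitize step, assuming GNU sed; these patterns mask the RFC 1918 private ranges and are illustrative, not the pipeline’s exact rules.

```shell
# Mask private addresses before the text ever reaches the writing model.
# Patterns are illustrative; real rules may cover ports and hostnames too.
sanitize() {
  sed -E \
    -e 's/\b10\.[0-9]+\.[0-9]+\.[0-9]+\b/10.0.x.x/g' \
    -e 's/\b172\.(1[6-9]|2[0-9]|3[01])\.[0-9]+\.[0-9]+\b/172.16.x.x/g' \
    -e 's/\b192\.168\.[0-9]+\.[0-9]+\b/192.168.x.x/g'
}

echo "moved the DB from 192.168.1.42 to 10.20.30.40" | sanitize
```

Running the filter over the whole changelog on the way in means a leak requires two mistakes, not one.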
Goodbye, Node-RED
I had 15 Node-RED flows. Sounds like a lot of automation, right? Turns out only 5 of them were doing anything real. The rest were test flows, abandoned experiments, or duplicates of things I’d already rebuilt elsewhere.
The five real automations were:
- A fence/light timer (turn on lights at sunset if certain conditions are met)
- Home zone tracking (who’s home, who isn’t)
- Three actionable notification flows (garage door left open, front door alerts, lamp control)
All five converted cleanly to native Home Assistant YAML automations. The YAML versions are more maintainable, version-controlled, and don’t depend on a separate container. Once I confirmed everything was working, I killed the Node-RED container entirely.
Dependency reduction is one of those things that doesn’t feel productive in the moment but pays off every time you don’t have to debug a flow editor crash at 2 AM.
Home Assistant: The Update That Fought Back
Updated Home Assistant from 2026.1.3 to 2026.2.2, which also meant updating 13 individual components. The update itself went smoothly. The restart did not.
HA dropped into recovery mode — only 39 out of 319 components loaded. That’s the kind of number that makes your stomach drop. After some digging, I found two breaking changes:
- System Monitor now requires its own dedicated configuration file instead of being inline in configuration.yaml. Not a huge deal, but zero warning in the release notes I’d read.
- The trusted proxies format changed. The old format was silently accepted but not actually applied, which meant my reverse proxy setup was technically broken until I reformatted the entries.
Both fixes took about 20 minutes once I identified them, but finding them in the first place took considerably longer. The lesson, as always: read the full breaking changes list, not just the highlights.
Plex Database Surgery
This one was fun in the way that defusing a bomb is fun.
My Plex database — a 462MB SQLite file — developed corruption. The symptoms: “malformed disk image” errors, rowid entries out of order, and 18+ missing FTS (full-text search) index rows. Plex was mostly functional but search was broken and certain library operations would throw errors.
The fix required using Plex’s own ICU-aware SQLite binary (not the system SQLite, which lacks the ICU extension Plex depends on) to dump the entire database to SQL and restore it into a clean file. The process:
- Stop Plex
- Use Plex’s bundled SQLite to run .dump on the corrupted database
- Pipe the dump into a fresh database file
- Verify integrity with PRAGMA integrity_check
- Swap in the new file and restart
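The mechanics look like this, shown here with stock sqlite3 on a throwaway database; the real repair swaps in Plex’s bundled “Plex SQLite” binary at the dump and restore steps, since the library schema needs its ICU tokenizer.

```shell
# Dump-and-rebuild demo on a disposable database. Paths are illustrative.
db=/tmp/demo-library.db
rebuilt=/tmp/demo-library-rebuilt.db
rm -f "$db" "$rebuilt"

sqlite3 "$db" "CREATE TABLE media(id INTEGER PRIMARY KEY, title TEXT);
               INSERT INTO media(title) VALUES ('alpha'), ('beta');"

sqlite3 "$db" .dump > /tmp/demo-dump.sql        # 1. dump to plain SQL
sqlite3 "$rebuilt" < /tmp/demo-dump.sql         # 2. replay into a fresh file
sqlite3 "$rebuilt" "PRAGMA integrity_check;"    # 3. verify before swapping in
```

The dump-and-replay approach works even when the original file reports corruption, because .dump reads whatever rows are still reachable and the replay writes a structurally clean file.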
Root cause was almost certainly an unclean shutdown — either a power event or a container getting killed mid-write. SQLite is remarkably resilient, but it’s not magic. I’ve since added the Plex container to my UPS-aware shutdown sequence.
Security Hardening
Did a security pass on my personal assistant bot. The big items:
- Hardcoded API keys moved to environment variables. It’s embarrassing to admit this was still the case, but better late than never. Every key, token, and secret now lives in .env files that aren’t checked into version control.
- Per-user rate limiting added to prevent any single user from hammering the API endpoints.
- Circuit breakers for external API calls, so a downstream service being slow doesn’t cascade into the bot becoming unresponsive.
- PDF document upload handler — not strictly security, but it was part of the same work session.
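The .env pattern in its simplest form, assuming plain KEY=value lines with no spaces or quoting to worry about; BOT_API_KEY is a made-up name for illustration, and a real secrets manager handles the edge cases better.

```shell
# Write a demo .env file (the real one is gitignored, never committed).
cat > /tmp/demo.env <<'EOF'
BOT_API_KEY=example-key-not-a-real-one
EOF

set -a            # auto-export everything defined while sourcing
. /tmp/demo.env
set +a

echo "key loaded: ${BOT_API_KEY:+yes}"
```

The set -a / set +a pair is what makes sourcing enough: every assignment in the file is exported to child processes without prefixing each line with export.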
Leantime for Project Management
Deployed Leantime as a proper project management tool. I’d been tracking tasks in markdown files and my head, which works until it doesn’t. Built a CLI helper script so I can create, move, and close tasks without leaving the terminal. Imported about 30 existing tasks from my various TODO lists. Having a real kanban board with proper state tracking already feels like an upgrade.
Website Polish
Knocked out 11 smaller improvements to the GriswoldLabs website. The highlights:
- Standardized data formats across pages
- Refined the theme color palette
- Added scroll-triggered animations
- Built a scroll-to-top button
- Redesigned the 404 page (it was embarrassingly plain)
None of these are individually exciting, but collectively the site feels significantly more polished.
Other Bits
- Ollama context length fix — Tracked down a bug where my local LLM instance was pegging the CPU in a reload loop. The culprit was a context length misconfiguration that caused the model to repeatedly attempt loading with impossible memory requirements. Setting explicit context length parameters fixed it.
- Smart panel research — Started poking at the gRPC protocol used by a smart electrical panel. Early days, but the goal is local control without depending on a cloud service. Reverse engineering proprietary protocols is a hobby I probably shouldn’t enjoy as much as I do.
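For the Ollama fix, pinning the context window explicitly can be done via a Modelfile; the model name and the 8192 value below are placeholders, not the settings from my actual fix.

```shell
# Build a Modelfile that sets an explicit context length (num_ctx).
cat > /tmp/Modelfile <<'EOF'
FROM llama3.3:70b
PARAMETER num_ctx 8192
EOF

# Guarded so this sketch is harmless on machines without Ollama installed.
if command -v ollama >/dev/null 2>&1; then
  ollama create llama3.3-ctx -f /tmp/Modelfile || true
fi
```

An explicit num_ctx keeps the server from negotiating a context size it can’t actually allocate, which is exactly the failure mode behind the reload loop.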
Looking Ahead
Next week I want to finish the smart panel protocol research and see if I can get basic read access to circuit-level data. The Node-RED elimination freed up some resources I’d like to reallocate. And I have a growing backlog of Home Assistant automations that have been waiting for the HA update to settle before I build them out.
The blog pipeline should also start earning its keep — if it works as designed, next week’s post will be the first one where the first draft is fully automated. We’ll see how that goes.
Enjoyed this post?
Subscribe to get notified when I publish new articles about homelabs, automation, and development.
No spam, unsubscribe anytime. Typically 2-4 emails per month.