
Week in Review: Feb 16 - Feb 22, 2026

5 min read · By Charles

This was one of those weeks where everything seemed to happen at once. A major smart panel integration finally shipped, my media transcoding pipeline got unstuck after weeks of silent failures, and I spent an entire day down a rabbit hole designing a custom battery pack for a dashcam project. Let’s get into it.

Span Gen3 Finally Lands in Home Assistant

The big milestone this week was shipping the Span smart panel Gen3 gRPC integration and publishing it to PyPI. For context, the Span panel monitors every circuit in my electrical panel, and the Gen3 firmware moved from a REST API to gRPC — which meant the existing integration needed a significant rewrite.

The tricky part was getting the circuit mapping working correctly. Once that was sorted, Home Assistant picked up 143 new Gen3 entities. But the migration wasn’t just about adding new stuff — I also had to clean house. There were 104 stale Gen2 entities cluttering the system, and about 100 Gen3 entities needed renaming to match my naming conventions.

One gotcha worth documenting: deploying the updated integration required stopping HA Core first, then doing the update, then restarting. If you just restart HA with the new integration in place, the .storage files get overwritten and you lose your entity customizations. Learned that one the fun way.

ESP32 Dashcam: The Battery Problem

I spent a full day researching battery options for my ESP32 dashcam project. The core challenge is Florida — the inside of a parked car can easily hit 150°F in summer, which is brutal on batteries.

After a lot of research, I landed on a 4S LiFePO4 configuration using Headway 38120S 10Ah cells. LiFePO4 chemistry handles heat significantly better than the more common NMC (nickel manganese cobalt) cells. The thermal runaway threshold is much higher, and they degrade less at elevated temperatures.
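The pack numbers work out nicely for automotive use. A quick sanity check of the voltage and energy figures (straight arithmetic from the cell specs above):

```python
# Back-of-envelope numbers for a 4S LiFePO4 pack built from
# Headway 38120S 10 Ah cells (3.2 V nominal, ~3.65 V fully charged).
CELLS_IN_SERIES = 4
NOMINAL_V = 3.2      # LiFePO4 nominal cell voltage
MAX_V = 3.65         # typical full-charge voltage per cell
CAPACITY_AH = 10     # series cells share one capacity

pack_nominal_v = CELLS_IN_SERIES * NOMINAL_V   # 12.8 V -- a natural fit for a 12 V car system
pack_max_v = CELLS_IN_SERIES * MAX_V           # 14.6 V at top of charge
pack_energy_wh = pack_nominal_v * CAPACITY_AH  # ~128 Wh

print(f"nominal {pack_nominal_v} V, full {pack_max_v} V, ~{pack_energy_wh:.0f} Wh")
```

The 12.8 V nominal is one of the quiet wins of 4S LiFePO4: it sits right in the range a vehicle's 12 V electronics already expect.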

The battery management side is a JKBMS with active balancing, paired with a charging circuit that ties into ignition detection. The idea is that the pack charges when the car is running and the BMS handles cell balancing and protection. When the car is off, the dashcam runs on battery with intelligent shutdown thresholds to preserve cell longevity.
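The power policy above boils down to a small state decision. Here's a minimal sketch of that logic — the threshold values and function name are hypothetical placeholders for illustration, not the actual JKBMS configuration:

```python
# Illustrative sketch of the charge/run/shutdown policy described above.
# Thresholds are hypothetical placeholders, not the real BMS settings.
LOW_CELL_CUTOFF_V = 3.0  # stop discharging below this to preserve cycle life

def dashcam_power_state(ignition_on: bool, min_cell_v: float) -> str:
    """Decide what the pack should be doing given ignition state and the lowest cell."""
    if ignition_on:
        # Car running: charge the pack (the BMS handles balancing and protection).
        return "charge"
    if min_cell_v <= LOW_CELL_CUTOFF_V:
        # Parked and a cell is getting low: shut down to protect cell longevity.
        return "shutdown"
    # Parked with healthy cells: keep the dashcam running on battery.
    return "run_on_battery"
```

In practice the BMS and charging circuit enforce this in hardware; the sketch is just the decision table written out.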

I haven’t started building any of this yet — this week was pure research and design. But the bill of materials is locked in and I’m feeling good about the thermal characteristics for automotive use.

Tdarr: Why My Transcoding Was Silently Failing

This one had been bugging me for weeks. Tdarr was running, jobs were queuing up, but nothing was actually getting transcoded. No errors in the logs — just… nothing happening.

The root cause turned out to be embarrassingly simple: the transcoding plugin was configured for NVENC GPU encoding, but my V100s are compute-only cards with no NVENC video-encode hardware. They're designed for AI workloads, not media processing. So every transcode job would silently fail and get requeued.
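A quick way to catch this class of problem up front is to attempt a one-second synthetic NVENC encode and check the exit code. A small sketch of that probe (assuming a standard ffmpeg build on PATH; the function names are mine):

```python
import subprocess

def nvenc_probe_cmd() -> list[str]:
    """A 1-second synthetic test encode: non-zero exit means no working NVENC."""
    return [
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        "-f", "lavfi", "-i", "testsrc=duration=1:size=640x360:rate=30",
        "-c:v", "h264_nvenc", "-f", "null", "-",
    ]

def nvenc_available() -> bool:
    try:
        return subprocess.run(nvenc_probe_cmd(), capture_output=True).returncode == 0
    except FileNotFoundError:  # no ffmpeg on PATH
        return False
```

On the V100 box this would have failed immediately instead of letting jobs requeue forever.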

The fix was swapping to the CPU-based CRF plugin using libx265. I dialed in the settings at sdCRF=19, hdCRF=21, fullhdCRF=22, with ffmpegPreset set to medium and b-frames enabled. One small gotcha: the plugin inputs are camelCase (ffmpegPreset, sdCRF), not snake_case — the plugin just silently ignores incorrectly-cased parameters. Lovely.
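To make the casing gotcha concrete, here are the settings as configured, plus a trivial guard I'd reach for to catch snake_case keys before the plugin silently drops them. The key names and values come from the config above; the validator helper is my own illustration (the b-frames toggle is omitted since I'm only showing the parameters named here):

```python
# The CRF plugin inputs as configured (camelCase matters!).
tdarr_inputs = {
    "sdCRF": 19,
    "hdCRF": 21,
    "fullhdCRF": 22,
    "ffmpegPreset": "medium",
    # "sd_crf": 19,  # snake_case like this would be silently ignored
}

def find_suspect_keys(inputs: dict) -> list[str]:
    """Flag keys containing underscores -- the plugin expects camelCase."""
    return [k for k in inputs if "_" in k]
```

A check like this in a deploy script turns a silent misconfiguration into a loud one.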

With one CPU transcode worker running 24/7, I’m getting about 5 files per day processed and have already saved 3.6GB. Not blazing fast, but it’s steady progress through a library of nearly 29,000 files.

Cache Drive Upgrade

I replaced the two aging Intel 480GB SATA SSDs (960GB total) with a single Crucial T500 4TB PCIe Gen4 NVMe. The performance difference is night and day — Gen4 NVMe versus SATA is not even a fair comparison.

Beyond the speed boost, this freed up two drive slots that I’m earmarking for a planned third V100 GPU. More VRAM means I can run larger quantized models locally, which feeds directly into the AI routing work below.

Share configurations got optimized during the migration: critical shares like appdata and AI projects are set to prefer cache, while bulk media stays on the array.

AI-First Request Routing

I made a fundamental architectural change to my personal assistant bot this week. Previously, requests would hit Claude first and only fall back to the local Ollama instance for simple stuff. I flipped that entirely — now every request hits Ollama first.

The local model (running on those V100s) gets first crack at everything with a full set of 28 tools available. It handles the vast majority of requests perfectly well — status checks, smart home control, quick questions, calculations. Only when a request genuinely needs stronger reasoning does it escalate to Claude.
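The routing shape is simple enough to write down. A minimal sketch — the real escalation heuristics are richer, and all names here are illustrative, not the bot's actual functions:

```python
# Local-first routing in miniature: Ollama gets every request,
# Claude is only called when the local answer falls short.
def route(request, local_llm, cloud_llm, needs_escalation) -> str:
    reply = local_llm(request)            # local model gets first crack
    if needs_escalation(request, reply):  # escalate only for genuinely hard reasoning
        return cloud_llm(request)
    return reply
```

The interesting engineering lives entirely in `needs_escalation`, which is what the tuning work described below is about.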

The key technical change was switching from /api/generate to /api/chat on the Ollama side, which enables native tool calling instead of the hacky prompt-based approach I was using before. I also added system context injection from a set of knowledge files, so the local model has awareness of my setup, preferences, and device names.
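For anyone making the same switch, the request shape for /api/chat with tools looks like this. The tool definition below is a made-up example (the real bot registers 28), and the model name is an assumption — substitute whatever Ollama is serving:

```python
import json

# Shape of an Ollama /api/chat request with native tool calling.
# The tool here is a hypothetical example, not one of the bot's real tools.
chat_request = {
    "model": "llama3.1",   # assumption -- use whatever model Ollama serves
    "stream": False,
    "messages": [
        {"role": "system", "content": "You control the smart home."},
        {"role": "user", "content": "Is the garage door closed?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_entity_state",
            "description": "Look up the current state of a Home Assistant entity",
            "parameters": {
                "type": "object",
                "properties": {"entity_id": {"type": "string"}},
                "required": ["entity_id"],
            },
        },
    }],
}

# POST this to http://localhost:11434/api/chat; tool invocations come back
# structured in message.tool_calls instead of needing to be parsed from raw text.
payload = json.dumps(chat_request)
```

That structured `tool_calls` response is the whole win over the old prompt-based approach: no more regexing tool names out of free text.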

To support this, I deployed a ChromaDB vector store populated with my HA automations, packages, sensor configs, documentation, and ESP32 project files. Semantic search with cosine distance pulls relevant context into prompts, so the model can answer questions about my specific setup without needing to be fine-tuned on it.
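ChromaDB handles the search internally, but it's worth seeing what "cosine distance" ranking actually does. A plain-Python version over toy embeddings (real embeddings have hundreds of dimensions; the two-dimensional vectors here are just for illustration):

```python
import math

# What ChromaDB's cosine ranking does, written out by hand.
def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm  # 0 = same direction, 2 = opposite

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids whose embeddings are closest to the query."""
    return sorted(docs, key=lambda d: cosine_distance(query, docs[d]))[:k]
```

The nearest chunks by this metric get injected into the prompt, which is how the model answers setup-specific questions without fine-tuning.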

Mobile App and Docker Housekeeping

The mobile companion app got a flurry of updates this week — notification reliability improvements, background polling, push notifications via Expo, a web search tool, AMOLED black mode, code block rendering with syntax highlighting, and a conversation management system. A lot of polish work that makes the daily experience much smoother.

On the infrastructure side, I migrated Docker storage from btrfs to overlay2, which meant rebuilding a few containers that didn’t survive the transition. Also hit a BuildKit cache issue that required clearing the build cache entirely before things would build cleanly again.
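For reference, the standard way to pin the storage driver is in /etc/docker/daemon.json (this is the documented setting, though your path may differ on non-standard installs). Existing image layers don't carry over between drivers, which is why the rebuilds were needed:

```json
{
  "storage-driver": "overlay2"
}
```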

What’s Next

Next week I’m planning to dig into the ESP32 dashcam firmware itself now that the battery design is locked in. I also want to get the HA backup strategy automated — it’s verified and working, but still requires manual triggers. The Tdarr pipeline is humming along and I’ll keep an eye on throughput to see if adding a second CPU worker is worth the resource trade-off.

The AI routing changes need some real-world mileage to tune the escalation thresholds. Right now I’m erring on the side of escalating too often, but as I build confidence in the local model’s tool-calling reliability, I’ll tighten that up.
