However, accessibility features provided by third-party applications may be worse in some aspects. Please open a bug report if you have any special requirements that we don’t cover yet! This is an active topic we’re very interested in improving.
I bought a Model 3 SR+ in 2019 because it was pretty much the only decent option, and I'm still driving it.
BYD and other Chinese brands were not available here yet and German manufacturers were asleep at the wheel.
The best coming out of Germany at that time were EVs built on repurposed ICE chassis, with all the flaws that brings. The Leaf lacked liquid cooling for its battery.
The best alternative at that time was the classic Hyundai Ioniq, but it had a 28 kWh battery whereas the Model 3 SR+ had a 52 kWh battery for 10.000€ more.
Since you own an e-Golf, let me put some numbers on this (e-Golf left, Model 3 SR+ right; rough range math after the list):
Efficiency: 168 Wh/km vs 146 Wh/km
Battery: 32 kWh vs 52 kWh
Fast charging: 39 kW vs 105 kW (later patched to 170 kW peak)
Acceleration: 9.6s vs 5.6s 0-100
Weight: 1615 kg vs 1700 kg
Price: 32.000€ vs 45.000€
Charging network: whatever Ionity was doing vs Superchargers
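Putting efficiency and battery together gives the range difference, which is what those numbers mean in practice. A quick back-of-envelope calculation using only the figures from the list above:

```python
# Rough range implied by the numbers above: battery (kWh) / efficiency (Wh/km).
# Real-world range varies with speed, temperature, and usable capacity.
cars = {"e-Golf": (32, 168), "Model 3 SR+": (52, 146)}

for name, (kwh, wh_per_km) in cars.items():
    km = kwh * 1000 / wh_per_km
    print(f"{name}: ~{km:.0f} km")
# e-Golf: ~190 km, Model 3 SR+: ~356 km
```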
In addition to these guys knowing what they are doing and pushing firmware updates straight through Home Assistant, every purchase also supports the Open Home Foundation.
I'm pretty sure you can achieve similar performance with cheaper dongles.
When they first released ZHA, the interface was very barebones compared to Z2M. I saw the current Home Assistant interface in their stream on the ZBT-2, and it looks a lot more like a proper Zigbee interface now.
I don't think there is going to be much of a performance difference between ZHA and Z2M, mostly just how you interact with it.
I have been waiting for them to release the Zigbee equivalent to their ZWA-2. Ordered one.
Does anybody use Zigbee directly in Home Assistant? I'm currently still on Zigbee2MQTT but I'm wondering if I should switch over to the Zigbee integration in Home Assistant.
I finally moved my mail server from Hetzner to my homelab.
Pretty smooth sailing so far. For now I'm using Scaleway for outgoing mail since I can't set a PTR record here, but I might just try sending a few without one to see how other providers react.
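If you want to see what receiving servers see before committing, a reverse lookup tells you whether a PTR record exists for your IP. A minimal sketch using only Python's standard library (the address below is a placeholder from the documentation range):

```python
import socket

ip = "203.0.113.25"  # placeholder; substitute your mail server's public IP

try:
    hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    print(f"PTR for {ip}: {hostname}")
except socket.herror:
    print(f"No PTR for {ip}; expect some providers to reject or junk your mail")
```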
Just like everyone doing open-heart surgery on dummies is fine, everyone self-hosting in their own network is fine. You can buy hardware right now that connects to power and Wi-Fi, and you are self-hosting.
I bought Resident Evil 0 on GOG yesterday but Heroic wouldn't download the game for some reason (stuck at 0%). Refunded, got it on Steam for cheaper and it launched right away.
Sometimes I purchase on GOG out of principle and for some reason they always punish me for it.
Not sure if it counts as "budget friendly", but the best and cheapest way right now to run decently sized models is a Strix Halo machine like the Bosgame M5 or the Framework Desktop.
Not only does it have 128GB of unified RAM/VRAM, it also sips power: around 10W idle and 120W at full load.
It can run models like gpt-oss-120b or glm-4.5-air (Q4/Q6) at full context length and even larger models like glm-4.6, qwen3-235b, or minimax-m2 at Q3 quantization.
Otherwise, running these models currently means putting 128GB of RAM in a server mainboard or paying the Nvidia tax for an RTX 6000 Pro.
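Rough arithmetic for why 128GB is the magic number here: quantized weights take roughly 3.5 to 6.5 bits per parameter depending on the quant. A back-of-envelope sketch (parameter counts and bits-per-weight are approximations, and the KV cache for long context comes on top):

```python
# Approximate weight size in GB: parameters * bits-per-weight / 8.
models = {
    "gpt-oss-120b": 120e9,  # approximate parameter counts
    "glm-4.5-air": 106e9,
    "qwen3-235b": 235e9,
}
quants = {"Q3": 3.5, "Q4": 4.5, "Q6": 6.5}  # rough average bits per weight

for name, params in models.items():
    for quant, bits in quants.items():
        gb = params * bits / 8 / 1e9
        verdict = "fits" if gb < 128 else "too big"
        print(f"{name} @ {quant}: ~{gb:.0f} GB -> {verdict} in 128GB")
```

This lines up with the claims above: ~235B parameters only squeeze in at Q3 (~100GB), while ~120B models fit comfortably at Q4.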
I can't speak for client capabilities on Apple devices, but what's your server hardware? CPU or GPU transcoding?
I have an AMD GPU in my server and have no issues transcoding AV1 and H265 for my less capable clients.
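If you want to sanity-check hardware transcoding outside of the media server, a one-off ffmpeg run over VAAPI is the quickest test. A sketch assuming ffmpeg is built with VAAPI support; the file names are placeholders and the render node path is the usual one for a single AMD GPU:

```python
import subprocess

# Decode the source on the GPU and encode H.264 for weaker clients.
cmd = [
    "ffmpeg",
    "-hwaccel", "vaapi",
    "-hwaccel_device", "/dev/dri/renderD128",  # typical AMD render node
    "-hwaccel_output_format", "vaapi",         # keep frames on the GPU
    "-i", "input.mkv",                         # placeholder input
    "-c:v", "h264_vaapi",                      # VAAPI H.264 encoder
    "-c:a", "copy",
    "output.mp4",                              # placeholder output
]
subprocess.run(cmd, check=True)
```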
You can also set up Jellyfin in parallel to Plex and give it a whirl.