How would a new format be backwards-compatible? At least JPEG-XL can losslessly compress standard jpg for a bit of space savings, and servers can choose to deliver the decompressed jpg to clients that don't support JPEG-XL.
Also from Wikipedia:
Computationally efficient encoding and decoding without requiring specialized hardware: JPEG XL is about as fast to encode and decode as old JPEG using libjpeg-turbo
Being a JPEG superset, JXL provides efficient lossless recompression options for images in the traditional/legacy JPEG format that can represent JPEG data in a more space-efficient way (~20% size reduction due to the better entropy coder) and can easily be reversed, e.g. on the fly. Wrapped inside a JPEG XL file/stream, it can be combined with additional elements, e.g. an alpha channel.
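The round-trip described above can be sketched with the libjxl reference tools (assuming cjxl/djxl are installed; filenames are illustrative):

```shell
# Losslessly transcode an existing JPEG into JPEG XL; for JPEG input,
# cjxl keeps the original DCT coefficients instead of re-encoding.
cjxl photo.jpg photo.jxl --lossless_jpeg=1

# Later, reconstruct the original JPEG for clients without JXL support.
djxl photo.jxl restored.jpg

# The reconstruction should be byte-identical to the input.
cmp photo.jpg restored.jpg && echo "bit-identical"
```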
I didn't mean that bash has no local variables, but rather that if you want to use a function as an actual function (getting a value back) without capturing stdout, you need variables that are scoped across your functions, which usually means global or at least effectively global.
Bash has its upsides too, like the fact that it has arrays/lists and dictionaries/hashmaps. In my opinion it gets iffy, though, when you need to do stuff with IFS; at that point one might be better off just using specialized tools.
Not saying working bash isn't good enough, but it can break in very surprising ways is my experience.
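To make the IFS point concrete, a minimal sketch (variable names here are made up): word splitting hinges on IFS, and the arrays/associative arrays mentioned above look like this:

```shell
#!/usr/bin/env bash
# Split a colon-separated string into an array by setting IFS only
# for the read builtin, so the changed IFS doesn't leak out.
record="a:b:c"
IFS=: read -ra fields <<< "$record"
echo "${fields[1]}"       # b

# Associative array (bash 4+): a dictionary/hashmap.
declare -A ports=([http]=80 [https]=443)
echo "${ports[https]}"    # 443
```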
Not sure I'd call what bash has functions. They're closer to subroutines in BASIC than functions in other languages, in that you can't return a value from them (they can only return an exit code, though you can capture their stdout and stderr). And capturing stdout means running them in a subshell. It's one of the reasons I don't really like bash: you're forced into globally or at least broadly scoped variables. Oh, and I have no clue right now how to find where in your pipe you got a non-zero exit code.
It's not a big problem for simple scripting, but it makes things cumbersome once you try to do more.
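For what it's worth, there are partial workarounds for both complaints (a sketch; the function and variable names are made up): bash 4.3+ namerefs let a function assign into a caller's variable without a subshell, and PIPESTATUS records the per-stage exit codes of the last pipeline.

```shell
#!/usr/bin/env bash
# Return a value via a nameref instead of capturing stdout; the
# function runs in the current shell, so no subshell is forked.
sum_into() {
  local -n out=$1        # nameref to the caller's variable (bash 4.3+)
  local a=$2 b=$3
  out=$(( a + b ))
}

sum_into result 2 3
echo "$result"            # 5

# Which stage of a pipe failed? PIPESTATUS holds one exit code per
# element of the most recent pipeline.
false | true
echo "${PIPESTATUS[@]}"   # 1 0
```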
In all seriousness though, the core of the technical stack has become very robust in my opinion (DNS being the exception). From a hobbyist's perspective, things work much better than when the Web was still young. I can run multiple sites (some of them being what are today called apps) on a domain with subdomains, everything fast, HTTP3-capable, secured via valid free TLS certs, reverse proxied, all of that running on a system deployed in minutes...
If you focus on the part of the Internet that you have control over, it's a lot better than back in the simple days.
A similar story happened to me literally yesterday. I wanted a new vacuum, saw that a construction store chain with a store nearby had some on offer, researched them for a while on my phone, went and bought one, used it.
Then later, I get ads for that exact model and some others from that exact store on my phone while browsing for something completely unrelated.
Yeah, no system can know that I already bought something offline, but still...
I think this is a huge release just because of accessibility; that's always been a pain point (read: basically impossible) with LaTeX. I've heard ConTeXt is better there, but I never got into it. typst, on the other hand, is very approachable and makes a lot of sense.
While I don't need accessibility much myself these days, it's basically a requirement for use in the public sector here, in the form of PDF/UA. Which I guess is the main motivation.
Looking forward to trying it out when it hits my repositories, which should be soonish.
Another option is docbook, but I never particularly enjoyed working with that...
Similarly here. Have an Odroid with that platform, it wasn't cheap but it came with several advantages:
4 SATA ports in addition to the M.2 slot
Intel QSV
2 x 2.5 Gbit Ethernet (I only have gigabit at home though)
Very powerful machine for the power usage. I ran a really old Athlon before (from 2010 or so, retrofitted with 16GB RAM) that did most stuff just fine, but I wanted some transcoding capability and possibly a smaller case.
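For the QSV transcoding part, a hedged sketch of what that looks like with ffmpeg (assuming an ffmpeg build with Quick Sync support and a visible Intel GPU; filenames are illustrative):

```shell
# Hardware-accelerated H.264 -> HEVC transcode via Intel Quick Sync.
# Requires an ffmpeg built with QSV support; quality value is illustrative.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
       -c:v hevc_qsv -global_quality 25 output.mp4
```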
It's too funny to me that Arch of all distributions attracts the thigh-high/Unix-socks crowd (for lack of a better word). Nothing about Arch stands out to me in that regard; there's no social statement or anything, and when I was more active in the community, it wasn't known for that.
I was deep enough into Arch to run my own private repository using aurutils, but no thighs :(
I don't run a reverse proxy only because I have a single public IPv4 address, but that's probably the biggest benefit.
In general, I'd say reverse proxies make things somewhat easier to manage, especially when it comes to TLS: no need for every service to implement it itself.
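As a sketch of what that looks like in practice (an illustrative nginx server block; hostname, port, and cert paths are made up): the proxy terminates TLS once and forwards plain HTTP to the backend, so the service behind it never touches certificates.

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Certs live in one central place (e.g. from a free ACME/Let's Encrypt setup)
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Backend speaks plain HTTP on localhost; no TLS code needed there
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```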