

It’s mainly Linux Unplugged where that stuff leaks into it. I haven’t heard it on “self hosted” very much.
Calibre is used as a server all the time, see calibre-web.
calibre-web is technically not Calibre and is written and maintained by different people, although it does use the Calibre database (and I believe the database must be created with desktop Calibre initially). But it’s a good option and I highly recommend it.
You just load your books from Calibre (or right through USB if you’re hardcore for some reason) and you’re basically off to the races.
There’s also an OPDS server option with calibre-web that you can use to load books from if you’re using KOReader.
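For reference, calibre-web serves that OPDS feed on the same port as the web UI, under the /opds path, so the catalog URL you add in KOReader looks something like this (hostname is a placeholder, 8083 is calibre-web’s default port):

```
http://your-server:8083/opds
```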
You can also use the Kobo server replacement option with calibre-web, although I personally couldn’t get it to work at the time I tried it. But this will give you a sync option that works like the official Kobo server, which is quite nice.
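If anyone wants to try the Kobo sync route: as I understand it (details may have changed, so check the calibre-web wiki), you enable Kobo sync in calibre-web, generate an auth token in your user profile, and then point the Kobo’s store API at calibre-web by editing .kobo/Kobo/Kobo eReader.conf on the device, roughly like:

```ini
[OneStoreServices]
api_endpoint=http://your-server:8083/kobo/<auth-token-from-calibre-web>
```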
You can also pass --accept-dns=false to the command-line client, although MagicDNS etc. won’t work then.
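On the CLI that’s just:

```sh
# stop Tailscale from managing DNS on this machine;
# MagicDNS names won't resolve here as a result
sudo tailscale up --accept-dns=false
```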
AMDGPU virtio native context is roughly an equivalent to the other options, although the pieces are not all available yet, and it’s Linux guests only as well.
And there’s Venus, but that’s Vulkan-only (though a lot can be done with that alone on Linux guests).
Mailrise combined with an apprise notifier of your choice (I use gotify).
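For anyone curious, the Mailrise side is a small YAML file mapping an SMTP recipient to an Apprise URL; a rough sketch from memory (hostname and token are placeholders, double-check key names against the Mailrise README):

```yaml
# mailrise.conf
configs:
  alerts:
    urls:
      - gotifys://gotify.example.com/YourGotifyAppToken
```

Then anything that only speaks SMTP sends mail to alerts@mailrise.xyz via the Mailrise host, and it pops out as a Gotify notification.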
The other thing is that my libraries are alphabetical in Jellyfin, so “Anime” comes before “Kaiju”, and I truly can’t stand the idea that Godzilla gets sent to the back of the bus.
If you mean the order the libraries are listed in on the web interface, you can change that under “User settings” -> “Home”.
Plex is closed source and gradually being enshittified. You might not leave today, but you should have an exit plan.
One issue I could see is using it not as a second opinion, but as the only opinion. That doesn’t mean this shouldn’t be pursued, but the incentives toward laziness and cost-cutting are obvious.
EDIT: Another potential issue is the AI detection being more accurate for certain groups (e.g. white Europeans), which could result in underdiagnosis in minority groups if the training data set doesn’t include sufficient data for those groups. I’m not sure how likely that is with breast cancer detection, however.
There’s also calibre-web for a self-hosted option with a web interface.
btrbk essentially works that way: it takes read-only snapshots on a schedule and uses btrfs send/receive to create backups.
There’s also snapraid-btrfs, which uses snapshots to help minimise write-hole issues with snapraid by creating parity data from snapshots rather than from the raw filesystem.
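To give an idea of the btrbk side, the config is a small declarative file plus a cron/systemd timer running `btrbk run`; a minimal sketch (paths and retention values are made up, check the btrbk docs for your version):

```
# /etc/btrbk/btrbk.conf
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     no
target_preserve         20d 10w

volume /mnt/pool
  snapshot_dir _btrbk_snapshots
  subvolume data
    target send-receive /mnt/backup/btrbk
```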
Putin and Trump are best friends so you might expect Russia to follow. But perhaps Putin wants to show he’s the dominant one in said relationship.
Not the modern X.com to be clear; Musk just has a weird fetish for the letter X like an edgy teenager.
I’m not arguing in the slightest that FLAC makes an audible difference in most cases for most tracks. However, it just makes sense as an archival format given it’s lossless, which means you can transcode to any other format without generational loss.
This means if there is a massive breakthrough in lossy compression in the future, I can use it for mobile purposes. If you store as lossy, you’re stuck with whatever losses have been incurred, forever.
Could be useful for web articles and scientific papers too (if it could be configured to skip reading out all of the boilerplate and citations).
Store the original library as FLAC, then transcode on the fly (or once, ahead of time, if you don’t want to run something like Navidrome or Jellyfin).
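The transcode-once route is basically a one-liner with ffmpeg (assuming a build with libopus; filenames here are just examples):

```sh
# convert every FLAC in the current directory to 128 kbps Opus, keeping the originals
for f in *.flac; do
  ffmpeg -i "$f" -c:a libopus -b:a 128k "${f%.flac}.opus"
done
```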
The main benefit to lossless is for archival purposes. I can transcode to any format (such as on mobile) without generational quality loss.
And it means if a better lossy format comes out in the future, I can use that without issue.
There are better lossy formats, like opus.
But MP3 still has its place as it’s supported everywhere.
There’s likely a firewall on the system that hosts the Docker services, and Docker’s default bridge rules bypass it when publishing a port. And since Docker’s rules take priority, it can be quite difficult to override them in a reliable way. I personally wish the defaults would just open a corresponding rule in the host firewall, but there might be some complexity I’m missing that makes that challenging.
I personally use host networking to avoid the whole mess, but be aware you’ll most likely have to change the internal ports for a bunch of services, and that’s not always well-documented. And using the container name as the hostname won’t work when referencing other containers; you’ll have to use e.g. localhost:<port number> even when the services are on the same host.
You can do the bind to localhost thing that others have mentioned, as long as the reverse proxy itself is inside the docker network (likely there are workarounds if not).
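For the bind-to-localhost approach, in compose it’s just a matter of prefixing the published port with the loopback address (service name, image and ports below are placeholders):

```yaml
services:
  app:
    image: nginx
    ports:
      - "127.0.0.1:8080:80"   # published on loopback only, not 0.0.0.0
    # alternatively, drop "ports" entirely and use host networking:
    # network_mode: host
```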
It’s not always takedowns either, just the developer deciding to nuke their own repos. Real annoying, although it’s making me more vigilant about forking/mirroring important repos.