𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍

       🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆. 
 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍 

Besides which, Lemmy needs reactions

  • 5 Posts
  • 733 Comments
Joined 3 years ago
Cake day: August 26th, 2022


  • I don’t think these two academics are suffering from disinterest or a lack of subject expertise.

    Perhaps not, but successful academics will also understand and tailor their messaging to their audience, dumbing it down if necessary.

    I think that’s a very purposeful result of US politics

    Complete agreement. So is the international hegemony of the dollar, which is why, a decade ago, the US government had an apoplectic fit when OPEC made noises about accepting payments in, or fixing prices to, something other than the dollar. I don’t think the general public understands just how important it is that so many of the world’s currencies are tied to the value of the dollar, and that, like English, it’s the financial lingua franca.

    I disagree that it’s been overall bad for the US, although I think it’s been extremely unhealthy for the world at large.


  • Any subject can be qualified, and you’re right that more things probably should be.

    There’s capitalism; then, for me, there’s laissez-faire capitalism and regulated capitalism as the two main branches. Somewhere within laissez-faire economics lie libertarianism and anarchy, which are political structures, but whose implementation would presume literally no central control - people trading precious metals, goods, and services directly. On the other branch you have regulated markets that eventually include limited socialism - usually restricted to public infrastructure and the military - and where you start to blend in aspects of communism. And while I’m certain there are technical terms for all of these, I care a little less about economics than I do about sports, which is to say not at all.

    I have my own opinions about what I think is wrong with Capitalism in the US, and what changes I think could fix them, but this is decidedly not my area of expertise and I’m very much a believer in differentiating between opinions and knowledge.

    What you’re seeing, I think, is a limitation of American education combined with DILIGAF - disinterest in becoming enough of a subject expert to use precise terminology. However, I think it’s misplaced to get upset about it; I’m certain medical doctors, aerospace engineers, computer engineers, plumbers, electricians, auto mechanics, classical musicians, voting theory scientists - they all probably mentally tear their hair out a little when talking to laypeople, because we’re all so “imprecise” in our terminology. I think it’s just a consequence of living in a world so complex and varied that it’s not possible for one person to be an expert and use precise terminology when talking about every subject - and this includes economics.


  • It actually is RAID5/6 I’m looking for. Striping for speed isn’t important to me, and simple redundancy at a cost of 1/2 your total capacity isn’t as nice as getting 3/5 of your total capacity while still having drive-failure protection and redundancy.

    I used to go the device mapper and LVM route, but it was an administrative nightmare for a self-hoster. I only used the commands when something went wrong, which was infrequent enough that I’d forget everything between events and need to re-learn it while worrying that something else would fail while I was fixing it. And changing distros was a nightmare. I use the btrfs command enough for other things to keep it reasonably fresh; if it could reliably do RAID5, I’d have RAID5 again instead of limping along with no RAID and relying on snapshots, backups, and long outages when drives fail.
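
    For reference, this is roughly what that setup looks like with plain mkfs.btrfs - a sketch only, with hypothetical device names, and with the usual caveat that the raid5/6 profiles still have known issues (e.g. the write hole):

    ```bash
    # Hypothetical devices; raid5 data + raid1 metadata is a common compromise,
    # but btrfs raid5/6 still has known caveats, so treat this as illustrative.
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/pool

    # Periodic scrubs catch silent corruption before a drive actually dies.
    btrfs scrub start /mnt/pool
    btrfs filesystem usage /mnt/pool
    ```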

    Multi-device is only niche because nothing else supports it yet. I’ll bet once bcachefs becomes more standard (or if, given the main author of the project), you’ll see it a lot more. The ability to use your M.2 but have eventual-consistency replication to one or more slower USB drives without a performance impact will be game changing. I’ve been wondering whether this will be usable with network-mounted shares as level-3 replication. It’s a fascinating concept.



  • Shit, that’s a lot of storage. K.

    I’ve lived on btrfs for years. I love the filesystem. However, RAID has been unreliable for a decade now, with no indication that it will ever be fixed; but most importantly, neither btrfs nor zfs has prioritized multi-device support, and bcachefs does.

    You can build a filesystem from an SSD, a hard drive, and a USB drive, and configure it so that writes and reads go to the SSD first and are eventually replicated to the hard drive, and eventually to the USB drive. All behind the scenes, so you’re working at SSD speeds for R/W even if the USB drive hasn’t yet gotten all of the changes. With btrfs and zfs, you’re working at the speed of the slowest device in your multi-device FS; with bcachefs, you work at the speed of the fastest.
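
    As a rough sketch, the two-tier version of that is set up with bcachefs-tools something like this (device names and labels here are hypothetical, and the full SSD → HDD → USB chain described above would need more than this):

    ```bash
    # Hypothetical devices/labels: group the drives, then point foreground writes
    # and the read cache (promote) at the fast group, background copies at the slow one.
    bcachefs format \
      --label=ssd.nvme0 /dev/nvme0n1 \
      --label=hdd.hdd0  /dev/sda \
      --replicas=2 \
      --foreground_target=ssd \
      --promote_target=ssd \
      --background_target=hdd

    # Multi-device filesystems mount by listing the members joined with colons.
    mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt/pool
    ```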

    There’s a lot in there I don’t know about yet, like: can it be configured s.t. the fastest device is an LRU cache? But from what I’ve read, it’s designed very similarly to L1/L2 cache and main memory.


  • How much is “limited?” I’ve got one of those AMD Ryzen mobile CPU jobs that I bought new, from Amazon, for $300. I added a 2TB M.2 drive for another $100. For a bit over $200 ($230?) you can get a 4TB M.2 NVMe.

    And that’s for fast storage. There are USB3 A and C ports, so nearly unlimited external - slower, but still faster than your WiFi - drives.

    Once bcachefs is reliable, it’ll have staged multi-device caching for the stuff you’re actually using, and background writing to your slower drives. I’m really looking forward to that, but TBH I have all of our media on a USB3 SSD and it’s plenty fast enough to stream videos and music from.


  • I’m only concerned insofar as I don’t know of a good alternative, and really don’t want to spend the time shifting everything to a new system. I have 3 VPSes and 4(? 5?) home computers backing up to B2. The major ones I also have backing up to disk, so really the risk for me is in that gap period while I find and set up a new backup service.

    This will be beyond annoying, but for me not catastrophic. Mainly, I’ve liked B2 - the price, and how easy it’s been to use. I understand the UI; it’s pretty straightforward, and it’s directly supported by a lot of software. It would be a real shame if it went under due to mismanagement.
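
    The “directly supported” part is a big chunk of the convenience - restic, for one, can talk to B2 natively. A sketch, assuming a hypothetical bucket name and credentials in the environment:

    ```bash
    # Hypothetical bucket/paths; restic's B2 backend reads the key ID and
    # application key from the environment.
    export B2_ACCOUNT_ID="<keyID>"
    export B2_ACCOUNT_KEY="<applicationKey>"

    restic -r b2:my-backups:home init        # one-time repository setup
    restic -r b2:my-backups:home backup ~/   # incremental, deduplicated backup
    restic -r b2:my-backups:home snapshots   # list what's stored
    ```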

    Also: another example supporting my theory that one of the major flaws in Capitalism is public trading markets. This shit wasn’t an issue before they went public.


  • It’d be more space efficient to store a qcow2 image of Linux with a minimal desktop and basically only DarkTable on it. The VM format hasn’t changed in decades.

    Shoot. A bootable disc containing Linux and the software you need to access the images, and on a separate track, a qcow2 image of the same, and on a third, just DarkTable. Best case, you pop in the disc & run DarkTable. Or, you fire up a VM with the images. Worst case, boot into Linux. This may be the way I go, although - again - the source images are the important part.
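
    The VM half of that can be as simple as this (hypothetical file names; assumes QEMU is available on whatever machine reads the disc):

    ```bash
    # Hypothetical names. One-time: build a small qcow2 with a minimal distro
    # plus DarkTable, installed from an installer ISO.
    qemu-img create -f qcow2 darktable-archive.qcow2 20G
    qemu-system-x86_64 -m 4G -enable-kvm \
      -drive file=darktable-archive.qcow2,format=qcow2 \
      -cdrom distro-installer.iso

    # Years later: copy the image off the disc and boot it to get a working DarkTable.
    qemu-system-x86_64 -m 4G -drive file=darktable-archive.qcow2,format=qcow2
    ```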

    I’d be careful with using SSDs for long term, offline storage.

    What I meant was, keep the master sidecar on SSD for regular use, and back it up occasionally to a RW disc. Probably with a simple cp -r to a directory with a date. This works for me because my sources don’t change, except to add data, which is usually stored in date directories anyway.

    You’re also wanting to archive the exported files, and sometimes those change? Surely that’s much less data? If you’re like me, you’ll shoot 128xB and end up using a tiny fraction of the shots. I’m not sure what I’d do for that - probably BD-RW. The longevity isn’t great, but it’s by definition mutable data, and in any case the most recent version can easily enough be regenerated as long as I have the sidecar and source image secured.

    Burning the sidecar to disc is less about storage and more about backup, because that data is mutable. I suppose appending a backup snapshot to M-Disc periodically would be boots and suspenders, and frankly the sidecar data is so tiny I could probably append such snapshots to a single disc for years before it all gets used. Although… sidecar data would compress well. Probably just tgz, then, since it’s always existed, and always will, even if gzip has been superseded by better algorithms.
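
    Something like this, for instance - hypothetical paths, and assuming the sidecars are the .xmp files darktable keeps next to the originals:

    ```bash
    # Hypothetical paths. Dated copy to the RW disc for the routine backup...
    cp -r ~/Pictures/sidecars "/mnt/bdrw/sidecars-$(date +%F)"

    # ...and an occasional compressed snapshot for the append-only M-Disc.
    cd ~/Pictures
    find . -name '*.xmp' -print0 | tar czf "sidecars-$(date +%F).tgz" --null -T -
    ```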

    BTW, I just learned about the b3 hashing algorithm (about which I’m chagrined, because I thought I kept an eye on the topic of compression and hashing). It’s astonishingly fast - that’s what I’d suggest for the verification part.
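
    If it helps, the CLI tool (b3sum) behaves like the coreutils hashers, so verification stays simple - hypothetical paths:

    ```bash
    # Record hashes at archive time...
    cd ~/Pictures/2023
    b3sum *.jpg *.dng > checksums.b3

    # ...then re-run from the disc (or wherever the files were copied) to verify.
    b3sum -c checksums.b3
    ```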


  • The densities I’m seeing on M-Discs - 100GB, $5 per, a couple years ago - seemed acceptable to me. $50 for a TB? How big is your archive? Mine still fits in a 2TB disk.

    Copying files directly would work, but my library is real big and that sounds tedious.

    I mean, putting it in an archive isn’t going to make it any smaller. Compression, even on losslessly compressed images, doesn’t often help.

    And we’re talking about 100GB discs. Is squeezing that last 10MB out of the disc by splitting an image across two discs worth it?

    The metadata is a different matter. I’d have to think about how to handle the sidecar data… but that you could almost keep on a DVD-RW, because there’s no way that’s going to be anywhere near as large as the photos themselves. Is your photo editor DB bigger than 4GB?

    I never change the originals. When I tag and edit, that information is kept separate from the source images - so I never have multiple versions of pictures, unless I export them for printing or something, and those are ephemeral and can be re-exported by the editor from the original and the sidecar. With music and photos, I always keep the originals isolated from the application.

    This is good, though; it’s helping me clarify how I want to archive this stuff. Right now mine is just backed up on multiple disks and once in B2, but I’ve been thinking about how to archive for long term storage.

    I think I’m going to go the M-Disc route, with sidecar data on SSD and backed up to BluRay RW. The trick will be letting DarkTable know that the source images are on different media, but I’m pretty sure I saw an option for that. For sure, we’re not the first people to approach this problem.

    The whole static binary thing - I’m going that route with an encrypted share for financial and account info, in case I die, but that’s another topic.


  • This is an interesting problem for the same use case which I’ve been thinking about lately.

    Are you using standard BluRay, or M-Discs?

    My plan was to simply copy files. These are photos, and IME they don’t benefit from compression (I stopped taking raw format pictures when I switched to Fujifilm, and the jpgs coming from the camera were better than anything I could produce from raw in Darktable). Without compression, putting them in tarballs only adds another level of indirection, and I can just checksum images directly after write and access them directly when I need to. I was going to use the smallest M-Disc for an index, and just copy and modify it when it changed, and version that.
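
    A sketch of that copy-and-verify flow with growisofs - hypothetical paths, and assuming the burner is /dev/sr0:

    ```bash
    # Hypothetical paths/device. Burn one year's originals straight to disc,
    # no tarballs, then compare the burned tree against the source.
    growisofs -Z /dev/sr0 -R -J -V "photos-2023" ~/Pictures/2023/

    mount /dev/sr0 /mnt/bd
    diff -r ~/Pictures/2023/ /mnt/bd/ && echo "disc matches source"
    umount /mnt/bd
    ```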

    I tend to not change photos after they’ve been processed through my workflow, so in my case I’m not as concerned with the “most recent version” of the image. In any case, the index would reflect which disc the latest version of an image lived on, if something did change.

    For the years I did shoot raw, I’m archiving those as DNG.

    For the sensitive photos, I have a Rube Goldberg plan that will hopefully result in anyone with the passkey being able to mount that image. There aren’t many of those, and that set hasn’t been added to in years, so it’ll go on one disc with the software necessary to mount it.
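
    One way the “mountable with a passkey” piece can work - not necessarily the plan described above - is a LUKS container file; a sketch with hypothetical names:

    ```bash
    # Hypothetical names. One-time setup of an encrypted container file:
    truncate -s 8G private.img
    cryptsetup luksFormat private.img       # sets the passphrase
    cryptsetup open private.img private     # appears as /dev/mapper/private
    mkfs.ext4 /dev/mapper/private
    mount /dev/mapper/private /mnt/private
    # ...copy the photos in, then:
    umount /mnt/private && cryptsetup close private

    # Later, anyone with the passphrase (and cryptsetup) can reopen it:
    cryptsetup open private.img private && mount /dev/mapper/private /mnt/private
    ```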

    My main objective is accessibility after I’m gone, so keeping as few tools in the way as possible trumps other concerns. I see no value in creating tarballs - attach the device, pop in the index (if necessary), find the disc with the file, pop that in, and view the image.

    Key to this is

    • the data doesn’t change over time
    • the data is already compressed in the file format, and does not benefit from extra compression

  • The problem is the design of Matrix itself. As soon as a single user joins a large room, the server clones all of the history it can.

    I mean, there are basically two fundamental design options here: either base the protocol on always querying the room host for data and cache as little as possible, or cache as much as possible and minimize network traffic. Matrix went for minimizing network traffic, and trying to circumvent that - while possible with cache tuning - is going to cause adverse client behavior.

    XMPP had a lot of problems, too, though. Although I’ve been told some (all?) of these have been addressed, when I left the Jabberverse there was no history synchronization, and support for multiple clients was poor - IIRC, messages got delivered to exactly one client. I lost my address book multiple times, encryption was poorly supported, and XMPP is such a chatty protocol, wasteful of network bandwidth. V/VoIP support was terrible, and it had a sparse feature set in terms of editing history, reactions, and so on. Group chat support was poor. It was little better than SMS, as I remember.

    It was better than a lot of other options when it was created, but it really was not very good; there are reasons why alternative chat clients were popular, and XMPP faded into the background.


  • A lot of memory, and a lot of disk space.

    Synapse is the reference implementation, and even if they don’t intend it that way, it feels as if the Matrix team makes changes to Synapse and then updates the spec later. This makes it hard for third-party servers (and clients!) to stay compliant, which is why they rise and fall. The spec management of Matrix is awful.

    So, while people may suggest running something other than Synapse - which I sympathize with, because it’s a PITA and expensive to run - if you go with something else, just be prepared to always be trailing. Migrating server software is essentially impossible, too, so you’ll be stuck with what you pick.

    Matrix is one of the worst-managed best projects to come out in decades.