

I AM an Apple guy, but I tend to get a Galaxy A-series tablet. They're often on sale and reasonably well supported with updates. The A9+ has been pretty good.
I've been a Firefox user for a few years now, and even so: screw Mozilla. What a mismanaged shit-show they've become.
I get that browser development costs a ton, and that they're in a shitty position. But to write this ode-to-Stockholm-syndrome blog post… what on EARTH?
Best case, Chrome gets split off into a separate organization free of meddling and they can fund themselves with reasonable donations / investments. In reality, I’m sure Google and other advertising companies will try to get into it and buy the behavior they want, like special-interest groups in US politics.
But if Chrome ended up under any organization with reasonable management who wasn’t completely beholden to advertisers, I’d switch back to Chrome pretty quickly (assuming the whole Manifest V2/V3 thing got un-fucked).
It's desktop-only right now, and it feels like it will stay that way for the foreseeable future. Firefox Sync works between Zen and Firefox, though, so on mobile you can just run Firefox, or one of the Android-specific versions of Firefox that support the generic/vanilla Firefox Sync.
A fancy Firefox-based browser along the lines of Arc?
Worth a look if you’re a web power-user / developer sort of person
Yeah. Zen is a bit newer and I'd say not quite as slick an experience yet, but it has come a long way in the last couple of months and is getting very good.
Firefox-based https://zen-browser.app/ if you want to get fancy
Or the world blows up and it’s all over. I guess what I’m saying is, no downside, fire it up and let’s see what happens.
Also, reporting that Trump Tower (either) should be renamed Mexico Tower feels pretty good.
Apple computers ARE really well put together; maybe no other maker is quite as good. But I'd say the Microsoft Surface line is of similar quality. Razer too, though they're pretty expensive.
Asus Zephyrus laptops have pretty great build quality, close to Apple but without the same kind of pricing and markup gouging we get from Apple.
I'm not an Apple hater; they make some great stuff. My point above was just that they don't have competition in the "I need a Mac" space, so their hardware isn't competitively priced. And their build quality is great, but not every laptop needs to be built like a tank with top-of-the-line components.
It's good. There's a lot of good work going on; what they already have is impressive, and development seems active and progressing well.
But if you're buying a laptop to run Linux and don't plan to use macOS, I really think there are a lot of better options out there (depending on what's important to you). You're going to pay the Apple premium price for a computer, and though Apple computers are good hardware, they're expensive and the upgrades are largely overpriced. Whatever price you find for a refurbished M2, take that money and go find a laptop known to be well supported on Linux; it'll just be a better experience and you'll probably get more for your money.
I haven't run Asahi in 6+ months, but Thunderbolt/USB4 wasn't working when I last used it, so I couldn't use my USB dock. Video was OK, but I think audio was sketchy (I don't remember the specifics). It's all stuff that will get fixed at some point, but right now it feels like a handful of minor annoyances and inconveniences.
Even in 1-2 years, when Asahi gets some updates and is in a better spot (I really do expect it to be), I still don't think I'd lean towards a MacBook with Asahi over something else if Linux is the only OS you're going to run. Of course, if you're looking to dabble with some iOS development or something else you need a Mac for, but don't want to live in macOS, then Asahi's a great option to get you back to Linux.
Ah, good to know. I haven’t really used that save configuration and reuse process, I just do the install directly at the end of configuring everything. But I can see the draw for using that, a shame it doesn’t seem to work that well.
archinstall's default btrfs layout has I think 4-5 separate subvolumes (I'm not running btrfs anymore so can't check), but at the very least I remember it has something like:

1. @ mounted at / (root)
2. @home mounted at /home
3. @log mounted at /var/log
4. @pkg mounted at /var/cache/pacman/pkg

With those being separate subvolumes and mountpoints, you can just restore the root subvolume (1) from a previous snapshot without rolling back the others.
Related to the snapshotting stuff, timeshift-autosnap is pretty helpful: it hooks into pacman and takes a snapshot before installing/updating packages.
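For anyone curious how that works, pacman runs hook files before/after transactions, and timeshift-autosnap ships one. A sketch of what such a PreTransaction hook looks like (the exact description, paths, and trigger operations in the packaged hook may differ):

```ini
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Creating a Timeshift snapshot before the transaction...
Depends = timeshift
When = PreTransaction
Exec = /usr/bin/timeshift-autosnap
AbortOnFail
```

AbortOnFail means a failed snapshot stops the package transaction, so you never update without a restore point.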
Personally, I found btrfs and the snapshots helpful when starting to use Arch, but now that I know how not to blow things up, it has been stable enough for me that I just felt ext4 was easier.
Yeah yeah yeah… yeah
no fact checking and a ton of bots being added? Sounds like a fantastic place to spend time. /s
Ollama, plus Open WebUI for a nice web interface.
Similar to the previous reply about MATE with font size changes, I do that with Plasma. I hadn't seen the Plasma Bigscreen you linked; I'll definitely try that one out. I've also wondered about https://en.m.wikipedia.org/wiki/Plasma_Mobile. These sorts of niche projects don't always get a lot of attention, so if the Bigscreen project doesn't work out, I'd bet the Plasma Mobile project is fairly active, and given the way it scales for displays it might work really well on a TV.
Speaking of scaling since you mentioned it. I have noticed scaling in general feels a lot better in Wayland. If you’d only tried it in X11 before, might want to see if Wayland works better for you.
First, a caveat/warning: you'll need a beefy GPU to run larger models, though there are some smaller models that perform pretty well.
Adding a medium amount of extra information for you or anyone else that might want to get into running models locally
If you look at https://ollama.com/library?sort=featured you can see the list of available models.
Model size is measured by parameter count. Generally, higher-parameter models are better (more "smart", more accurate), but it's very challenging/slow to run anything over 25b parameters on consumer GPUs. I tend to find 8-13b parameter models are a sort of sweet spot; the 1-4b parameter models are meant more for really low-power devices. They'll give you OK results for simple requests and summarizing, but they're not going to wow you.
If you look at the 'tags' for a model, you'll see things like 8b-instruct-q8_0 or 8b-instruct-q4_0. The q part refers to quantization, i.e. shrinking/compressing a model, and the number after it is roughly how aggressively it was compressed. Note the size of each tag and how the size shrinks as the quantization gets more aggressive (smaller numbers). You can roughly think of this size number as "how much video RAM do I need to run this model". For me, I try to aim for q8 models, or fp16 if they can fit on my GPU. I wouldn't use anything below q4 quantization; there seems to be a lot of quality loss below q4. Models can run partially or even fully on a CPU, but that's much slower. Ollama doesn't yet support the new NPUs found in recent laptops/processors, but work is happening there.
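As a rough rule of thumb, the VRAM needed just for a model's weights is parameter count times bytes per parameter: fp16 is 2 bytes, q8 is about 1 byte, and q4 is about half a byte per parameter. Here's a back-of-the-envelope sketch (my own helper, not part of Ollama; real usage is higher because of context/KV cache and runtime overhead):

```python
def estimate_vram_gb(params_billion: float, quant: str) -> float:
    """Rough VRAM needed for model weights alone, in GB.

    Ignores KV cache and runtime overhead, so budget extra headroom.
    """
    bytes_per_param = {
        "fp16": 2.0,   # 16 bits per parameter
        "q8_0": 1.0,   # ~8 bits per parameter
        "q4_0": 0.5,   # ~4 bits per parameter
    }[quant]
    return params_billion * bytes_per_param

print(estimate_vram_gb(8, "q8_0"))   # 8.0  -> an 8b q8 model wants ~8 GB
print(estimate_vram_gb(8, "q4_0"))   # 4.0  -> same model at q4 fits in ~4 GB
print(estimate_vram_gb(13, "fp16"))  # 26.0 -> why fp16 13b models need big cards
```

This is why q8 of an 8b model lands right around the 8-12 GB of VRAM most consumer cards have, and why anything over 25b gets painful.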
Shithole Country, Failed State. These are understating just how fucked up the US is right now.
The amount of harm Trump has done and will do to this country (in the name of trying to harm the entire rest of the world) is going to last for decades. And when he’s done, he’s so dumb that he will think he won bigly and his supporters will continue to repeat idiotic statements about how much better off the US is, and most of them will believe it.
This country is fucked.