• 0 Posts
  • 55 Comments
Joined 2 years ago
Cake day: June 12th, 2023



  • Pumpkin Escobar@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 2 months ago

    Having been a Firefox user for a few years now: screw Mozilla. What a mismanaged shit-show they’ve become.

    I get that browser development costs a ton and that they’re in a shitty position. But to write this ode-to-Stockholm-syndrome blog post… what on EARTH?

    Best case, Chrome gets split off into a separate organization, free of meddling, that can fund itself with reasonable donations/investments. In reality, I’m sure Google and other advertising companies will try to get into it and buy the behavior they want, like special-interest groups in US politics.

    But if Chrome ended up under any organization with reasonable management that wasn’t completely beholden to advertisers, I’d switch back to Chrome pretty quickly (assuming the whole Manifest V2/V3 thing got un-fucked).


  • Apple computers ARE really well put together - maybe no other maker is quite as good. But I’d say the Microsoft Surface line is of similar quality. Razer too, though they’re pretty expensive.

    Asus Zephyrus laptops have pretty great build quality, close to Apple’s but without the same kind of pricing and markup gouging we get from Apple.

    I’m not an Apple hater; they make some great stuff. My point above was just that they don’t have competition in the “I need a Mac” space, so their hardware isn’t competitively priced. And their build quality is great, but not every laptop needs to be built like a tank with top-of-the-line components.


  • It’s good: there’s a lot of good work going on, what they already have is impressive, and development seems pretty active and progressing well.

    But if you’re buying a laptop to run Linux and don’t plan to use macOS, I really think there are better options out there (depending on what’s important to you). You’re going to pay the Apple premium, and though Apple computers are good hardware, they’re expensive and largely overpriced for small upgrades. Whatever price you find for a refurbished M2, take that money and find a laptop known to be well supported on Linux; it’ll just be a better experience, and you’ll probably get more for your money.

    I haven’t run Asahi in 6+ months, but Thunderbolt/USB4 wasn’t working when I last used it, so I couldn’t use my USB dock. Video was OK, but I think audio was sketchy (I don’t remember the specifics). It’s stuff that will get fixed at some point, but right now it feels like a handful of minor annoyances and inconveniences.

    Even in 1-2 years, when Asahi gets some updates and is in a better spot (I really do expect it to be), I still don’t think I’d lean toward a MacBook with Asahi over something else if Linux is the only OS you’re going to run. Of course, if you’re looking to dabble with iOS development or something else you need a Mac for, but don’t want to live in macOS, then Asahi’s a great option to get you back to Linux.



    • archinstall is one of the better distro installers around - it just does what it says it will and is pretty intuitive
    • LUKS encryption is easy to set up in archinstall - strongly recommend encrypting your root partition if you have anything remotely sensitive on your system
    • If you do use encryption but don’t like typing the unlock password every reboot, you can use the TPM to unlock - yes, this is less secure than requiring the password on every boot, but LUKS + TPM unlock is still MUCH better than an unencrypted drive just sitting there (see the sketch after this list)
    • sbctl is a good tool for secure boot - if you want to lock things down further (a BIOS admin password, secure boot turned on), sbctl works really well and is pretty easy to use. I’d suggest reading up to understand what it’s doing before just installing/configuring/using it (also covered in the sketch below)
    • yay is a solid AUR helper / pacman wrapper
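
    For the LUKS + TPM unlock and sbctl items above, here’s a minimal sketch of the steps as Python subprocess calls. Assumptions, loudly labeled: the encrypted root is /dev/nvme0n1p2 (a placeholder - yours will differ), you boot with a systemd-based initramfs, and the firmware is in setup mode before you enroll secure boot keys. Treat it as an outline of the commands, not a turnkey script.

    ```python
    #!/usr/bin/env python3
    """Outline of LUKS+TPM unlock and sbctl secure boot setup on Arch.

    Device path and file names are placeholders; run as root.
    """
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Add the TPM as an extra LUKS key slot so the drive unlocks
    #    without a typed password (PCR 7 ties it to secure boot state).
    run(["systemd-cryptenroll", "--tpm2-device=auto",
         "--tpm2-pcrs=7", "/dev/nvme0n1p2"])
    # Then add 'tpm2-device=auto' to the root entry in /etc/crypttab
    # (or rd.luks.options=tpm2-device=auto on the kernel command line).

    # 2. Secure boot with sbctl: create your own keys, enroll them
    #    (-m keeps Microsoft's certificates), and sign the kernel.
    run(["sbctl", "create-keys"])
    run(["sbctl", "enroll-keys", "-m"])
    run(["sbctl", "sign", "-s", "/boot/vmlinuz-linux"])
    run(["sbctl", "verify"])  # shows anything still unsigned
    ```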

  • archinstall’s default btrfs layout has, I think, 4-5 separate subvolumes (I’m not running btrfs anymore, so I can’t check), but at the very least I remember it has:

    • /
    • /var
    • /home

    as separate subvolumes and mountpoints, so you can roll one back to a previous snapshot without rolling back the others.

    Related to the snapshotting stuff, timeshift-autosnap is pretty helpful: it hooks into pacman and takes a snapshot before installing/updating packages (roughly what the sketch below does by hand).

    Personally, I found btrfs and its snapshots helpful when I was starting out with Arch, but now that I know how not to blow things up, it has been stable enough for me that ext4 just felt easier.
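
    Here’s a rough sketch of what a pre-update snapshot looks like done by hand - essentially what timeshift-autosnap automates through its pacman hook. It assumes the archinstall-style layout (/ and /home as separate btrfs subvolumes) and a /.snapshots directory, both of which may differ on your system.

    ```python
    #!/usr/bin/env python3
    """Take read-only btrfs snapshots of each subvolume, then update.

    Paths assume an archinstall-style layout; adjust to your mounts.
    """
    import subprocess
    from datetime import datetime

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")

    # Each mountpoint is its own subvolume, so each gets its own
    # snapshot - later you can restore one without touching the rest.
    for mountpoint, name in [("/", "root"), ("/home", "home")]:
        subprocess.run(
            ["btrfs", "subvolume", "snapshot", "-r", mountpoint,
             f"/.snapshots/{name}-{stamp}"],
            check=True)

    # With the rollback insurance in place, run the update.
    subprocess.run(["pacman", "-Syu"], check=True)
    ```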


  • Similar to the previous reply about MATE with font-size changes, I do that with Plasma. I hadn’t seen the Plasma Bigscreen project you linked; I’ll definitely try that one out. I’ve also wondered about https://en.m.wikipedia.org/wiki/Plasma_Mobile - niche projects like these don’t always get a lot of attention, but if the Bigscreen project doesn’t work out, I’d bet the Plasma Mobile project is fairly active, and given the way it scales for displays it might work really well on a TV.

    Speaking of scaling, since you mentioned it: I’ve noticed scaling in general feels a lot better on Wayland. If you’d only tried it on X11 before, you might want to see if Wayland works better for you.


  • First, a caveat/warning: you’ll need a beefy GPU to run larger models, though there are some smaller models that perform pretty well.

    Adding a medium amount of extra information for you, or anyone else who might want to get into running models locally.

    Tools

    • Ollama - a great app for downloading/managing/running models locally (its local HTTP API is sketched after this list)
    • OpenWebUI - a web app that provides a UI like the ChatGPT web app, but can use local models
    • continue.dev - a VS Code extension that can use Ollama to give a GitHub Copilot-like AI assistant running against a local model (it can also connect to Anthropic Claude, etc…)
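
    OpenWebUI and continue.dev both talk to Ollama’s local HTTP API (port 11434 by default), and you can hit it directly too. A minimal sketch - it assumes the Ollama server is running and that you’ve already pulled the model (ollama pull llama3.1:8b):

    ```python
    #!/usr/bin/env python3
    """Minimal request against a local Ollama server's generate API."""
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3.1:8b",
            "prompt": "Explain quantization in one paragraph.",
            "stream": False,  # one JSON object instead of a chunk stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```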

    Models

    If you look at https://ollama.com/library?sort=featured you can browse the featured models.

    Model size is measured by parameter count. Generally, higher-parameter models are better (more “smart”, more accurate), but it’s very challenging/slow to run anything over 25b parameters on consumer GPUs. I tend to find 8-13b parameter models a sweet spot; the 1-4b parameter models are meant more for really low-power devices - they’ll give you OK results for simple requests and summarizing, but they’re not going to wow you.

    If you look at the ‘tags’ for the models listed below, you’ll see things like 8b-instruct-q8_0 or 8b-instruct-q4_0. The q part refers to quantization - shrinking/compressing a model - and the number after it is roughly how aggressively it was compressed. Note the size of each tag and how it shrinks as the quantization gets more aggressive (smaller numbers). You can roughly read that size as “how much video RAM do I need to run this model” (there’s a rough sizing sketch below). For me, I aim for q8 models, or fp16 if they can fit in my GPU. I wouldn’t use anything below q4 quantization; there seems to be a lot of quality loss below q4. Models can run partially or even fully on a CPU, but that’s much slower. Ollama doesn’t yet support the NPUs found in new laptops/processors, but work is happening there.
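
    To make the “how much video RAM” rule of thumb concrete, a back-of-the-envelope estimate: the weights take roughly (parameters) × (bits per parameter ÷ 8) gigabytes, plus some overhead for context. The 20% overhead figure below is my own rough guess, not an official number:

    ```python
    #!/usr/bin/env python3
    """Rough VRAM estimate from parameter count and quantization."""
    BITS = {"fp16": 16, "q8_0": 8, "q4_0": 4}

    def approx_vram_gb(params_billion: float, quant: str) -> float:
        weights_gb = params_billion * BITS[quant] / 8
        return weights_gb * 1.2  # ~20% overhead for context/KV cache

    for quant in ("fp16", "q8_0", "q4_0"):
        print(f"8b {quant}: ~{approx_vram_gb(8, quant):.1f} GB")
    # 8b fp16: ~19.2 GB  (needs a 24 GB card)
    # 8b q8_0: ~9.6 GB   (fits in 12 GB)
    # 8b q4_0: ~4.8 GB   (fits in 8 GB)
    ```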

    • Llama 3.1 - The 8b instruct model is pretty good, decent speed and good quality. This is a good “default” model to use
    • Llama 3.2 - This model was just released yesterday; I’m only seeing the 1b and 3b models right now. They’ve changed the 8b model to 11b, and I’m assuming the 11b model will be my new go-to when it’s available.
    • Deepseek Coder v2 - A great coding assistant model
    • Command-r - This is a more niche model, mainly useful for RAG. It’s only available as a 35b parameter model, so it’s not all that feasible to run locally
    • Mistral Small - A really good model, in the ballpark of Llama. I haven’t had quite as much luck with it as with Llama, but it is good, and I just saw that a new version was released 8 days ago - I’ll need to check it out again