  • I’m not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it’s looking to solve. I’ll try and keep it simple.

    Imagine you have a computer program. It could be any program; the details aren’t important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend’s computer.

    Reproducibility is really important in computing, especially if you’re the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

    Docker helps massively simplify this dilemma by running the program inside a ‘container’, which is basically a way to run the exact same program, with the exact same operating system and ‘system components’ installed (if you’re more tech savvy: packages, libraries, dependencies, etc.), so that your program can run on as many different computers as possible (best-case scenario). You wouldn’t have to worry about whether your friend forgot to install some specific system component to get the program running, because Docker handles that for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this ‘reproducibility’ problem.

    Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won’t compile. And then you’d run into the same exact problem where it compiles on your machine, but not your friend’s. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.
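
    To make that concrete, here’s a rough sketch of what a minimal Dockerfile might look like for a small Python program (the base image, file names and image tag below are made up for illustration, not taken from any specific project):

      # Dockerfile (illustrative sketch)
      # pin the OS and Python version everyone will run
      FROM python:3.12-slim
      WORKDIR /app
      # install the exact dependencies the program needs
      COPY requirements.txt .
      RUN pip install -r requirements.txt
      COPY . .
      # the single command that runs the program
      CMD ["python", "main.py"]

    With something like that checked in, your friend only has to run docker build -t myapp . followed by docker run myapp, instead of following a long list of manual build steps.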

    Also, since Docker puts things in ‘containers’, it limits what resources the program can access on your machine, and that restriction can be very useful. You can set it so that all the files it creates are saved inside the container and don’t affect your ‘host’ computer. Or maybe you only want to give it permission to a few very specific files. Maybe you want to share your computer’s timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.
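
    For example, a typical docker run line covering those cases might look something like this (the image name, volume and port here are placeholders):

      # first -v: keep the program's files in a named volume instead of your host's filesystem
      # second -v: share the host's timezone with the container, read-only
      # -p 127.0.0.1:...: only reachable from this machine, not the open internet
      docker run -d \
        -v myapp-data:/data \
        -v /etc/localtime:/etc/localtime:ro \
        -p 127.0.0.1:8080:8080 \
        myapp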

    There’s plenty of other things that make Docker useful, but I’d say those are the most important ones–reproducibility, ease of setup, containerization, and configurable permissions.

    One last thing: Docker is comparable to something like a virtual machine, but the reason you’d want to use Docker over a virtual machine is that it has much less resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, whereas Docker is designed to be much more lightweight in comparison.


    • ALWAYS avoid partial upgrades, lest you end up bricking your system: https://wiki.archlinux.org/title/System_maintenance#Partial_upgrades_are_unsupported
    • The Arch Wiki is your best friend. You can also use it offline, take a look at wikiman: https://github.com/filiparag/wikiman
    • It doesn’t hurt to have the LTS kernel installed as a backup option (assuming you use the standard kernel as your default) in case you update to a newer kernel version and a driver here or there breaks. It’s happened to me on Arch a few times: one update completely borked my internet connection, another would freeze any game I played via WINE/Proton because I didn’t have Resizable BAR enabled in the BIOS. Sometimes switching to the LTS kernel can get around these temporary hiccups, at least until the maintainers fix the issue in a later kernel version.
    • The AUR is not vetted as closely as the main package repositories, since it’s mostly community-made packages. Don’t install AUR packages you don’t 100% trust, and always check the PKGBUILD if you’re paranoid (see the example commands after this list).
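
    To make a couple of those tips concrete (the AUR helper and package name below are just examples; swap in whatever you actually use):

      sudo pacman -Syu                              # always a full upgrade; never run 'pacman -Sy <package>' on its own
      sudo pacman -S linux-lts linux-lts-headers    # keep the LTS kernel around as a fallback boot entry
      yay -G some-aur-package                       # download just the PKGBUILD and sources for an AUR package
      less some-aur-package/PKGBUILD                # read it before you build and install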


  • I would try what the other commenter here said first. If that doesn’t fix your issue, I would try using the Forge version of WebUI (a fork of that WebUI with various memory optimizations, native extensions and other features): https://github.com/lllyasviel/stable-diffusion-webui-forge. This is what I personally use.

    I use a 6000-series GPU instead of a 7000-series one, so the setup may be slightly different for you, but I’ll walk you through what I did for my Arch setup.

    Personally, I skipped that Wiki section on AMD GPUs entirely, and the WebUI still detects and uses my GPU just fine. Simply running the webui.sh file will do most of the heavy lifting for you (you can see in webui.sh that it applies specific configurations and ROCm versions for different AMD GPU series like Navi 2 and 3).

    1. Git clone that repo: git clone https://github.com/lllyasviel/stable-diffusion-webui-forge stable-diffusion-webui (the stable-diffusion-webui directory name is important; the webui.sh script seems to reference that directory name specifically)
    2. In my experience, webui.sh and webui-user.sh end up in the wrong spot. Make symlinks to them so the links sit at the same level as the stable-diffusion-webui directory you created: ln -s stable-diffusion-webui/webui.sh webui.sh (ditto for webui-user.sh)
    3. Edit the webui-user.sh file. You don’t really have to change much in here, but I would recommend export COMMANDLINE_ARGS="--theme dark" if you want to save your eyes from burning.
    4. Here’s where things get a bit tricky: you will have to install Python 3.10, as there are warnings that newer versions of Python will not work. I tried running the script with Python 3.12 and it failed trying to grab specific pip dependencies. I use the AUR for this; use yay -S python310 or paru -S python310 or whatever method you use to install packages from the AUR. Once you do that, edit webui-user.sh so that python_cmd looks like this: python_cmd="python3.10" (both webui-user.sh edits are shown after this list)
    5. Run the webui.sh file: chmod u+x webui.sh, then ./webui.sh
    6. Setup will take a while, since it has to download and install all the dependencies (including a model checkpoint, which is multiple gigabytes in size). If it errors out at some point, try deleting the entire venv directory from within the stable-diffusion-webui directory and running the script again. That worked in my case, though I’m not really sure what went wrong…
    7. After a while, the WebUI will launch. If it doesn’t automatically open your browser, check the console for the URL; it’s usually http://127.0.0.1:7860. Select the proper checkpoint in the top left, type in a test prompt, and hopefully it should be pretty speedy considering your GPU.
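
    For reference, the webui-user.sh edits from steps 3 and 4 boil down to these two lines (everything else in that file can stay at its defaults):

      # webui-user.sh
      export COMMANDLINE_ARGS="--theme dark"    # dark theme; add any other launch flags here
      python_cmd="python3.10"                   # point the launcher at the AUR-installed Python 3.10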

  • Yes, I torrent on the same machine where all my personal stuff is. The biggest reason for this is that I don’t have a dedicated machine to torrent 24/7, though I’d definitely like to set that up at some point. I like being able to seed niche torrents to those who need them, and a machine seeding 24/7 would definitely help with that. Having easy access to the downloaded files is always a plus too, but there’s a myriad of ways to do that over a local network (pretty sure some torrenting clients even have an option to torrent over LAN).

    My torrent client is bound to my VPN’s network interface, and my VPN has a killswitch as well, so I’m not paranoid that things will suddenly leak. Been running this setup for months now without issues.
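
    If you want to replicate the interface binding, the general idea looks like this (the interface name and the qBittorrent menu path are just examples; check what your VPN and client actually use):

      ip addr                      # find the interface your VPN creates, e.g. tun0 or wg0
      # then bind the torrent client to that interface, e.g. in qBittorrent:
      #   Tools -> Options -> Advanced -> "Network interface" -> pick the VPN interface
      # if the VPN drops, the client loses its bound interface and simply stops transferring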