

You can leave it.
As long as you ran systemctl daemon-reload, you should be able to try sleeping without needing to reboot.
It might be due to https://github.com/systemd/systemd/issues/33083.
Try disabling user session freezing when sleeping:
sudo systemctl edit systemd-suspend.service
Add the following to the file:
[Service]
Environment="SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=false"
Reload systemd:
sudo systemctl daemon-reload
After that, try sleeping and waking again.
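If you want to double-check that the override took effect after the daemon-reload, querying the service's environment should show the variable you just added:

systemctl show -p Environment systemd-suspend.service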
Apparently Framework did try to get AMD to use LPCAMM, but it just didn’t work from a signal integrity standpoint at the kind of speeds they need to run the memory at.
What filesystem are you using on the external drive? If it is NTFS or FAT, those filesystems don’t store Unix ownership/permissions, which would explain why the owner/group changes are not persistent. To fix that, you can set the uid/gid on mount in your fstab.
/dev/mapper/YOUR_DRIVE /path/to/mnt <fstype> rw,uid=<jellyfin_uid>,gid=<jellyfin_gid>,dmask=0002,fmask=0113
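If you’re not sure which uid/gid Jellyfin runs under, this should tell you (assuming it runs as a user named jellyfin; adjust if yours runs in a container or under a different user):

id jellyfin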
Are you using the default bridge? I have a similar setup (with Traefik instead of NPM), and for each compose file I use separate networks for the internet-facing, proxy, and backend services.
services:
  some_service:
    ...
    networks:
      - frontend_network
      - proxy_network
      - backend_network
  backend_service:
    ...
    networks:
      - backend_network

networks:
  frontend_network:
    driver: "bridge"
  proxy_network:
    driver: "bridge"
    internal: true
  backend_network:
    driver: "bridge"
    internal: true
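To make that concrete, the proxy container itself (NPM for you, Traefik for me) sits on the internet-facing network plus the proxy network. A rough sketch, if it lived in the same compose file as above (the image name is a placeholder):

  proxy:
    image: your-proxy-image   # placeholder
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend_network   # internet-facing, publishes the ports
      - proxy_network      # can reach some_service, but not backend_service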
All video codecs commonly used for recording and streaming are lossy, meaning you will lose some quality. AV1 and H.265 are modern video codecs with the best quality-to-bitrate ratios, meaning you can get better quality at the same bitrate, or the same quality at a lower bitrate. The downside to these codecs is that they are very complex and computationally expensive to encode in software. You’ll want to make sure your GPU supports hardware encoding for the codecs you intend to record with. The reason most people recommend AV1 over H.265 is that AV1 is royalty free, while companies have to pay royalties to use H.265. Because of this, most companies (Netflix, YouTube, Facebook, Twitch, etc.) want to use AV1 going forward, meaning it will probably become the dominant codec in the near future.
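As a rough illustration of hardware encoding (assuming an NVIDIA GPU with NVENC support for these codecs; AMD uses av1_amf/hevc_amf and Intel uses av1_qsv/hevc_qsv), with ffmpeg it looks something like this:

# AV1 via NVENC (needs an RTX 40-series or newer GPU)
ffmpeg -i input.mkv -c:v av1_nvenc -b:v 8M -c:a copy output_av1.mkv

# H.265/HEVC via NVENC
ffmpeg -i input.mkv -c:v hevc_nvenc -b:v 8M -c:a copy output_hevc.mkv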
Then Linus responded pretty poorly (and ended up stepping down as CEO and is now a chief creative something or other iirc)
Linus didn’t step down in response to this. I don’t remember the exact timelines, but he either stepped down before this, or was already in the process of transitioning to the new CEO when this happened.
No. Modern SSDs are quite sophisticated in how they handle wear leveling and are, for the most part, black boxes.
SSDs maintain a mapping of logical blocks (what your OS sees) to physical blocks (where the data is physically stored on the flash chips). For instance, when your computer writes to logical block address 100, the SSD might map that to physical block address 200 (this is very simplified). If you overwrite logical block address 100 again, the SSD might write to physical block address 300 and remap it, without touching the data at physical block address 200. This lets the drive avoid wearing out a particular part of the flash memory and instead spread the load out. It also means that someone could potentially rip the flash chips off the SSD, read them directly, and see data you thought was overwritten.
You can’t just overwrite the entire SSD either, because most SSDs overprovision, i.e. they physically have more storage than they report. This is done for wear leveling and to increase the lifespan of the SSD. If you overwrite the entire SSD, there may be physical flash that never actually got overwritten. You can try overwriting the drive multiple times, but because SSDs are black boxes, you can’t be 100% sure how the drive handles wear leveling and that all the data was actually overwritten.
The more bits per cell you store, the denser and therefore cheaper your flash chips can be for a given capacity. The downside is that it is slower and less reliable, since you have to be able to write and read exponentially more voltage states per cell: 2 states for SLC, 4 states for MLC, 8 states for TLC, etc.
the timer has no idea if it was triggered during last boot. It only has the context of “this” boot, so it will do it right after a reboot and set a timer to start the service again after a week of uptime.
This is not correct. Persistent=true saves the last time the timer was run on disk. From the systemd.timer man page:
Takes a boolean argument. If true, the time when the service unit was last triggered is stored on disk. When the timer is activated, the service unit is triggered immediately if it would have been triggered at least once during the time when the timer was inactive.
OP needs to remove Requires=backup.service from the [Unit] section so it stops running the service when the timer starts on boot.
You have the timer requiring backup.service, so it will run that service every time the timer starts on boot. Remove Requires=backup.service, and that will fix the issue.
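For reference, a minimal sketch of what backup.timer could look like once Requires= is gone (the weekly schedule is just a placeholder). The matching backup.service is then only triggered by the [Timer] section, and Persistent=true catches any run missed while the machine was off:

[Unit]
Description=Weekly backup timer

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target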
Well, for one, it’s network attached storage. If it’s not present in the network for one reason or another, guess what, your OS doesn’t boot… or it errors during boot, depending on how the kernel was compiled and what switches your bootloader sends to the kernel during boot.
Just use nofail in the fstab.
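For an NFS share it could look something like this (server, export path, and mountpoint are placeholders; the idea is the same for SMB/CIFS). The _netdev option marks it as a network mount, and x-systemd.automount defers the mount until something actually accesses the path:

nas.example.lan:/export/media  /mnt/media  nfs  nofail,_netdev,x-systemd.automount  0  0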
Second, this is an easy way for malware to spread, especially if it’s set to run after user logon.
If your fileshare is accessible to you, it is also accessible to malware running as your user. Mounting the share via a file manager doesn’t change this.
USB 2 is 480 Mb/s, not 480 MB/s. 480 Mb/s is 60 MB/s, so the 500 MB/s from PCIe 2.0 x1 is quite a bit faster and is about the limit of what a SATA 3 interface could do. Also, sequential throughput isn’t nearly as important as most people think. Random IO, which NVMe drives excel at, will make a far more noticeable impact on real world performance.
I’ve been using PhotoPrism for the past couple of days and have really liked it.
I was considering Immich, but the rapid development cycle turned me off of it for now. I don’t want to have to deal with keeping up with patch notes and potential breaking changes. Immich also seems more focused on photo backups from your phone, which isn’t quite what I wanted. PhotoPrism just let me upload all my existing photos on the web ui.
I’d say give both a try. Both provide a docker-compose file, so you should be able to bring them up fairly quickly.
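Roughly, for either project, once you have their docker-compose.yml:

# run from the directory containing the compose file
docker compose up -d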
I think the snapshot exists but is not mounted as a btrfs subvolume.
Is it not listed when you try running btrfs subvolume list .? You might need to change the . to a path that is on the array.
from the research I did, the @docker folder at the volume root holds all the volumes, images, subvolumes, etc. and I did copy that over.
Copying over the files wouldn’t be enough. You would actually need to create the subvolumes, e.g. btrfs subvolume create subvolume_name.
Do you happen to know if I find the snapshot folder and download it, will there be anything recoverable? Or would it just be like, hashes and unintelligible stuff?
Unfortunately, I am not familiar enough with how Synology does things, but a btrfs snapshot will just appear as a normal directory with the files/directories in it. If Synology isn’t using btrfs for the snapshotting, I’m not sure what you’ll find.
I’ll preface this by saying I am not familiar with Synology, but I am using Docker and BTRFS (which I am assuming is being used on your Synology NAS).
Do you have SSH access or the ability to get a shell on the NAS? If you do, you can try running btrfs subvolume list . to see what subvolumes/snapshots are on your system. That will hopefully let you figure out where your data is. Once you narrow down where it is, you can try downloading it using an sftp client.
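Something along these lines, assuming the data volume is mounted at /volume1 (a common Synology convention; adjust the path and user for your setup):

# list all btrfs subvolumes/snapshots under the data volume
sudo btrfs subvolume list /volume1

# then, from your own machine, pull the files down over SFTP
sftp youruser@your-nas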
As an aside, the reason Docker threw a fit whenever you tried to update an image is that Docker was probably automatically using the BTRFS driver, which creates a new subvolume/snapshot for every image/layer. When you remove images, it would just remove all the subvolumes/snapshots. When you copied your files over, you probably didn’t remake the subvolumes. That would have caused issues when trying to remove images, or create new images/containers.
How are you passing the drives to the TrueNAS VM?
It’s your private key, but yes, you would need to keep it secret just like you would an SSH key.
The benefits of a VPN are that you don’t need to open ports up to the internet and rely on your individual services to be secure. Your VPN authenticates users and ensures that the communication over the tunnel is encrypted (useful if you don’t want to set up SSL/HTTPS). It can also hide what services you are hosting, or even hide the fact that you are running a VPN at all.
Private keys are going to be far more secure than passwords since you really can’t brute force them in the same way you can passwords. Getting ahold of someone’s private key is probably going to be far more difficult than guessing their password. Even if an attacker were to get ahold of your private key, they would still need to contend with the security of your service, e.g. logging into it, which would be no worse than not having a VPN.
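If it helps to picture it, here is a minimal sketch of a client config assuming something like WireGuard (keys, IPs, and the endpoint are all placeholders). The server only answers peers whose public keys it already knows, which is also why it can stay effectively invisible to port scanners:

[Interface]
# your private key - keep it secret, just like an SSH private key
PrivateKey = <client_private_key>
Address = 10.0.0.2/32

[Peer]
# the server's public key
PublicKey = <server_public_key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25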
This really depends on your threat model. If you are only concerned about the drive getting stolen, or wanting to keep the data on it private if you need to RMA the drive, mounting it automatically on boot with a key stored on the rootfs can be perfectly fine. If you are a journalist in a hostile country and protecting your sources from state level actors is a matter of life and death, then yeah, this would be woefully insufficient.
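For the auto-unlock-at-boot case, the usual wiring with LUKS looks roughly like this (UUID, mapper name, and keyfile path are placeholders; the keyfile first has to be added to the LUKS header with cryptsetup luksAddKey):

# /etc/crypttab - unlock the data drive with a keyfile stored on the rootfs
data  UUID=<luks-partition-uuid>  /etc/luks/data.key  luks

# /etc/fstab - mount the unlocked mapper device
/dev/mapper/data  /mnt/data  ext4  defaults,nofail  0  2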