Steam hardware survey shows 17% AMD and 8% Intel
The assumption was that nobody used win8 lol
But again, I think people who grew up with 10/11 are more likely to use the Windows Store than you think. They used an iPad before they got a Chromebook before they got a Windows computer. My little cousins don’t play Minecraft Java, they play Minecraft Bedrock. I don’t think they know what VLC is.
Ok fair, last time I used Windows you had to install GPU drivers manually. I think you’re still recommended to do so, since the ones Windows ships are really old.
But yeah, manual driver installation/specialized distros for Nvidia are a problem that’s in the process of getting fixed with NVK, Nova, and the official drivers. Intel and AMD are there already.
I would rather have one extra manual step like that than dealing with/paying for Windows 11
Yeah I think in the future, we’ll figure out how to make NixOS configuration modular enough to be viable for laymen, but Linux Mint works well enough for Windows refugees.
Linux Mint has an app store, like Windows, macOS, iOS, and Android.
I think it supports Flathub, which has every app you could need, but I haven’t checked since I run a very customized NixOS.
People don’t really download .exes anymore; it’s just people who are used to Windows 7 and earlier who still do that.
Pre-installed Nvidia drivers will likely be fixed in the next two years, but:
A. The 25% of gamers not using Nvidia GPUs do not have driver issues on Linux.
B. Windows has tons of driver issues too, so I’m not sure why Linux Nvidia drivers are a significant detail here. We don’t expect little Jimmy to know to install drivers, or to know what to do when Windows Update fucks your drivers randomly. Linux actually solves those issues for you.
That’s a weird way to spell Linux Mint
Not to mention there are 48 and 64 GB DIMMs out now too that work with basically all Alder Lake Atoms.
Yeah, what you’re talking about is called GitOps. Using git as the single source of truth for your infrastructure. I have this set up for my home servers.
`nodes` has NixOS configuration for my 5 Kubernetes servers and a script that builds a flash drive for each of them to use as a boot drive (same setup for `porygonz`, but that’s my dedicated DHCP/DNS/NTP mini server).
`mikrotik` has a dump of my MikroTik router config and a script that deploys the config from the git repo.
`applications` has all my Kubernetes config: containers, proxies, load balancers, config files, certificate renewal, databases, clustered RAID, etc. It’s all super automated. A pretty typical “operator” container to run in Kubernetes is ArgoCD, which watches a git repo and automatically deploys any changes or drift back to the Kubernetes API so it’s always in sync with git. I don’t use any GUI or console commands to deploy or update a container; I just edit git and commit.
The Kubernetes cluster runs about 400 containers, most of them just automatic replicas of services for high availability. Of course there are always some manual setup steps outside of git, like partitioning drives, joining the nodes to the cluster, writing hardware-specific config, and bootstrapping ArgoCD to watch git. But overall, my house could burn down tomorrow and I would have everything I need to redeploy using this git repo, the secrets git repo, and my backups of my databases and container `/data` dirs.
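To make the ArgoCD part concrete, here’s a minimal sketch of an `Application` manifest. The repo URL, path, and names are hypothetical (not my actual setup), but the fields are the standard ones from the ArgoCD docs:

```yaml
# Minimal ArgoCD Application: "watch this path in this git repo and keep
# the cluster in sync with it." Repo URL and path are made up.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git  # your GitOps repo
    targetRevision: main
    path: applications/my-app   # directory of manifests to deploy
  destination:
    server: https://kubernetes.default.svc  # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from git
      selfHeal: true  # revert manual changes that drift from git
    syncOptions:
      - CreateNamespace=true
```

With `prune` and `selfHeal` enabled, ArgoCD deletes resources removed from git and reverts manual drift, which is the “always in sync with git” behavior described above.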
I think Portainer supports doing GitOps on Docker compose? Never used it.
https://docs.portainer.io/user/docker/stacks/add
ArgoCD is really the gold standard for GitOps, though. I highly recommend trying out k3s on a server and running ArgoCD on it; it’s super easy to use.
https://argo-cd.readthedocs.io/en/stable/getting_started/
Kubernetes is definitely different from Docker Compose, and tutorials are usually written for Docker `compose.yml`, not Kubernetes `Deployments`, but it’s super powerful and automated. Very hard to crash once you have it running. I don’t think it’s as scary as a lot of people think, and you definitely don’t need more than one server to run it.
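For a sense of the translation involved, here’s roughly the same single-container service written both ways; the image name and ports are just placeholders. The Compose version:

```yaml
# compose.yml -- a hypothetical single-service example
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
```

And the approximate Kubernetes equivalent, split into a Deployment (what to run) and a Service (how to reach it):

```yaml
# Same hypothetical service as a Kubernetes Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                # replication/self-healing comes almost for free
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
---
# Exposes the pods inside the cluster; roughly what the compose
# "ports:" mapping did.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 8080
      targetPort: 80
```

It’s more verbose, but every knob is explicit, which is what makes the automation above possible.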
nah you’re probably not going to get any benefits from it. The best way to make your setup more maintainable is to start putting your compose/kubernetes configuration in git, if you’re not already.
Ah, no, Kopia uses a shared bucket.
Seems like a good way to do it.
Keep in mind Kopia has some weirdness when it comes to transferring repos between filesystem and S3, so you’d probably want to only keep one repo.
https://kopia.discourse.group/t/exported-s3-storage-backup/3560
Backblaze B2 is a cheap S3-compatible provider. Hetzner Storage Box is even cheaper, but it doesn’t support S3 natively, so you’re likely to run into the Kopia repo-compatibility issues I mentioned.
In terms of industrial applications, the abstract states:

> We have realized all-optical wavelength conversion for a more than 200-nm-wide wavelength span at 100 Gbit s−1 without amplifying the signal and idler waves. As the 32-GBd 16-QAM is the dominant modulation format of current optical-fibre communication systems connecting the continents on Earth, the Si3N4-chip high-efficiency wavelength conversion demonstrated has a bright future in the all-optical reconfiguration of global WDM optical networks by unlocking transmission beyond the C and L bands of optical fibres and increasing the capacity of optical neuromorphic computing for artificial intelligence.
From the abstract: “we obtained a continuous-wave gain bandwidth of 330 nm in the near-infrared regime. […] Furthermore, we realized wide all-optical wavelength conversion of single-wavelength signals beyond 100 Gbit s−1 without amplifying the signal and idler wave.”
Here is the paper: https://www.nature.com/articles/s41586-025-08824-3
I think Figure 4 from the PDF shows it best. Their amplifier covers infrared lasers from 1400 nm to 1700 nm.
PHP does actually scale better than something like Lemmy, which is written in Rust.
But sure, you can act like you know more than the Nextcloud devs
Isn’t Opencloud just extended Nextcloud? (Still PHP)
Also, Nextcloud’s core components are written in Rust; the PHP just handles incoming requests.
https://nextcloud.com/blog/nextcloud-faster-than-ever-introducing-files-high-performance-back-end/
SFP is the modern standard for pluggable laser modules. RJ45 SFP modules exist, but only for 1G and 10G. There are also DAC cables for SFP, but those are limited to 2-3 m, and the point was to focus on the benefits of fiber. Maybe the economies of scale necessitate some modern silicon photonics like a fiber-on-package option, but then you have repairability issues.
The minimum bend radius is mostly about maintaining total internal reflection; the fiber itself is very flexible, and it’s not really possible to break an armored fiber cable by hand. You do have to worry about dust on the ends, though.
TOSLINK is cool, but it’s a very low-bandwidth standard, less than 1 Gbit/s. You need proper glass fiber and lasers for high bandwidth.
Yeah, I guess TVs and receivers would come with active optical cables to make it simpler, but the main thing is that optical is much cheaper and faster than copper once you get the economies of scale down on the transceivers. 1 terabit over 100 km, down a cable thinner than a USB cable, is no problem with the right lasers. Meanwhile, I have interference and patent issues at 0.02 Tbit/s on HDMI cables less than a meter long.
Plenty of cheap optical HDMI cables out there, but they have compatibility issues. It would be so much easier with standard MMF MPO or SMF LC cables.
apalrd did recently review a unique product that embeds an MMF transceiver into the existing HDMI form factor, though.
Imagine putting out a new high bandwidth cable standard in 2025 based on copper.
The sooner display and networking move to SFP, the better.
Isn’t cloning fonts legal, though? As compared to copying floppies, which is punishable by death?