

But I want my own content bubble organized antithetical to my convenience and I want it now.
🌨️ 💻
They’re the most vulnerable and marginalized community. How dare you refuse to kiss their boots.
Or even phivetons.
I wish it was hypothetical. Two slightly awkward conversations prompted this.
Touché…is also the name of a fetish community.
Blue Iris and some Hikvision cameras. It’s not fancy, but it’s pretty straightforward to get running. I’m not super concerned with alerting and just run continuous recording that loops after a few days.
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like 🙋‍♀️. Especially if you get those reps in sequentially it’s like hitting the juice 💉 for your transfer speeds.
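In `dd` terms, the same workout looks something like this (paths are made up; throughput numbers depend entirely on your disk and cache):

```shell
# Same 64 MB written two ways; bigger sequential blocks usually win on throughput.
# conv=fsync makes dd flush to disk before reporting, so the MB/s figure is honest.
dd if=/dev/zero of=/tmp/blocktest bs=4K   count=16384 conv=fsync   # 4 KB writes
dd if=/dev/zero of=/tmp/blocktest bs=128K count=512   conv=fsync   # 128 KB writes
rm /tmp/blocktest
```

Compare the MB/s each run prints; the 128 KB pass should come out ahead on most spinning rust and many SSDs.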
Flail until the light leaves their eyes.
Comments like this are why I come to Red…Lemmy.
Imagine never hearing the word “No.” as a complete sentence ever again in your life.
But how will they know what movies to watch or what’s the latest in fashion?
get fucked chud I’m an apex baddie with hot takes coming out like hot snakes. they smell the same too.
Hey Ralph can you get that post-it from the bottom of your keyboard?
I dunno I RMA’d my Nomad so many times.
If budget is no object it’s only kind of a pain in the ass with Nvidia’s vGPU solutions for data centers. Even with ten grand spent there are hypervisor compatibility issues, license servers, driver challenges for games/consumer OSes on hypervisors, and other inane garbage.
Consumer-wise it’s technically the easiest it’s ever been, with SR-IOV support for hardware-accelerating VMs on Intel 13th & 14th gen procs with iGPUs. However, iGPU performance is kinda dogshit, drivers are wonky, and passing multiple display heads through to VMs is weird for hypervisors.
On the Docker side of things YMMV based on what you’re trying to accomplish. Technically the NVIDIA Container Toolkit does support CUDA & display heads for containers: https://hub.docker.com/r/nvidia/vulkan/tags. I haven’t gotten it working yet, but this is the basis for my next set of experiments.
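A first experiment might look like the sketch below. This assumes the NVIDIA Container Toolkit is installed and Docker’s `--gpus` flag is wired up to it; the image tags are illustrative (check the tags page linked above), and the X socket passthrough is an untested guess, not a known-good recipe:

```shell
# Step 1 sanity check: can a container see the GPU at all?
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Step 2 (speculative): for display heads / Vulkan, pass the host X socket
# into one of the nvidia/vulkan images and see if vulkaninfo finds the device.
docker run --rm --gpus all \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  nvidia/vulkan:1.3-470 vulkaninfo
```

If step 1 fails, the toolkit/runtime wiring is the problem; no point debugging Vulkan before `nvidia-smi` works inside a container.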
Are you running redundant routers, connections, ISPs, etc.? Compromise is part of the design process. If you have resiliency requirements, redundancy will help, but it ratchets up complexity and cost.
Security has the same kinds of compromises. I prefer to build security from the network up, leveraging tools like VLANs to start building the moat. Realistically, your reverse proxy is likely battle tested if it’s configured correctly and updated. It’ll probably be the most secure component in your stack. If that’s configured correctly and gets popped, half the Internet is already a wasteland.
If you’re running containers, yeah technically there are escape vectors, but again your attacker would need to pop the proxy software. It’d probably be way easier to go after the apps themselves.
Do something like this with NICs on each subnet:
DMZ VLAN <-> Proxy <-> Services VLAN
Double NIC on the proxy. One in each VLAN.
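As a sketch of the proxy host’s addressing (interface names, subnets, and gateway are all made up; match them to your actual VLANs):

```shell
# Proxy host with two NICs, one leg in each VLAN.
ip addr add 192.168.10.2/24 dev eth0    # eth0 <-> DMZ VLAN
ip addr add 192.168.20.2/24 dev eth1    # eth1 <-> Services VLAN
ip route add default via 192.168.10.1   # only the DMZ side gets a default route
# The proxy terminates inbound traffic on eth0 and forwards to backends on eth1,
# so nothing in the DMZ can reach the Services VLAN without going through it.
```

The point of the double NIC is that the proxy is the only routed path between the two VLANs; your firewall rules on the switch/router enforce that.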
The Chinese version feels higher-budget but is unwatchable. Seems like the showrunner told the writers, “Anything science-y we explain 3 times, mandatory.” Heard them the first time? Too bad, get ready to hear the same 3-minute explanation again, now with more ‘I’ve heard of math, but never actually done it’ energy.