• 1 Post
  • 152 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • My ISP seems to use plain DHCP for assigning addresses and honors reuse requests. The only times my IP addresses have changed have been when I’ve changed the MAC or UUID that connects. I’ve been offline for a week, come back, and been given the same address - both IPv4 and v6.

    If one really wants their home systems to be publicly accessible, it’s easy enough to get a cheap vanity domain and point it at whatever address. rDNS won’t work, which would probably interfere with email, but most services don’t really need it. It’s a bit more complicated to detect when your IP changes and script a DNS update, but certainly doable, if (like OP) one is hell-bent on avoiding any off-site hardware.
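    The detect-and-update loop can be sketched roughly like this. The update URL, token, and state path are placeholders - every DNS host has its own API, so treat this as a shape, not a working client:

```python
import json
import urllib.request
from pathlib import Path

STATE = Path("/var/lib/ddns/last_ip")       # where the last-seen IP is cached
UPDATE_URL = "https://dns.example/update"   # placeholder: your DNS host's API
TOKEN = "changeme"                          # placeholder credential

def current_ip() -> str:
    """Ask a public echo service for our external address."""
    with urllib.request.urlopen("https://api.ipify.org") as r:
        return r.read().decode().strip()

def ip_changed(ip: str, state: Path) -> bool:
    """True if ip differs from the cached value; caches the new one."""
    old = state.read_text().strip() if state.exists() else None
    if ip == old:
        return False
    state.parent.mkdir(parents=True, exist_ok=True)
    state.write_text(ip)
    return True

def main() -> None:
    ip = current_ip()
    if ip_changed(ip, STATE):
        req = urllib.request.Request(
            UPDATE_URL,
            data=json.dumps({"ip": ip, "token": TOKEN}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)         # fire the DNS update

if __name__ == "__main__":
    pass  # run main() from cron, e.g. every 5 minutes
```

    Run from cron; the cached-IP check keeps it from hammering the DNS API when nothing has changed.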






  • It really depends on what your data is and how hard it would be to recreate. I keep a spare HD in a $40/year bank box & rotate it every 3 months. Most of the content is media - pictures, movies, music. Financial records would be annoying to recreate, but if there’s a big enough disaster to force me to go to the off-site backups, I think that’ll be the least of my troubles. Some data logging has a replica database on a VPS.

    My upload speed is terrible, so I don’t want to put a media library in the cloud. If I did any important daily content creation, I’d probably keep that mirrored offsite with rsync, but I feel like the spirit of an offsite backup is offline and asynchronous, so things like ransomware don’t destroy your backups, too.






  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost · 2 months ago

    Wonder if there’s an opportunity there. Some way to archive one’s self-hosted, public-facing content, either as a static VM or, like archive.org, just the static content of URLs. I’m imagining a service one’s heirs could contract to crawl the site, save it all somewhere, and take care of permanent maintenance, renewing domains, etc. Ought to be cheap enough to maintain the content; presumably low traffic in most cases. Set up an endowment-type fee structure to pay for perpetual domain reg.


  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost · 2 months ago

    At least my descendants will own all my comments and posts.

    If you self-host, how much of that content disappears when your descendants shut down your instance?

    I used to host a bunch of academic data, but when I stopped working, there was no institutional support. Turned off the server and it all went away (though the Wayback Machine still has archives). I mean, I don’t really care whether my social media presence outlives me; the experience just made me aware that personal pet projects are pretty sensitive to that person.




  • Back in the day, I set up a little cluster to run compute jobs. Configured some spare boxes to netboot off the head-node, figured out PBS (dunno what the trendy scheduler is these days), etc. Worked well enough for my use case - a bunch of individually light simulations with a wide array of starting conditions - and I didn’t even have to have HDs for every system.

    These days, with some smart switches, you could probably work up a system to power nodes on/off based on the scheduler demand.
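    The power-on-demand idea could be as simple as polling the queue and reconciling plug state. Everything here is a stand-in - jobs-per-node, node count, and the `toggle` callback all depend on your scheduler and whatever API your smart plugs expose (Tasmota HTTP, Home Assistant service, etc.):

```python
import math

MAX_NODES = 8  # assumption: size of the hypothetical cluster

def nodes_needed(queued: int, jobs_per_node: int = 4,
                 min_nodes: int = 1, max_nodes: int = MAX_NODES) -> int:
    """How many compute nodes should be powered for the current queue depth.

    jobs_per_node/min_nodes/max_nodes are made-up tuning knobs; set them
    for your own cluster.
    """
    want = math.ceil(queued / jobs_per_node) if queued else 0
    return max(min_nodes, min(max_nodes, want))

def reconcile(queued: int, powered: set, toggle,
              max_nodes: int = MAX_NODES) -> None:
    """Toggle nodes on/off so the powered set matches demand.

    `toggle(node_id, on)` is a placeholder for your smart-plug API call;
    `powered` is the set of node ids currently drawing power.
    """
    target = nodes_needed(queued, max_nodes=max_nodes)
    for node in range(max_nodes):
        want_on = node < target
        if want_on != (node in powered):
            toggle(node, want_on)
```

    A cron job that reads queue depth from the scheduler and calls `reconcile` would close the loop; the clamp to `min_nodes` keeps one node warm so short jobs don’t wait on a cold boot.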



  • tburkhol@lemmy.world to Selfhosted@lemmy.world · Starting to self host · 3 months ago

    If you’re already running Pi-hole, I’d look at other things to do with the Pi.

    https://www.adafruit.com/ has a bunch of sensors you can plug into the Pi, Python libraries to make them work, and pretty good documentation/examples to get started. If you know a little Python, it’s pretty easy to set up a simple web server just to poll those sensors and report their current values. Only slightly more complicated to set up cron jobs to log the data to a database and a web page to make graphs.
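    A minimal version of that poll-and-report server, using only the standard library - `read_sensor` is a stub where one of Adafruit’s drivers (e.g. a BME280 library) would actually go:

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensor() -> dict:
    """Stub: swap in a real Adafruit sensor driver here."""
    return {"temp_c": round(random.uniform(18, 25), 1),
            "humidity": round(random.uniform(30, 60), 1)}

def render(reading: dict) -> bytes:
    """Serialize a reading for the HTTP response body."""
    return json.dumps(reading).encode()

class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render(read_sensor())
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on the Pi:
#   HTTPServer(("0.0.0.0", 8000), SensorHandler).serve_forever()
```

    Hitting port 8000 then returns the current reading as JSON; a cron job could call `read_sensor()` on a schedule and append to a database for graphing.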

    It’s pretty straightforward to put https://www.home-assistant.io/ in a Docker container on a Pi. If you have your own local sensors, it will keep track of them, but it can also track data from external sources, like weather & air quality. There are a bunch of inexpensive smart plugs ($20-ish) that will let you turn stuff on/off on a schedule or in response to sensor data.

    IMO, a Pi isn’t great for throughput-intensive services like radarr or jellyfin, but with a USB HD/SSD it might be an option.


  • Since this article is specifically about PM2.5, I’m going to chime in and say I have a gas range with no extractor, and the only time my PM2.5 sensor picks anything up is when frying generates smoke and oil aerosols. That’s more a function of cooking temperature than fuel, and my induction hotplate will generate just as much.

    CO2? Definitely more with gas. Trace chemicals? Probably more with gas, but all the studies I’ve seen are just about running the cooktop, with no food, in a sealed room. Run the extraction hood or open a window when you cook - it’s not just the heat source.


  • I’ve got one - just a 120V, home-use thing - but it gets far hotter, faster than my stove. It tends to have a cool spot in the very center, maybe 3" in diameter, unless you circulate the wok, and you can’t flame food by tossing it into the fire (which you can’t really do on a residential stove, either). It’s a decent approximation of a wok jet for home cooks.


  • That’s my point: fusion is just another heat source for making steam, and with these experimental reactors, they can’t be sure how much heat they’ll generate or for how long. They’re probably not even sure what a good geometry is for transferring energy from the reaction mass to the water. You can’t build a turbine for a system that’s only going to run 20 minutes every three years, and you can’t replace that turbine just because the next test will have ten times the output.

    I mean, you could, but it would be stupid.