I am owned by several dogs and cats. I have been playing non-computer roleplaying games for almost five decades. I am interested in all kinds of gadgets, particularly multitools, knives, flashlights, and pens.

  • 0 Posts
  • 133 Comments
Joined 2 years ago
Cake day: July 2nd, 2023





  • The ratio of poor to ultra wealthy is far greater than a million to one. Other than that, the only practical reason they have for not doing it is that they still need human labor for most of what they do. That isn’t going to change anytime soon, despite AI. However, they don’t need their labor force to be free or happy, which is why the US is on the cusp of a fascist takeover.

    The rule of law has largely stopped mattering to the ultra wealthy. It may occasionally inconvenience them, but they know it will never affect them in any personal way.

    Not all of the ultra wealthy are sociopaths. Unfortunately, terminal-stage capitalism does a surprisingly good job of selecting for sociopathy at the very top of the hierarchy. Becoming that rich requires both a strong belief that you deserve it and a disregard for how acquiring it harms others.



  • One of the many things I like about Subaru is that they seem to move useful features from optional to standard, once they’ve had a chance to prove themselves. I bought an Outback in 2016 and paid extra for the EyeSight safety system. Two years later that car was destroyed in an accident (I was T-boned and rolled over twice, without anyone being hurt). I bought another Outback to replace it, but by that time the EyeSight was a standard feature. Subaru now includes EyeSight on all their cars because it saves lives.

    They have done similar things with other safety features. Four-wheel disc brakes, anti-lock braking, and all-wheel drive all became standard on Subarus relatively early.

    It is also worth noting that the more intrusive EyeSight features, like lane assist, are easy to turn off. There’s a button on the steering wheel for that one. Even if you turn it off, the car will still warn you if you start to cross lanes without using your turn signals, but it will not adjust for you.


  • Curious Canid@lemmy.ca to Technology@lemmy.world · The Copilot Delusion
    1 month ago

    It amazes me how often I see the argument that people react this way to all tech. To some extent that’s true, but it assumes that all tech turns out to be useful. History is littered with technologies that either didn’t work or didn’t turn out to serve any real purpose. This is why we’re all riding around in giant mono-wheel vehicles and Segways.


  • Curious Canid@lemmy.ca to Technology@lemmy.world · Why I don't use AI in 2025
    2 months ago

    And a great many tools have a brief period of excitement before people realize they aren’t actually all that useful. (“The Segway will change the way everyone travels!”) There are aspects of limited AI that are quite useful. There are other aspects that are counter-productive at the current level of capability. Marketing hype is pushing anything with AI in the name, but it will all settle out eventually. When it does, a lot of people will have wasted a lot of time, and caused some real damage, by relying on the parts that are not yet practical.



  • Curious Canid@lemmy.ca to Technology@lemmy.world · *Permanently Deleted*
    3 months ago

    An LLM does not write code. It cobbles together bits and pieces of existing code. Some developers do that too, but the decent ones look at existing code to learn new principles and then apply them. An LLM can’t do that. If human developers have not already written code that solves your problem, an LLM cannot solve your problem.

    The difference between a weak developer and an LLM is that the LLM can plagiarize from a much larger code base and do it much more quickly.

    A lot of coding really is just rehashing existing solutions. LLMs could be useful for that, but a lot of what you get is going to contain errors. Worse yet, LLMs tend to “learn” how to cheat at their tasks. The code they generate often has a lot of exception handling built in to hide the failures. That makes testing and debugging more difficult and time-consuming. And it gets really dangerous if you also rely on an LLM to generate your tests.
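    To illustrate what I mean by exception handling that hides failures, here's a hypothetical Python sketch of the pattern (the function and values are made up, not taken from any real generated code):

```python
# Hypothetical example of the anti-pattern: a broad except clause
# that swallows failures and returns a plausible-looking default.
def parse_price(raw: str) -> float:
    try:
        # Happy path: strip whitespace and a leading dollar sign.
        return float(raw.strip().lstrip("$"))
    except Exception:
        # The error is silently hidden. Callers get 0.0 instead of
        # a crash, so happy-path tests still pass and the bug only
        # surfaces much later, far from its actual cause.
        return 0.0

print(parse_price("$19.99"))  # works as expected: 19.99
print(parse_price("N/A"))     # silent failure: 0.0
```

    The honest version would let the exception propagate (or raise a specific error), so a bad input fails loudly at the point where it happens instead of poisoning downstream results.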

    The software industry has already evolved to favor speed over quality. LLM-generated code may be the next logical step. That does not make it a good one. Buggy software in many areas, such as banking and finance, can destroy lives. Buggy software in medical applications can kill people. It would be good if we could avoid that.





  • This would be more impressive if Waymos were fully self-driving. They aren’t. They depend on remote “navigators” to make many of their most critical decisions. Those “navigators” may or may not be directly controlling the car, but things do not work without them.

    When we have automated cars that do not actually rely on human beings, we will have something to talk about.

    It’s also worth noting that the human “navigators” are almost always poorly paid workers in third-world countries. The system will only scale if there are enough desperate poor people. Otherwise it quickly becomes too expensive.