• bampop@lemmy.world · 3 days ago

    I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.

    • joel_feila@lemmy.world · 3 days ago

      Well, making the slide rule was a form of cognitive offloading, but only barely: you still had to know how to use it and which formula to apply. Moving to the pocket calculator just changed how you did it; it didn’t really increase how much thinking we offloaded.

      But this is something different. With infinite content, algorithms just make the next choice of what we watch, and people now blindly trust whatever an LLM says. We are offloading not just a complex task like the square root of 55, but “what do I want to watch?” and “how do I know this is true?”

      • bampop@lemmy.world · 3 days ago

        I agree that it’s on a whole other level, and it poses challenging questions as to how we might live healthily with AI, to get it to do what we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out-of-control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it’s up to us to find the best way to live with these pressures, and engage with this technology on our own terms.

        • joel_feila@lemmy.world · 3 days ago

          “how we might live healthily with AI, to get it to do what we don’t benefit from doing”

          Agreed, that is our goal, but one issue is that AI companies aren’t paying for training data. Also, and this is the biggest one: what benefits me is not what benefits the people who own the AI models.

          • bampop@lemmy.world · 2 days ago

            “what benefits me is not what benefits the people who own the AI models”

            Yep, that right there is the problem.

    • sugar_in_your_tea@sh.itjust.works · 3 days ago

      The article agrees with you; it’s just a caution against overuse. LLMs are great for many tasks, just make sure you’re not short-changing yourself. I use them to automate annoying tasks, and I avoid them when I need to actually learn something.