Do you think AI is, or could become, conscious?

I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn’t mean it would actually be conscious.

This article mentions a Google engineer who “argued that AI chatbots could feel things and potentially suffer”. But surely in order to “feel things” you would need a nervous system, right? When you feel pain from touching something very hot, it’s your nerves that are sending those pain signals to your brain… right?

  • peanuts4life@lemmy.blahaj.zone · 6 days ago

    I don’t expect current AI are really configured in such a way that they suffer or exhibit more than rudimentary self-awareness. But it’d be very unfortunate to be a sentient, conscious AI in the near future and to be denied fundamental rights because your thinking is done “on silicon” rather than on meat.

    • MagicShel@lemmy.zip · 6 days ago

      I said on paper. They are just algorithms. When silicon can emulate meat, it’s probably time to reevaluate that.

      • amelia@feddit.org · 5 days ago

        You talk like you know what the requirements for consciousness are. How do you know? As far as I know that’s an unsolved philosophical and scientific problem. We don’t even know what consciousness really is in the first place. It could just be an illusion.

        • MagicShel@lemmy.zip · 5 days ago

          I have a set of attributes that I associate with consciousness. We can disagree in part, but if your definition is so broad as to include math formulas, there isn’t even common ground for us to discuss them.

          If you want to say contemplation/awareness of self isn’t part of it, then I’m not as precious about it as I would be about a human-like perception of self; people can debate what ethical obligations we have to an ant-like consciousness when we can achieve even that, but we aren’t there yet. LLMs are nothing but a process of transforming input to output. I think consciousness requires rather more than that, or we wind up with erosion being considered a candidate for consciousness.

          So I’m not the authority, but if we don’t adhere to some reasonable layman’s definition, it quickly gets into weird wankery that I don’t see any value in exploring.