Do you think AI is, or could become, conscious?
I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn’t mean it would actually be conscious.
This article mentions a Google engineer who “argued that AI chatbots could feel things and potentially suffer”. But surely in order to “feel things” you would need a nervous system, right? When you feel pain from touching something very hot, it’s your nerves that send those pain signals to your brain… right?
A child may hallucinate, lie, misunderstand, etc., but we wouldn’t say the foundations of a complete adult are absent, and we wouldn’t assess the child as not conscious. I’m not saying that LLMs are conscious because they say so (they can be made to say anything), but rather that it’s difficult to be confident that humans possess some special spice of consciousness that LLMs lack, because we too can be convinced to say anything.
LLMs can reason (somewhat unreliably) with a fraction of a human brain’s compute power, while running on hardware that was made for graphics processing. Maybe they are conscious, but only in some pathetically small way, which will only become evident when they scale up, like a child.