• udon@lemmy.world · 4 months ago

    Does it?

    Yes, in the sense that “thing moves around and does stuff” becomes more predictable if you assume a certain degree of consciousness. That’s easier than “thing is at this position now, was at a different position before, was at yet another position before that”. You reduce some of the complexity and unpredictability by introducing an explanation for these changes in world state. So far, the assumption that other humans and animals have some consciousness has worked well for me, and I’m not aware of any striking evidence that would cast doubt on it.

    The problem with this isn’t that it’s literally unprovable

    Yes, that’s a problem, but it’s closely related to the other one. It’s actually quite hard to “prove” anything about the real world. In the case of other humans’ and animals’ consciousness, though, the evidence points that way (at least for me). The evidence in the case of “AI” is different. For example, these systems seem to have no awareness of time and no awareness of the world beyond the limited context of a conversation. Set aside the fancy marketing term that suggests something similar to a living being is involved, and what we currently see are admittedly impressive programs that run on statistics; I don’t need to assume any “consciousness” to explain what they do.

    • Jojo, Lady of the West@lemmy.blahaj.zone · 4 months ago

      You reduce some of the complexity and unpredictability by introducing an explanation for these changes of world state

      My concern is that “consciousness” isn’t so much an explanation as it is a heuristic. We feel conscious and have an internal experience, so it seems reasonable to say that such a thing exists, but beyond one’s own self there’s no point where it is certain to exist, and no clear criterion or mechanism we can point to.

      What about the p-zombie, a human who has no internal experience and just follows a set of rules, but acts like every other human? What about a cat, which apparently has a less complex internal experience but seems to act as we’d expect if it has something like one? What about a tick, or a louse? What about a water bear? A tree? A paramecium? A bacterium? A computer program?

      There’s a continuum one could construct that includes all of those things and ranks them by how similar their behaviors are to ours, calling the things close to us conscious and the things farther away not, but the line is always going to be fuzzy. There’s no categorical difference that separates one end of the spectrum from the other; it’s just a matter of picking where to put the line.

      And yes, we perhaps understand the mechanism by which an AI gets from input to output better than we do for a human, but that understanding isn’t complete, and our understanding of how humans get from input to output is likewise only partial. We can see how the arrangement and function of nerve cells in a “brain” lead to the behaviors we observe, and we have even fully simulated the brains of some organisms in machine code. That is not so dissimilar from how a computational neural network operates. The categorical distinction of “well, one is a computer” doesn’t work when we have literally simulated an organic brain on a computer as well.