
I do not think that p-zombies are properly conceivable (for example, see this paper).  At least, not in the sense required to disprove physicalism from the armchair.  So where did the idea come from?

On this page I'll try to document ALL the mixups and fallacies that I see or suspect are occurring.

The conceivability of the conceivability of zombies

If you are trying to argue from the "conceivability of zombies", the mere conceivability of that conceivability is practically useless, kind of like saying "well, maybe I'll be able to prove they are logically coherent some day!"

But, alas, this seems to be what the zombie argument most frequently (and fallaciously) tries to rest on.

That is, our ignorance allows us to easily imagine that there could be premises (starting with the definition of a p-zombie) that lead to the conclusion (that such a p-zombie is in fact completely logically coherent/conceivable).  Surely we'd just need to find the appropriate premises in the middle, something like "this is what conceivable means, this is how you conceive of something, and this is how you conceive of a p-zombie; try it yourself and see".  Those middle premises are what is always missing from the zombie argument, and why it always fails.

"Whoever says no such middle premises could ever possibly exist has the burden of proof on them!"

This is fine, as long as you're not trying to use this ignorance to attack physicalism or something.

The incompleteness of the physicalist theory of qualia

Similar to the above, the answer to "how mind arises from non-mind" on physicalism is itself lacking those kinds of middle premises.  Science hasn't discovered exactly how it works in detail yet, after all.  (But note there is symmetry here, because we can once again put the burden on whoever claims no such middle premises could ever possibly exist.)

We perceive qualia in others

Kind of.  It just seems obvious to us that the people around us feel and experience, and the rocks on the ground do not. 

We get the impression that they experience things. 

And when people see a robot, they usually don't "perceive" that the robot has experience (I predict this will change as androids become more advanced; Boston Dynamics robots are already giving people different perceptions. For example, try watching this without feeling the physical exertion).  And, since imagination consists of combining different perceptions, we can sort of imagine a human and also imagine not perceiving qualia in them. 

But note that this perception (or impression) is in the observer (or imaginer).  It might not be an accurate impression.  It could be a false belief. 

The same goes for when they imagine sophisticated robots: even if they have gotten past the difficulty of imagining how those robots would work (see the point about naturalism above, and my further notes below), their impression that such a robot would not experience qualia could be a mistaken impression. 

But I think people confuse this impression for some kind of reliable knowledge.  And note that this impression does not amount to being able to conceive of p-zombies, it amounts to being able to conceive of believing (or having the impression) that a robot is a p-zombie.  Which is rather different.

The philosophy of "being identical"

There is an unrelated philosophical dispute over what it means for "two things" to be "the same one thing", perhaps just different views of it, or different descriptions of the same thing.  And some people don't know how we can tell that they really are the same thing, rather than two different things. 

When that general philosophical problem is applied to this specific topic (minds and physicalism), those people naturally might not understand how a working physical brain could, even in principle, be the same thing as a subjective experience.

But this is similar to our common experience:

  • the redness of seeing an apple doesn't resemble how it feels to touch the apple, so how can they be sensing the same object?
  • similarly, "subjective states" appear different to one sense (experiencing that subjective state directly) than they do to other senses (such as our eyes looking at the same brain activity, or imagining what the neurons would look like). That does not mean they are not the same object; by all indications of their function and causal connections to the matter located in space and time, they clearly are.

Similarity of Internal Function, or of External Behavior?

I had someone ask me basically about Turing Tests for chatbots.  Notably, that's external behavior, "black boxed".  I really think what needs to be focused on is internal functioning instead, because the zombie argument is about humans, and about whether our physical internal functioning (our brains) can produce experience.

This distinction between internal similarity and external similarity is also kinda like what I say below about "recordings" (which are externally similar, but internally have no similar function).

Also, this could link to basically what Richard Carrier says here in The Argument from Specified Complexity against Supernaturalism.  For example:

We can then easily explain why we ever think the supernatural exists, as simply an inevitable cognitive error. We can observe a mind think, or a car roll uphill, without seeing the engine inside actually bringing about the effect (the person’s brain; the car’s motor). So we can conceive of a mind thinking without a brain, and a car rolling uphill without an engine. But that’s an error. In fact, neither is possible: minds can’t exist without brains, nor can cars roll uphill without some complicated machine bringing that about. It is our inability to see the mechanism, and our seeing instead only a cause and an effect (without all the intervening messy system of causes—the neurons and blood vessels, the gears and pistons—linking the one to the other), that leads us to imagine the supernatural: a disembodied mind; a magical car. We don’t see the hidden mechanism; so we think we don’t need the hidden mechanism. But that’s a fallacy.

Retrieval-Based vs. Generative Chatbots

There is a tutorial for creating Deep-Learning Chatbots.  It outlines two different types of chatbot:

  • Retrieval-based models (easier) use a repository of predefined responses and some kind of heuristic to pick an appropriate response based on the input and context. The heuristic could be as simple as a rule-based expression match, or as complex as an ensemble of Machine Learning classifiers. These systems don’t generate any new text, they just pick a response from a fixed set.
  • Generative models (harder) don’t rely on pre-defined responses. They generate new responses from scratch. Generative models are typically based on Machine Translation techniques, but instead of translating from one language to another, we “translate” from an input to an output (response).

Notice that "retrieval" is similar to the tape-recording example that I mention below, except it first calculates which recording to activate.  Basically it plays a different recording depending on which "button" you push.  It might as well be a bunch of recordings, and you can choose which one you want to play.  In that case, the chatbot is doing no "thinking", you are.  It's purely feed-forward.
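To make the "bunch of recordings" point concrete, here is a minimal sketch (in Python, purely illustrative, and not taken from the tutorial) of a retrieval-based bot.  All of the patterns and response strings below are made up, and the "heuristic" is just a substring match, but it shows how such a bot only selects among pre-written responses and never composes new text.

    # Toy retrieval-based bot: it never composes new text, it only
    # selects one of its pre-written responses ("recordings") based
    # on which "button" the input pushes.
    CANNED_RESPONSES = {
        "hello": "Hi there!",
        "how are you": "I'm doing fine, thanks for asking.",
        "bye": "Goodbye!",
    }

    DEFAULT_RESPONSE = "Sorry, I don't have a recording for that."

    def retrieval_bot(user_input: str) -> str:
        """Pick one of the predefined responses; never generate new text."""
        text = user_input.lower()
        for pattern, response in CANNED_RESPONSES.items():
            if pattern in text:      # the "heuristic" here is a trivial substring match
                return response      # play back the matching "recording"
        return DEFAULT_RESPONSE

    print(retrieval_bot("Hello, bot"))      # -> Hi there!
    print(retrieval_bot("Tell me a joke"))  # -> Sorry, I don't have a recording for that.

A generative model, by contrast, would compute its reply from the input rather than look one up, which is part of why it is harder to build.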

Generative, on the other hand, is closer to a real, fully-functioning mind.  At least in some ways, I think...

A Comment I made to someone once

Here's a post I wrote in a comment section once:

The thought experiment proposes that they would be able to convince everyone forever, because they would be behaviorally indistinguishable from regular people. But somehow the zombies wouldn’t have any subjective experience. The question is: is that possible?
I think it depends on what is inside them, making it work (it appears you do too, so hopefully you’ll agree with me). Obviously you think if something has the insides you do, it is like you.
If inside, there was just a recording that took no inputs from the outside world and just outputted muscle movements etc, this would not be conscious or experience anything. It would be no different from any tape recording (you could also compare it to cartoons on a screen, which do fool our social perceptions). In order for this recording to fool anyone interacting with it, the creator of it would need precise knowledge of the future, so this is incredibly difficult to accomplish!!! Probably even literally impossible for any significant length of time. By the way, this would be an example of a mechanism using “feed-forward control”.
But what about feedback control? As I said before: the human nervous system fits the model of a feedback control. There’s inputs, memory and learning, pattern recognition, connection between related things, comparisons, and selection of outputs based on goals. Something like that. And it all happens in real-time, it’s not merely a recording set up beforehand.
I think intuition can still fail here. Because everything physical is ultimately more like unintelligent feed-forward control when you look at the small parts. So the difficulty for the untrained imagination is figuring out how one becomes the other when you build it (and indeed, how it could become a seemingly whole and undivided experience like ours). Imagination and intuition can also fail to understand that an artificial intelligence wouldn’t be like a recording, it would have to use feedback (and thus goals and comparisons) in an extremely detailed fashion. Pattern recognition, semantic relations/connections, lots of stuff. Everything would have to come together to make that coordinated wholeness of intelligent behavior.
Some good further reading:
http://holtz.org/Library/Philosophy/Epistemology/Zombie%20conceivability%20-%20Cottrell.htm
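To illustrate the feed-forward versus feedback distinction from the comment above, here is a toy sketch (in Python, purely illustrative; the "room" model and all of the numbers are made up).  The feed-forward version plays back a heater schedule that was fixed in advance, like a recording, while the feedback version senses the current temperature, compares it to a goal, and chooses its output from that comparison.

    def simulate(heater_schedule=None, use_feedback=False, target=21.0, steps=10):
        """Drive a crude room-temperature model either from a fixed script
        (feed-forward, like a recording) or from live measurements (feedback)."""
        temperature = 15.0
        for step in range(steps):
            if use_feedback:
                # Feedback: sense the current state, compare it to the goal,
                # and pick the output based on that comparison.
                error = target - temperature
                heater_power = max(0.0, min(1.0, 0.5 * error))
            else:
                # Feed-forward: the outputs were fixed in advance,
                # with no sensing of what is actually happening now.
                heater_power = heater_schedule[step]
            # Crude physics: the heater warms the room, and the room leaks heat.
            temperature += 2.0 * heater_power - 0.1 * (temperature - 10.0)
            print(f"step {step}: power={heater_power:.2f} temperature={temperature:.1f}")

    # The pre-recorded script only works if its author predicted the future correctly.
    simulate(heater_schedule=[1.0] * 5 + [0.0] * 5)

    # The feedback controller needs no such foresight; it reacts as it goes.
    simulate(use_feedback=True)

The point of the contrast is the same one made in the comment: a recording only fools anyone if its creator anticipated everything in advance, whereas a feedback system adjusts in real time by comparing what it senses against its goals.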

