Philosophy of Mind

Cartesian dualism

Substance dualism:
There exists an essential separation between mind and body.
The mind is an immaterial substance attached in a special way to a bodily machine.
Radical thesis, going against philosophical (Aristotelian) tradition that:
there is an essential connection between mentality and life
minded creatures are a subclass of living creatures: only living, biological kinds/things made of organic matter can have minds (like ours).

Descartes' arguments on Animals

Argument that animals lack thought:
"language sign is the only sure sign of rational thought and no animals other than humans use language" (in Crawford 2010, p. 2).
We simply cannot know as "the human mind does not reach into [the beasts'] hearts" (Descartes 1985, vol. III, p. 365).
Only language users are capable of having real thought.
Do you agree? Consider the non-verbal communication or planning behaviour of animals.
Argument that animals lack sensation:
Sensations are judgmental; to have a sensation of pain is also to be judging, being aware or knowing that we are in pain. Sensory experience is a mental process.
Judging/knowing belongs to the mind.
Animals don't have minds
So, animals don't feel pain.
Do you agree with this logic?

Descartes on mechanisms

Not only rejecting the connection between life and mind, but also rejecting the separation between life and mechanism
= Living and non-living things operate the same way: mechanistically. We can use mechanical principles to explain the workings of the body.
There is no reason to postulate vegetative, sensitive or locomotive soul or any 'principle of life' to explain the functioning of human body. All physical nature is one big mechanism operating according to mathematically expressible laws.

Substance Dualism argument cont.: mechanical body (matter) vs. immaterial soul (mind)

There is one power only that cannot be explained by mechanical principles: mind/thought/soul
Everything else can be explained by mechanical principles.
Therefore, the soul must be made of 'different stuff', a.k.a. immaterial substance.
Since bodies do not think (only the soul does),
And only bodies can be alive,
There is no essential connection between thought and life.
(That is how the soul can live beyond the body = Catholic doctrine).

Why does Descartes think it is possible for the mind to exist without the body?

Argument from Doubt, or "Cogito Ergo Sum"
P1. I can doubt that I have a body.
P2. The only thing I cannot doubt is the doubting itself.
C: Doubting (thinking) is real.
P3: What truly belongs to me (what makes my essence) is something I cannot doubt.
P1: I can doubt that I have a body
C: the body does not truly belong to me/my essence. It's a 'temporary house' for my soul.

Connection between mind and body after all?

However, according to Descartes, there is no reason that the immaterial soul could not be united with a machine.
Pineal gland: the magic meeting point in the brain
It mediates between bodily sensations and immaterial thought
"The path of burning pain".

Ryle: Logical Behaviourism

Mental ascriptions simply mean things about behavioural responses to the environment
E.g., to say 'Edmund is in pain' means not something about Edmund's inner life or an episode taking place 'within' Edmund, but that Edmund is either actually behaving in a pain-like way (wincing, groaning) or is disposed to behave like that (he would behave this way if something wasn't stopping him, e.g., trying to look tough).
This is not to say that the mental is identical with actual behaviour; mental ascriptions are analysed in terms of behavioural dispositions.

The Ghost in the Machine (GITM)

Targeting the official doctrine, which says that:
1) There are two worlds (physical and mental).
2) First is composed of matter, second of consciousness/mind.
3) Matter is located in space (bodies), the mind is not.
4) Since matter is located in space, bodies (parcels of matter) can be observed and so they are public objects. What happens to my body can be seen. Minds, however, are not in space, so their workings are not observable by the others. Minds, therefore, are private and can only be accessed by introspection.
5) Bodies are subject to mechanical laws; minds are not.
6) Mind is 'inner' while the body is 'outer'
Problem with 6? How can mind be 'inner' if it is not in space? Then it is not 'inside' the body, because it cannot be inside anything; it does not occupy any space.
Suggestion: Inner/outer description is thus metaphorical.
"The mind is its own place and in his inner life each of us lives the life of a ghostly Robinson Crusoe. People can see, hear and jolt one another's bodies, but they are irremediably blind and deaf to the workings of one another's minds and inoperative upon them" (Ryle 1949).

GITM leads to the 'Problem of other minds'

Criticism of the official doctrine: it gives rise to the problem of other minds
If you can only observe my body and its behaviour then how do you know what is going on in my mind? How do you know I have a mind at all?
Since the private 'inner' world is not accessible to 'external' observers, can you establish that lunatic behaviour is correlated with true mental lunacy, or others' pain behaviour is correlated with painful feelings, as opposed to something else (e.g., ticklish feelings)?

GITM leads to 'Privileged access'

Only we have access to our own minds
Each mind has largely perfect, indubitable knowledge of its own states and processes, or 'privileged access'
Our minds have a special kind of perception called introspection. We can observe the passing stream of our consciousness (in some non-optical way)
Cartesian Theatre: the picture of the mind as an inner stage on which the stream of consciousness is displayed to an inner observer

Category mistakes

Category mistake = mistake of assigning something to a category to which it does not belong, or misrepresenting the category to which something belongs.
E.g.1: To think that since we can lose our tempers, and lose our wallets, tempers and wallets belong to the same category of 'things' would be to commit a category mistake.
E.g.2: Saying "Number two is furious", as if a number were the kind of thing that could have emotions.

Cartesian category mistake

- is to think that the mind is an entity additional to the body, and that mental phenomena are things over and above bodily phenomena.
The official doctrine holds that "Minds are things, but different sorts of things from bodies; mental processes are causes and effects, but different sorts of causes and effects from bodily movements" (Ryle 1949). For Ryle, this is the category mistake.

Looking for the University example

A visitor is shown the colleges, libraries, and offices of Oxford and then asks: "But where is the University?" The mistake: treating the University as one more thing of the same category as the buildings, rather than the way in which all that he has seen is organised. Descartes makes a parallel mistake with the mind.

Ryle's concept of mind

The mind is the capacity or ability to engage in various kinds of outward behaviour, all of which is public and observable by others.
Mental states are dispositions to behave in certain ways.
Dispositions are tendencies to exhibit or manifest something in certain kinds of circumstances; they are not the manifestations themselves.
An object can have a disposition even if the disposition is never manifested: the solubility of aspirin in water, the breakability of glass when knocked off the table.
Similarly, to believe that it will rain is just to be disposed (ready) to take an umbrella with you when you go out.
No extra spooky stuff!

Rylean solution to the problem of other minds

Descartes: mind is private and behaviour is public.
Only contingent connection between mental states and behaviour (mental states cause behaviour).
Ryle: The mind is to be disposed to behave in certain ways = necessary connection between mind and behaviour, since they are (close to) the same thing.
Descartes: cannot know others' minds (only my own)
Ryle: I can know others' minds; in seeing their behaviour I see their mind in action.

Challenges to behaviourism/ problems for Ryle

Behaviourism (of any kind) is leaving out something real and important:
1) Anyone who is not anaesthetised knows that he/she experiences and can introspect actual inner mental states or occurrences (there is something it is like)
2) That need not be accompanied by actual behaviour (we can pretend)
3) It seems possible for two people to differ psychologically despite total similarity of their actual behaviour, so I cannot know their mind
4) Behavioural analyses of mental ascriptions seem adequate only so long as one makes substantive assumptions about the rest of the subject's mentality.

Physicalism - definition

Physicalism (Materialism): theory of mind that holds that all mental phenomena are states of the body
What Armstrong accepts about behaviourism: individual mental states are logically tied to behaviour
What he doesn't accept: individual mental states are identical to behaviour or dispositions to behave
Proposal: mental states are essentially causes of behaviour.
E.g., what we mean by the mind, mental, belief, desire, pain is that it is a cause of behaviour

Comparison to Cartesian Dualism

Descartes:
Connection between mind and behaviour is contingent (accidental, not necessary).
Consciousness is self-sufficient and independent of bodily behaviour. The mental can cause behaviour, but it doesn't have to.
Armstrong:
Connection between the physical and the mental is necessary.
It is the nature or essence of the mental to cause (or be caused by) behaviour.
E.g., for something to be poison, it must cause organisms to become sick or die
E.g. 2, for something to be a fossil, it must have been caused by a once-living organism

Identity Theory

Armstrong's definition of the mental: "a state of the person apt for producing certain ranges of behaviour" (1981) = the mental as the cause of the behaviour
How does it fit with Physicalism? (= all mental states are states of the body)
Science tells us that what causes behaviour are physical states of the body: the central nervous system (CNS).
What causes the behaviour is the mental
So the CNS is identical to the mental

Types v Tokens

Armstrong's identity theory is a 'type identity theory': It identifies types of mental states with types of physical states.
Types = kinds of things
Tokens = instances of types
E.g., type: dogs; tokens: your dog, my dog, that dog, Napoleon's dog, Lassie.
E.g.2, mental type: thought; mental token: your thought, my thought
Same type, different tokens

Type Identity Theory (Armstrong)

Mental type 'pain' is identical with a certain physical type (c-fibre stimulation).
Just like the type 'water' is identical with a certain chemical type (H2O).
Every token of the type pain (my pain, your pain, his pain) is a token of the same physical type (c-fibre stimulation), but a different token c-fibre stimulation in each case.
Just like every token (sample) of pure water in the universe is a token of the type H2O.

Compare: Token Identity Theory

Token Identity Theory: each token mental state or event is identical with a token physical state or event.
What follows?
My token pain and your token pain need not be tokens of the same type pain; the types of pain could be different.
E.g., my pain is identical to c-fibre stimulation in my brain, your pain is identical to y-fibre stimulation in your brain.
So all pain need not be one and the same single physical state.
Which view is better?

Armstrong's method: 2 stages

Stage 1: Conceptual stage - we find out using logic (or assume) a priori (independently of experience) that mental states are states apt for causing behaviour.
Stage 2: Empirical stage - science discovers a posteriori (through empirical investigation) which physical states and processes actually play those causal roles.
Putting stage 1 and stage 2 together = view that the mental must be identical to the physical.

Caveat

Making use of empirical data only!
Gathering information from people through surveys (usually intuitions of regular folk) in order to inform philosophical questions
Are intuitions good enough?
Are the intuitions of non-philosophers good enough?

Problems with physicalism

Problem 1: Why focus on causation?
Is the starting definition a good one? Are all mental states "essentially states apt for causing behaviour"?
Ryle: they are not causes
Descartes: they are contingent causes
Strawson (1994): they are causes of other mental states, but not of behaviour
We need to challenge this assumption, or the reasons that led Armstrong to believe it is a good starting point
Problem 2: The painfulness of pain
Is the causal role of pain all there is to pain?
There is a quality and experience of painfulness that the identity theory does not seem to capture
Thought experiment: automata (or zombies):
Physical duplicate of me
Its internal states (c-fibre stimulation) cause pain behaviour
But still it does not feel pain
The experiential quality seems to be logically independent of the causal relations pain states bear to behaviour
Problem 3: Multiple realizability
Armstrong: Pain is identical with c-fibre stimulation.
What if other animals lack c-fibres?
Then, those other animals could not feel pain.
Armstrong: The belief "Elvis lives" is identical with, e.g., e-fibre stimulation.
How likely is it that everyone who believes that "Elvis lives" has the same type of brain configuration with e-fibres?
Objection from multiple realizability: mental states might be realized in diverse creatures in multiple ways.
Example of pain: Pain might be realized in humans by c-fibres, and it might be realized in octopi by something else.
Saying that only creatures who have c-fibres firing can feel pain is indefensible.
Question: is it still pain?

Functionalism - definition

The mind is a system of mental states.
The essence of the mental is not the kind of stuff it is made of
Consciousness (Cartesian Dualism)
Behaviour and dispositions (Rylean Behaviourism)
Neural activity (Armstrong's Identity Theory)
but the functional role it plays in the cognitive system of an individual.
"Functionalism (...) recognizes the possibility that systems as diverse as human beings, calculating machines and disembodied spirits could all have mental states. In the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, mental or spiritual energy) but on how the stuff is put together" (Fodor 1981, p. 114).
Role-Realizer distinction:
Functionalism: mental states are functional states that play certain causal roles and are capable of multiple realization in a variety of different media.
Role - a function something plays
Realizer - that which fulfills the function, or brings it into reality
Example: Clocks
Clock - functional definition - "something that tells time"
Multiple realizability:
Grandfather clocks
Analogue watches
Digital watches
Sundials
Different materials:
Metal
Plastic
Wood
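The role-realizer distinction can be sketched in code (a rough analogy; class and parameter names are invented for illustration): one functional role, "something that tells time", realized by materially very different mechanisms.

```python
# One role ("tells time"), many realizers: multiple realizability.
from abc import ABC, abstractmethod
import time

class Clock(ABC):                      # the role: anything that tells time
    @abstractmethod
    def tell_time(self) -> str: ...

class DigitalWatch(Clock):             # realizer 1: electronics
    def tell_time(self) -> str:
        return time.strftime("%H:%M")

class Sundial(Clock):                  # realizer 2: a shadow on stone
    def __init__(self, shadow_angle_deg: float):
        self.shadow_angle_deg = shadow_angle_deg
    def tell_time(self) -> str:
        hour = int(self.shadow_angle_deg / 15) % 24   # 15 degrees per hour
        return f"{hour:02d}:00 (roughly)"

# Both occupy the same role despite utterly different material make-up.
for clock in (DigitalWatch(), Sundial(shadow_angle_deg=195)):
    print(clock.tell_time())
```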

Similarly, Pain

What is important is not that the c-fibres are firing, but that their firing contributes to the operation of the organism as a whole.
To be in pain is to be in some state or other (of whatever biochemical description) that plays the same causal role as the firings of c-fibres play in human beings.
Input: pain is caused by bodily damage
Output: pain causes behaviour aimed at relieving the pain
Mediation: sensory input causes the belief that one is in pain and the desire to get rid of the pain (mental state causation), which in turn causes the behavioural output
P.S.: One can be a functionalist and not be a materialist!
Functionalists are usually materialists (they think that mental states are in fact based on material medium), but they don't have to be. Consider pain as an example:
Identity Theorist: Science tells us that what realizes the pain-role in humans is the firing of c-fibres, so being in pain is just having firing c-fibres. Creatures without c-fibres cannot be in that state.
Functionalist: Pain is the state of having the pain-role played by some internal state or other. Having firing c-fibres is but one way to do this; there could be others. Creatures without c-fibres can also be in this role state.
Putnam (1975): "the functional state hypothesis is not incompatible with dualism!" (p. 436).

Putnam: Machine Functionalism

Putnam compared mental states to the functional or logical states of the computer.
To be in a state 'M' is to be in some physiological state or other that plays role 'R' in the relevant computer program
Computer programs mediate between the inputs and outputs
The physiological state plays role 'R' in that it stands in a set of relations to physical inputs, outputs and other inner states that matches one-to-one the abstract input/output/logical state relations codified in the computer program
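A standard illustration from the functionalist literature is Ned Block's dime-operated Coke machine: an internal state is individuated purely by its relations to inputs, outputs and other states. A toy sketch (the table below is the machine's entire 'psychology'):

```python
# State S1 = owes 10c, S2 = owes 5c. Each state is defined by nothing
# more than this table of input/output/next-state relations.
TABLE = {
    ("S1", "nickel"): ("S2", None),
    ("S1", "dime"):   ("S1", "dispense coke"),
    ("S2", "nickel"): ("S1", "dispense coke"),
    ("S2", "dime"):   ("S1", "dispense coke and a nickel"),
}

state = "S1"
for coin in ["nickel", "nickel", "dime"]:
    state, output = TABLE[(state, coin)]
    print(coin, "->", output)
# Being in S2 just IS being in whatever physical state plays this role,
# whether realized in metal gears or in silicon.
```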

(Chronic) Problems with Functionalism

We have still not answered:
How it is that pain feels a certain way? (phenomenal character)
Propositional attitudes represent certain states of affairs: beliefs and desires are about something, they have content. How can a purely physical entity or state have the property of being about something (that is not there at the time)?

The Computational Theory of Mind (CTM)

"The prevailing view in philosophy, psychology and artificial intelligence is one which emphasizes the analogies between the functioning of the human brain and the functioning of digital computers. According to the most extreme version of this view, the brain is just a digital computer and the mind is just a computer program" (Searle 1984, p. 28).
Computationalism is a specific form of cognitivism that argues that mental activity is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine.
Compares mind to how computers function: storing and manipulating symbols.
Mental states are defined by their causal roles; the causal roles are implemented by computational processes.
Two key parts to CTM
Representation: postulates inner semantic symbols which are mental representations with contents (e.g., the Language of Thought, LOT)
Computation: Syntactic properties. Equates thinking with formal manipulation of inner symbols.
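The computation component can be illustrated with a toy syntactic engine (a sketch; the symbols and the single rule are invented): the rule matches symbol shapes only, and what the strings mean plays no role in the derivation.

```python
# Purely formal symbol manipulation: modus ponens by pattern matching.
def modus_ponens(beliefs):
    """Derive Q from P and ("if", P, Q) by shape alone."""
    derived = set(beliefs)
    for b in beliefs:
        if isinstance(b, tuple) and b[0] == "if" and b[1] in beliefs:
            derived.add(b[2])          # add the consequent symbol
    return derived

kb = {"it_rains", ("if", "it_rains", "streets_wet")}
print(modus_ponens(kb))  # derives "streets_wet" without 'understanding' it
```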

Like Turing machines

Analogy with Turing machines: Any creature with a mind can be regarded as a Turing machine, whose operation can be fully specified by a set of instructions (a "machine table" or program) operating on abstract symbols.
A Turing machine is an idealised computing device consisting of a read/write head (or 'scanner') with a paper tape passing through it. The tape is divided into squares, each square bearing a single symbol--'0' or '1', for example.
To machine functionalists the proper model for the mind and its operations is a probabilistic automaton: the program specifies, for each state and set of inputs, the probability with which the machine will enter some subsequent state and produce some particular output.
= Not just description, but prediction
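A minimal executable sketch of the device just described (the bit-flipping machine table is an arbitrary illustration, not from the lecture):

```python
# A Turing machine: a machine table mapping (state, symbol) to
# (new state, symbol to write, head movement), run over a tape.
def run_turing_machine(table, tape, state="start", head=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = table[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example machine table: flip every bit, halt at the blank square.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip, "0110_"))  # -> "1001_"
```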

Artificial Intelligence (AI)

The project of getting computer machines to perform tasks that would be taken to demand human intelligence and judgment.
Can computers think?
1: What intelligent tasks can a computer perform?
2: Given 1, does it perform them the way humans do?
3: Given 1 and 2: Does that show that it has a psychology and mental states?

Strong AI

Strong thesis: computers can be programmed to think, and human minds are computers that have been programmed by 'biological hardwiring' and experience.
Strong AI: correctly written program running on a machine actually is a mind
"mind is to the brain as the program is to the computer hardware (...) On this view, any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense as you and I have minds" (Searle 1984, p. 28).
Suggestion: Computers could be intelligent and have a conscious mind just as ours. They should be able to reason, solve puzzles, make judgments, plan, learn, communicate, etc.

Weak AI

A machine running a program is at most capable of simulating real human behaviour and consciousness.
Machines can act 'as if' they were intelligent.

Limits of CTM

How many aspects of mind can it account for?
1. Reasoning: keeping a 'rational relation' in sync with causal relation. But when does a mental process count as reasoning? Three types of theoretical reasoning:
A) Deductive: the conclusion is logically entailed by the premises (if the premises are true, the conclusion must be true).
If ravens are black, and Arch is a raven, Arch is black.
B) Inductive: generalizing from observing a representative sample to an unobserved whole
We infer that all ravens are black from seeing many black ravens
C) Abductive: inference to the best explanation
Best explanation for certain cosmological facts about motions of stars: dark matter
Best explanation for why the diamonds were found in John's safe and his fingerprints on them: he stole them.
No computational process is known that can implement inductive and abductive reasoning
2. Emotions: greatly 'impair' rational thinking (Crawford 2010, p. 103).
Either emotions are computational processes that CTM has left out, or they resist being captured by computation, and we need another explanation.
3. Imagination:
Can a computer be creative? (Manipulation of 0s and 1s)
Can creativity be understood computationally?
Alternative model: connectionism?
(Modeled on interconnected neural networks)
4. Mental representations (part of CTM): how do they get their meanings?
According to CTM, to believe that x ('Turing cracked the enigma code') is to have a mental symbol in your head that means, or has the content, that Turing cracked the enigma code.
But where does it get this meaning from? And how can the thought be directed at things that do not exist?
= Problem of intentionality (the power of minds to be about, to represent, or to stand for, things, properties and states of affairs): What determines what we do is what our mental states are about, but aboutness is not a category of natural science.

Syntax vs. Semantics

Searle (1984, p. 31): "There is more to having a mind than having formal or syntactical processes". We need semantics, or mental content.
Syntax: formal/grammatical structure; how we present information.
Semantics: meaning; what is the information about.
"Colourless green ideas sleep furiously

Chinese Room experiment

Searle's thought experiment: a person who knows no Chinese sits in a room with a rulebook for manipulating Chinese symbols. Chinese questions are passed in; by following purely formal rules, the person passes out fluent Chinese answers without understanding a word. The room 'speaks' Chinese, but where is the understanding?

Searle's main premises and arguments

Premises:
P1. Brains cause minds
P2. Syntax is not sufficient for semantics
P3. Computer programs are entirely defined by their formal/syntactic structure
P4. Minds have mental (semantic) contents
Arguments:
P2+P3+P4 = C1: Computer programs by themselves are not sufficient for minds.
P1+C1 = C2: Brains cannot cause minds by running a computer program (brains are not computers)
P1 = C3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain (be as good as the brain)
C1+C3 = C4: For any artefact we build which has mental states equivalent to those of the human brain, a computer program would not be sufficient (so it would have to be like a brain).

Arguments against Searle's Chinese room:

1. The Gestalt argument: the whole is more than the sum of its parts. The total system understands Chinese.
Searle: If I (the central processing unit) cannot know what the symbols mean, then the whole system cannot either (p. 34).
2. Interaction argument: If the robot moved around and interacted in the world, it would start to understand Chinese.
Searle: if a computer is all there is inside the robot, the same problem arises: syntax alone yields no semantics.
Searle: "As long as all I have is the symbol with no knowledge of its causes or how it got there, I have no way of knowing what it means. (...)The causal interactions (...) are irrelevant unless [they] are represented in some mind or other" (p. 35).

Externalism

Semantic externalism: once a term has been 'baptized', reality determines whether the word has been used correctly or not. (This is what we call 'cat')
Externalism in the philosophy of mind: the content of thoughts is determined by the environment of the thinker.
Concerns intending, desiring, believing
The claim is that the character of such mental states does not supervene on the intrinsic properties of people
What follows: Perfect duplicates as regards intrinsic properties could be in different mental states.

Twin Earth

Putnam's thought experiment: Twin Earth is just like Earth except that the clear, drinkable liquid there is not H2O but XYZ. My twin's word 'water' (and water-thoughts) refer to XYZ, mine to H2O, even though we are intrinsic duplicates: meanings "ain't in the head".

Brain in a vat

All experiencing is the result of electronic impulses traveling from the computer to the nerve endings
Epistemology: against skeptical argument
If you were a brain in a vat, all your thoughts would be false! Why?
Brains in a vat cannot refer to things outside them, only to their own images
"the brains in a vat are not thinking about real trees... because there is nothing by virtue of which their thought 'tree' represents actual trees" (Putnam 14).
The sensory inputs do not represent
Brains in a vat cannot even think of themselves as brains in a vat
'I am a brain in a vat' is self refuting:
If my brain is in a vat and my experiences are of the matrix, then 'I am a brain in a vat' would mean 'I am a brain in a vat in the image' that the computer feeds me.
"Part of the hypothesis that we are brains in a vat is that we aren't brains in a vat in the image" (15)
So, for a brain in a vat that has only ever experienced the simulated world, saying 'I am a brain in a vat' is false: it cannot refer to the non-simulated vat world, and the only brains and vats it can refer to are simulated ones. And since it is not a simulated brain in a simulated vat, 'I am not a brain in a vat' comes out true.
Pictures in the Head
"For the image, if not accompanied by the ability to act in a certain way, is just a picture, and acting in accordance with a picture is itself an ability that one may or may not have" (Putnam, 1982, p. 19).
Having images alone is not enough to have understanding; you need to be able to use them in a context.
Maps and pictures alone do not say anything (do not mean or refer). You need to be able to read them.
Imaginings do not have conditions of satisfaction; I can imagine anything I want and it will always be 'true'
Recap
Preconditions for thinking about X, representing X, referring to X:
X actually exists (in physical world or social discourse)
Causal connection between the world and your thought
Rejection of the power of the mind to be about things out of nowhere ('intentionality' with the power to refer)
Our thoughts are still about something, they have meanings but the meanings are external

Eliminative materialism/ Eliminativism - definition

"...is the thesis that our commonsense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience" (Churchland, p. 67).
"... is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist." (SEP)

Folk Psychology

FP = our everyday, commonsense framework for explaining and predicting behaviour in terms of beliefs, desires, intentions, fears, etc.

Is eliminativism a form of reductionism?

No! FP won't be reduced to neuroscience because it is wrong and will be replaced by neuroscience
Reductionism: All mental states reduce to the physical/neurological phenomena.
Eliminativism: There are no mental states (they do not exist), there are just neural states.
Compare:
identity theory - there will be a smooth reduction that preserves FP
dualism - FP is irreducible, since it deals with a non-physical domain
Functionalism - FP is irreducible, since psychology deals with an abstract set of relations among functional states that can be realized in different ways

Why is FP wrong?

FP is an empirical theory (can be true or false) and it happens to be false.
Its ontology (beliefs, desires) is an illusion.
FP as a theory cannot explain many things, such as:
mental illness
creative imagination
differences in intelligence
function of sleep
how we track moving objects
3-D vision from a 2-D retinal array
perceptual illusions,
memory and retrieval,
learning, especially in pre- or non-linguistic organisms, such as babies and animals
"Degenerating research program"
It does not fit well with the rest of the sciences like evolutionary theory, biology, and neuroscience, which are part of a growing system of knowledge that includes chemistry and physics
FP is not applicable to any sort of cognition other than that of adult, language-using human beings (excludes children and animals). It's tightly linked to our ability to use language

Save FP! - functionalist arguments against elimination

1) FP is not an empirical theory (is not refutable by the facts). It is a normative theory:
it doesn't describe how people actually act, but characterises an ideal: how they ought to act if they were to act rationally on the basis of beliefs and desires
hence, FP could not be replaced by a description of what's going on at the neuronal level
What do you think?
Churchland:
FP explanations depend on logical relations among beliefs and desires, like mathematics; this does not make FP a normative theory. The relations are objective; we add value to them
People are not ideally rational
2) FP is an abstract theory
FP characterises internal states such as beliefs and desires in terms of a network of relationships to sensory inputs, behavioral outputs, and other mental states
This abstract network of relations could be realised in a variety of different kinds of physical systems
Hence, we cannot eliminate this functional characterisation in favour of some physical one
Churchland's rebuttal: This shifts the burden of proof
From showing that FP is a good theory to trying to show that certain physical systems support FP
It's removed from empirical criticism
To defend eliminativism - attack functionalism
Churchland's attack: functionalism is like alchemy:
Alchemy explained the properties of matter in terms of four different sorts of spirits
e.g., the spirit of mercury explained the shininess of metals
This theory got eliminated by atoms and elements
Reduction not good enough: the old and the new theories gave different classifications of things
Functionalist rebuttal: can redefine spirits as functional states
e.g., being ensouled by mercury just is the disposition to reflect light
Churchland: if you can make that move, you can make any move that will be an outrage against truth and reason. The functionalist stratagem can be used as a smokescreen for error.
More worries with eliminativism:
1. Maybe not a theory at all?
Alternative: social practice
2. Could we talk in 'neural' language?
3. Is there a contradiction? (To assert eliminativism is to express the belief that there are no beliefs)
Reply to 3: it begs the question
It assumes that for something to have any meaning, it must express a belief. It is this theory of meaning that should be rejected.
Analogy: it would be like a 17th-century vitalist arguing that if someone denies they have a vital spirit, they must be dead, and hence not saying anything (Pat Churchland)

Instrumentalism (in PoM)

The view that propositional attitudes such as beliefs are not actually concepts on which we can base scientific investigations of the mind and brain, but that acting as if other beings do have beliefs is often a successful strategy.
The value of a position is determined by usefulness, not truth
Closely related to Pragmatism (vs. Scientific Realism)
Levels of abstraction - making predictions based on:
Physical stance - physical constitution and laws
Design stance - purpose, design, function
Intentional stance - mentalistic explanations

Intentional stance

- method for attributing beliefs and desires
"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in many - but not all - instances yield a decision about what the agent ought to do; that is what you predict the agent will do" (p. 325).
Dennett, D. (1981). "True Believers: The Intentional Strategy and Why it Works." Reprinted in Lycan, W. and Prinz, J. (2008). Mind and Cognition, An Anthology. Blackwell Publishing
How do we do it?
Rationality
Driven by the reasonable assumption that all humans are rational beings who do have specific beliefs and desires and do act on the basis of those beliefs and desires in order to get what they want
Our beliefs are based on "perceptual capacities", "epistemic needs" and "biography", and are often true
Normativity
Based on our fixed personal views of what all humans ought to believe, desire and do, we predict (or explain) the beliefs, desires and actions of others "by calculating in a normative system"
The attributor works with beliefs and desires
You don't attribute real beliefs to the computer in order to predict how it will behave; you figure out how it will behave on the basis of 'as if' beliefs
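Dennett's recipe is algorithmic enough to caricature in code. A toy rendering (all names and the chess example's encoding are hypothetical): attribute the beliefs and desires the agent ought to have, then predict the action that furthers its goals in the light of those beliefs.

```python
# The intentional strategy as a crude prediction procedure.
def intentional_stance(percepts, needs, actions, outcome_of):
    beliefs = set(percepts)            # what it ought to believe
    desires = set(needs)               # what it ought to desire
    for action in actions:             # a little practical reasoning
        if outcome_of(action, beliefs) in desires:
            return action              # what it ought to (and will) do
    return None

# E.g., a chess program whose queen is attacked: we predict the saving
# move from 'as if' beliefs and desires, without inspecting its code.
print(intentional_stance(
    percepts={"queen_attacked"},
    needs={"queen_safe"},
    actions=["move_queen", "ignore"],
    outcome_of=lambda a, b: "queen_safe" if a == "move_queen" else "queen_lost",
))  # -> "move_queen"
```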

What is Extended Cognition?

The idea that the mind exists not only in ourselves but extends to the objects and technology we use
Active role of the environment in driving cognitive processes
Cognition/computation is distributed across brain, body and the environment
How is it different from Putnam's externalism?
Putnam
Content
Thinking is done inside the head; what it's about is on the outside
The world plays a passive role: it does not make a difference to action, it only says if the thought is true or false
C&C
Mind as a whole
Thinking is (sometimes) done outside of the head
The world plays an active role: close coupling; makes a difference to action. External features are causally relevant to action

Parity Principle

"If (...) a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (...) part of the cognitive process."
Parity = consistency, equality, treating things alike

Otto's Notebook: Situation

Otto and Inga are looking for the MoMA
Inga consults her biological memory: she remembers that MoMA is on E 53rd str
= Inga believed that MoMA is on E 53rd str; not occurrent, but waiting to be accessed
Otto has Alzheimer's Disease. He consults his notebook. He goes to the museum.
= Otto didn't just learn that MoMA is on E 53rd str. Otto believed that MoMA is on E 53rd str even before consulting his notebook. (Not occurrent, waiting to be accessed)
C&C: Alternative is strange
"Otto has no belief about where MoMA is until he consults the notebook"
However,
Otto constantly uses his notebook (like memory)
The information about MoMA is reliably available; it does not 'disappear'
He automatically endorses it (without questioning it)
His notebook plays the same function as memory
= Parity. No reason not to see the notebook as part of the mind (believing/remembering process)
Argument for extended belief
The notebook plays the same role for Otto as memory for Inga
"Beliefs can be constituted partly by features of the environment, when those features play the right sort of role in driving cognitive processes"
Functionalist notion of belief: no reason the function must be played from inside the body
Beliefs as dispositions, not occurrences: there waiting to be accessed
Possible Objections:
Inga has more reliable access?
No (e.g., brain damage; drunk)
Otto's beliefs come and go when he puts away the notebook?
Only if the notebook were unavailable; it must be available in relevant situations (just like memory)
Inga has direct access: knows her beliefs by introspection. Otto uses perception.
No: Otto and notebook directly coupled.
Doesn't matter: no impact of phenomenology on the status of belief
= Shallow differences

Thomas Nagel: The "What is it like?" Argument

Argument for insufficiency of reductionism: against limitations of our current concepts and theories for understanding human consciousness
Reductionism = trying to reduce A to B
(Reduction of the mental to the physical = Physicalism/Materialism)
"Any reductionist program has to be based on an analysis of what is to be reduced.
If the analysis leaves something out, the problem will be falsely posed - it is useless to base the defense of materialism on any analysis of mental phenomena that fails to deal explicitly with their subjective character.
For there is no reason to suppose that a reduction which seems plausible when no attempt is made to account for consciousness can be extended to include consciousness" (p. 323).
Nagel: "fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism" (p. 323).
Subjective character of experience
Example: Bat
Bats perceive through a sonar: echolocation
The sonar is a different 'sense' medium; no reason that it is subjectively like anything we can experience or imagine (p. 324).
Starting point: Realism about experiences: there are experiential/phenomenological facts
Phenomenological facts are both
objective (what the quality of the experience is) and
subjective (what the quality of experience is like from the point of view of the experiencing subject)
Subjectivity of these facts is crucial aspect, or real nature, of the experience.
As objective facts, they could be accessed by others
As subjective facts, they can only be accessed by someone 'like me'/sufficiently similar to adopt that person's point of view (p. 325)
"If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity - that is, less attachment to a specific viewpoint - does not take us nearer to the real nature of the phenomenon, it takes us farther away from it." (p. 327).
Scientific language - third-person, objective, bird's-eye view - will then take us farther away from the experience.
Therefore, reductionism fails.
Conclusion: inadequacy of physicalist hypotheses.
Does it mean physicalism is false?
No: it follows that physicalism is a position we cannot understand (p. 328).

The Knowledge Argument

Argument against physicalism altogether
Jackson's position:
"I think that there are certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of physical information includes. Tell me everything physical there is to tell about what is going on in a living brain, (...) and be I as clever as can be in fitting it all together, you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy, or about the characteristic experience of tasting a lemon, smelling a rose, hearing a loud noise or seeing the sky" (p. 127).
Fred has better colour vision
He can see a different shade of red. Not all ripe tomatoes look the same to him, though they look the same to us.
He sees two colours: red 1 and red 2. They are as different to each other as yellow and blue.
Jackson asks: what kind of experience does Fred have?
Physical information will not tell us:
His cones respond differently to certain light waves
He has a wider range of brain waves responsible for discriminatory behaviour
Knowing all this is not knowing everything about Fred

Mary is seeing red

Mary lives in a black-and-white room.
She is a scientist who knows everything there is to know about the science of colour (all the physics, chemistry, neurophysiology, causal and relational facts), but she has never experienced colour.
Jackson asks: once she experiences colour, does she learn anything new?
The Argument:
If physicalism is true, then to have complete physical knowledge is to have complete knowledge.
KA: Mary does learn something new!
Conclusion: There was some knowledge about colour Mary did not have prior to her release. Therefore, not all knowledge is physical knowledge, so physicalism is false.
(If physicalism were true, the information about experience would already be in our possession. No imagining would be needed.)
Implication:
If Mary does learn something new, it shows that qualia exist: there is a property of the experience that was left out.
The world is not just made of physical things.
Back to dualism?

Epiphenomenalism

Qualia are epiphenomenal: secondary phenomena, by-products
"They do nothing, they explain nothing, they serve merely to soothe the intuitions of dualists, and it is left a total mystery how they fit into the worldview of science" (p. 135).
Two arguments:
Causally inefficacious/impotent with respect to the physical world
Evolutionarily useless
Epiphenomenal = supervenience relation
Supervenient: determined by/dependent on the properties they supervene upon
Epiphenomenal: produced by underlying causal processes, but with no causal effects of their own

The Easy Problem (psychological consciousness)

Directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms.
The easy problems of consciousness include those of explaining the following phenomena:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behaviour;
the difference between wakefulness and sleep.
There is no real issue about whether these phenomena can be explained scientifically.

The Hard Problem (phenomenal consciousness)

The problem of experience.
Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C?
How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?
Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
(Chalmers, 2006, Facing up to the problem of consciousness).
In short:
How are organisms subjects of experience?
Why do we experience sensations as we do?
Why and how does physical processing give rise to our rich inner life?
"The problem of consciousness, simply put, is that we cannot understand how a brain, qua gray, granular lump of biological matter, could be the seat of human consciousness, the source or ground of our rich and varied phenomenological lives. How could that 'lump' be conscious - or, conversely, how could I, as conscious being, be that lump?" (Akins 1993)

Phenomenal vs. Access Consciousness (Ned Block)

Phenomenal (P-) Consciousness: Cannot define, can only point to it:
Qualia, raw feels, 'What it is to be like', Whatever is experienced; e.g., sensations, feelings, perceptions, thoughts, wants, emotions
Access (A-) Consciousness: All items of access consciousness are representational. A state is A-Conscious when its content is:
informationally promiscuous (available to other parts of the brain for use in reasoning),
poised for rational control of action,
reportable; e.g., perception, sensation, etc. as information that can be used in modifying behaviour.
P-Consciousness contains qualia/experience; A-Consciousness contains information that can be used to control reasoning and action
Can A-Consciousness and P-consciousness come apart?
Yes: Thought experiment of philosophical "zombies" (shortly)
Yes: Real cases
A-Consciousness without P-Consciousness
Blindsight: Patients report being blind: they perceive no visual images, yet they respond successfully to visual stimuli (unimpaired functioning)
P-Consciousness without A-Consciousness
Mental processing of background noise, e.g., noise of the pneumatic drill outside your window. You don't notice it until you become aware of the drill and realize that you have been hearing it for a long time.
A-conscious without P-conscious:
If you are A-conscious but not P-conscious, you can use information for rational thought, but you don't experience knowledge of this information. Does this differ from unconscious information processing?
P-conscious without A-conscious:
You are 'aware' but not consciously aware - is it a contradiction?
Case: Sleepwalking
Sleepwalkers have their eyes open and use vision to navigate the world. Visual information is poised for use in action. Sleepwalkers can eat, drink, even drive a car. But if you speak to them, they are slow or unresponsive and seemingly unaware of what they are doing.
Are they A-conscious? P-Conscious? Is there anything it is like to sleepwalk?

What is a Philosophical Zombie?

A being that is a perfect physical (and functional) duplicate of a conscious person but lacks phenomenal consciousness: there is nothing it is like to be it.

The Inverted Spectrum

The idea of the inverted spectrum goes back to John Locke. It is the idea of a person whose colour-experience is systematically inverted.
Imagine a situation in which we wake up one day and, without any physical change having occurred in the world or in our brain, we suddenly perceive colours in a different way: what used to be red now gives the sensation formerly known as green (and vice versa)
Consider the case of synesthesia
How can we explain this without positing qualia?
Intuitions from Inverted Spectrum and Synesthesia:
Against behaviourism - two creatures who are behaviourally alike respond to what they see in the same way, yet may see colours in a different way
Against physicalism - two creatures who are physically constituted the same way can see colours in a different way
Against functionalism - two creatures who are functionally organised in the same way can see the colours in a different way
Against representationalism - synesthesia shows that phenomenal character does not always depend on representational content because mental states can be the same representationally, but differ when it comes to experiential character
But beware of trouble with qualia, e.g., ...
If (colour) qualia are not physical, why do we need physical objects, physical eyes, physical neurons to see colour?

Dennett: Characteristics of Qualia

Four characteristics of Qualia:
Ineffable: Can't describe them
Intrinsic: Don't depend on anything else
(Intrinsic property = a property that an object or a thing has of itself, independently of other things, including its context. Pertaining to its essence).
Private: Known only from first-person point of view
Immediately apprehensible: Known without judgment or reflection

Intuition Pumps

Dennett's term for thought experiments designed to pump our intuitions.

Intuition pump 7+8: Chase and Sanborn, the coffee-tasters

Chase and Sanborn have been tasting coffee for many years, but neither of them likes it anymore.
Difference?
For Chase, coffee tastes the same. So Chase bases his distaste for coffee on a change in his tastes.
For Sanborn, coffee tastes different. So Sanborn claims that his distaste comes from some change of his tasters.
Chase thinks the taste of the coffee is just the same as always - he is getting the same quale - but he doesn't enjoy that taste, that quale, anymore.
Sanborn, by contrast, thinks the taste (the quale) itself has changed: where he used to get the taste of good coffee, now he is getting another, different quale, one that he doesn't enjoy as much.
Who are we to believe?
Dennett questions whether there is any real difference between claiming that the taste has remained constant while Chase's preferences have changed (through his experience with better coffees), and claiming that some modification in Sanborn's anatomy makes him taste the original coffee differently.
= There is no meaningful difference between what Chase and Sanborn say; they themselves cannot tell which is the case - a criticism of infallible qualia and introspectionism.

Dennett's argument against intrinsic and accessible qualia

Dennett denies that qualia can be both intrinsic/non-relational (characteristic 2) and directly knowable (characteristic 4) at the same time.
non-relational: they do not play a part in the kinds of causal relations analysed by physicalists
directly knowable: known non-inferentially from the first-person perspective
Exponents of qualia must claim either
(i) that qualia influence behaviour independently of our beliefs, or
(ii) that qualia influence behaviour in conjunction with our beliefs.
If qualia affect behaviour independently of our beliefs (i), then qualia have to be relational
If qualia directly influence behaviour, then they must play a direct causal role in behaviour (Red quale → "I see red")
If qualia affect our behaviour in conjunction with beliefs, then qualia cannot be directly knowable.
If qualia only influence behaviour when they are conjoined with other beliefs (Red quale + the belief that what I am seeing is red → "I see red"), then one cannot know one's qualia without knowing those other beliefs. Qualia can only be known when the surrounding circumstances are known.
Therefore, qualia cannot be both non-relational and directly knowable.
Dennett's approach to qualia
Verificationist argument: you cannot verify the correctness of judgments about your own qualia.
Conscious experience has no properties that are special in any of the ways qualia have been supposed to be special
Theoretical foundation for believing qualia to be of extra explanatory value is fundamentally flawed
More radical than Wittgenstein: "Qualia are not even 'something about which nothing can be said'; 'qualia' is a philosophers' term which fosters nothing but confusion" (p. 386).
If Dennett is right and it is impossible to tell the difference between Chase and Sanborn, then there is no need to postulate "qualia" to explain the taste-judgments we make.
There are just these judgments themselves, but we can explain these fully in terms of physical and functional facts that are perfectly accessible from a third-person, objective point of view.
"They are not qualia, for the simple reason that one's epistemic relation to them is exactly the same as one's epistemic relation to such external, but readily -if fallibly - detectable properties as room temperature or weight" (p. 396).
Thus there is no special "hard" problem of explaining consciousness. It's just a matter of time until we have a good physicalist explanation of how consciousness works.
What properties remain to be studied? Dispositional properties.

Theory Theory

We understand other minds thanks to possessing and making use of a 'Theory of Mind'
(It's a theory about using a 'Theory', hence TT)
A 'Theory of Mind' is a set of principles outlining very general psychological laws - laws which tell their users how mental states are appropriately combined in the bringing about of actions (Lewis, 1972).
Our tacit grasp of these laws tells us how mental states must relate in the bringing about of actions.
Core 'Theory of Mind' = Schema of general principles involving key mentalistic concepts:
"What people see, they generally believe it"; "If people want something, and believe there is something they can do to get it, they will generally do it (ceteris paribus)."
If X desires that q and believes that p (i.e. that X's φ-ing will lead to q), then X will φ (ceteris paribus).
These must be augmented with additional empirical generalisations
E.g., in circumstances C, a rational agent will tend to desire q and believe that p, etc.
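The core law lends itself to a crude prediction rule. A sketch under obvious simplifications (all names hypothetical):

```python
# Belief-desire law: if X desires q and believes that doing a will
# lead to q, then X will (ceteris paribus) do a.
def predict_action(desires, believes_leads_to, actions, q):
    if q in desires:
        for a in actions:
            if believes_leads_to(a, q):
                return a               # the action we predict X performs
    return None

# E.g., X wants coffee and believes going to the kitchen will get it.
print(predict_action(
    desires={"coffee"},
    believes_leads_to=lambda a, q: a == "go_to_kitchen" and q == "coffee",
    actions=["stay_put", "go_to_kitchen"],
    q="coffee",
))  # -> "go_to_kitchen"
```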

Modular Theory Theory

Your Theory of Mind forms a specific functional module of your mind
Imagining, Remembering, Theory of Mind
Modules = domain-specific, special purpose 'cognitive mechanisms'
Characteristics of modules:
Informationally encapsulated (i.e. only receptive to certain kinds of inputs);
Mandatory, high speed deliveries;
Cognitively impenetrable;
Have fixed neural architecture
Nativism of TT: As a mental faculty, it is innate
Why believe in ToMMs (Theory of Mind Modules)?
"The idea that learning a theory of mind would be enormously difficult is close to received wisdom" (Sterelny, 2003, p. 214).
The acquisition of folk psychology "poses the same degree of a learnability problem as does the rapid acquisition of linguistic skills, which appears to be similarly rapid, universal and without sufficient stimulus from the environment" (Mithen 2000b, p. 490, emphasis added, Carruthers 2003, p. 71, cf. Botterill & Carruthers 1999, p. 52-3).
Chomsky: Poverty of Stimulus argument: The learning mechanisms are too weak to derive the kind of knowledge we have from the kinds of information we get from the outside world
(so the knowledge must be innate - Plato)
Evolutionary Genesis:
Darwinian genesis: Perhaps modules were forged by natural selection. Perhaps they are mother nature's response to the need to solve specific adaptive problems.
The common assumption is that ToMMs "emerged in hominid evolution as an adaptive response to an increasingly complex social environment" (Brothers, 1990).
TT is an ancient cognitive endowment
Fit with developmental data?
As they get older, children become progressively more sophisticated in their understanding of why others act as they do.
Many regard this as evidence that their concepts of mind are developing during ontogeny.
Example: Children of age four pass the Sally-Anne Test

Scientific Theory Theory

"The basic idea is that children develop their everyday knowledge of the world by using the same cognitive devices that adults use in science. In particular, children develop abstract, coherent, systems of entities and rules, particularly causal entities and rules. That is, they develop theories. These theories enable children to make predictions about new evidence, to interpret evidence, and to explain evidence. Children actively experiment with the world, testing the predictions of the theory and gathering relevant evidence" (Gopnik, 2003)
Similarity with Modular TT: children theorize to make sense of the world (and others)
Difference with Modular TT: FP is not innate, but learned. FP is a product of social instruction.
Children gradually acquire the theories through scientific method: observation and testing, revising the theory
Gopnik: "The same mechanisms used by scientists to develop scientific theories are used by children to develop causal models of their environment"
Example: Children's understanding of objects and object appearances is highly theoretical: making inference rules, such as 'if the object is occluded, then it appears invisible'. Same for understanding other minds: if he cries, then he is sad.
Modular Theory Theorist's response to developmental data:
Nativists: young children already must have the concept of belief, because it cannot be learned. So we interpret the evidence differently.
Core ToM is already in place early on, only children's performance develops (Fodor 1992/1995)
"The child's theory of mind undergoes no alteration; what changes is only his ability to exploit what he knows" (Fodor 1995, p. 110).
"the 3-year old does indeed have a metarepresentational notion of BELIEF which is simply obscured by performance limitations" (Scholl & Leslie 1999, p. 147).
The core knowledge base is there. It only matures. How else to explain that children achieve the same mindreading abilities at much the same age? (Botterill & Carruthers, 1991, p. 80-81)

Simulation Theory

We understand reasons for action and ascribe mental states not by theorizing about the other, but by replicating a target's thoughts/feelings in ourselves imaginatively.
Understanding minds essentially involves modeling those minds by making use of our own mind.
Introspective Modeling: using ourselves as a model for the other - i.e. we 'put ourselves in their shoes'
Offline Practical Reasoning: we put our practical reasoning mechanism to a new use. We can get at another's reason for action (to understand/ predict/explain that action) by
feeding our own practical reasoning mechanism pretend beliefs and desires as 'inputs';
using the resulting 'output' for prediction or explanation of the other's action rather than to produce an action of our own
The "great virtue of a simulation approach to understanding others is that, unlike the familiar versions of TT, it can explain how mirroring processes may directly influence our efforts to anticipate and to understand another's behavior; that is, without first issuing in judgments about the other's mental states, without even requiring possession of a repertoire of mental state classifications" (Gordon 2008, p. 221).
Stage 1: Selection of targets (e.g. activation of the 'mindreading device' or intentionality detectors);
Stage 2: Running the simulation (e.g. off-line practical reasoning);
Stage 3: Final or 'attribution' stage - involving the ascription of mental state concepts.
Belief-desire machinery (the 'practical reasoning mechanism')
Assuming we are alike ('Like-me' hypothesis)
This makes simulation process-driven, not theory-driven: sequences of mental states are driven by the same cognitive process, not implicit body of theoretical knowledge.
ST "imputes to the attributor no knowledge of psychological laws ... [not even] any want-belief-decision generalization" (Goldman 2006, p. 19).
The possession and use of the central FP principles is unnecessary because such work can be done by directly manipulating our own mental states and exercising structured practical reasoning abilities
ST posits one and the same sub-personal mechanism in order to explain:
how we deliberate and generate intentions to act;
how we consider possible actions in counterfactual situations;
how we manage to predict and explain the actions of others.
ST assumes that practical reasoning-mechanisms are already in place and are ancient enough to allow for simulation to be an inherited capacity.
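To make the 'one and the same mechanism' idea vivid, here is a minimal sketch; the decide() function and all names in it are hypothetical illustrations, not anything from the ST literature. The same practical reasoning routine drives my own action when run online and, fed pretend inputs, yields an offline prediction of another's action.

    def decide(beliefs, desires):
        # The shared practical reasoning mechanism: choose an action
        # that, according to the given beliefs, satisfies a desire.
        for desire in desires:
            for action, outcome in beliefs.items():
                if outcome == desire:
                    return action
        return "do nothing"

    # Online use: my own beliefs and desires issue in my own action.
    my_beliefs = {"open fridge": "get food"}
    my_action = decide(my_beliefs, ["get food"])         # -> 'open fridge'

    # Offline use: feed the SAME mechanism pretend beliefs and desires
    # attributed to the other, and route the output to prediction
    # rather than to action.
    pretend_beliefs = {"check coat pocket": "find keys"}
    prediction = decide(pretend_beliefs, ["find keys"])  # -> 'check coat pocket'
    print("Predicted action:", prediction)

Note that no psychological laws are stored anywhere; the work is done by re-using the decision process itself on pretend inputs, which is what makes simulation process-driven rather than theory-driven.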

Narrative Practice Hypothesis

Engaging in socially supported story-telling activities is the normal route for developing our FP competence.
Stories provide the crucial training set needed for understanding reasons.
"A special kind of narrative is used - 'folk psychological' narratives. They show how mental states figure in the lives, history and larger projects of their owners. As complex linguistic representations folk psychological narratives are objects of joint attention."
FP is not a theory to begin with; it is a special practice.
Requires full-blown language
What does it take to be a folk psychologist? 1. A practical understanding of propositional attitudes; 2. A capacity to represent the objects of propositional attitudes through that-clauses, etc. (p. 23).
According to NPH, the 'core principles' of FP are revealed to children not as a series of rules but in action, through narratives.
Two senses of the word 'narrative':
1) third-person objects of focus (stories)
2) second-person acts of narration (storytelling)
Both stories and storytelling are the best means of revealing how propositional attitudes work.

View from Phenomenology

Four suppositions challenged by the Phenomenological Account:
(1) Hidden minds: The problem of social cognition is due to the lack of access that we have to the other person's mental states. Since we cannot directly perceive the other's beliefs, desires, feelings, or intentions, we need some extra-perceptual cognitive process (mindreading or mentalizing by way of theoretical inferences or simulation routines) that will allow us to infer what they are.
(2) Mindreading: These mentalizing processes constitute our primary, pervasive, or default way of understanding others.
(3) Observational stance: Our normal everyday stance toward the other person is a third-person, observational stance. We observe their behaviors in order to explain and predict their actions, or to theorize or simulate their mental states.
(4) Methodological individualism: understanding others depends primarily on cognitive capabilities or mechanisms located in an individual subject, or on processes that take place inside an individual brain.

Interaction Theory (Gallagher)

An account of basic forms of intersubjectivity that emphasizes embodied face-to-face interaction in pragmatic and social contexts.
TT - detached, based on observation: observing the other and inferring their mental states
ST - detached, based on observation: simulating what one sees
IT - directly knowing the other by interacting with them
IT: Three components:
Primary intersubjectivity
Makes its appearance early in infancy; from birth the infant is pulled into these interactive processes (gaze following, voice attunement)
It includes basic sensory-motor capacities that motivate a complex interaction between the child and others.
Secondary intersubjectivity
Begins to develop around the first year of age.
It is based on the development of joint attention, and motivates contextual engagement, and acting with others.
Narrative competency
Begins to develop around 2-4 years.
It involves narrative practices that capture intersubjective interactions, motives, and reasons. Narratives elaborate on the interactive capacities.
Result of the phenomenological approach:
Primary and secondary intersubjectivity bring us into immediate relations with others and allow us to interact with them.
We see in the other person's bodily movements, gestures, facial expressions, eye direction, vocal intonation, etc. what they intend and what they feel, and we respond with our own bodily movements, gestures, facial expressions, gaze, etc.
= No theorising or simulating needed

John McDowell on perception

All perception is conceptual. It is judgmental.
Perception is an ability. This ability is already conceptual, even though it is unreflective.
McDowell (1996) sees the normal mature human being as a rational animal.
"Having the concept requires the ability to take something's falling under it into account in reasoning."
The acquisition of conceptual capacities is dependent upon language-acquisition (McDowell, 1996, pp. 125-26). Pre-linguistic children do not possess concepts in McDowell's strong sense.
Once acquired, conceptual capacities belong to a linguistic or reflective faculty (McDowell, 1996, p. 49).

Inspiration: Kant

"Thoughts without content are empty, intuitions without concepts are blind"
Reconciling rationalism and empiricism: no knowledge without sense data, and no sensory experiences without reason (concepts) to order those experiences.
Interested in phenomena: how things appear to us (vs. noumena: how things are in reality). Human knowledge deals exclusively with the objects of experience.

Case: animal in danger

The animal does not know why it is fleeing; it does not act for the reason 'danger'. It has no concept of 'danger'. Do you agree?
To be aware of danger, one must possess a concept of danger.
To respond to danger, the animal need not be rational! Yet, to have a concept of danger the animal needs a capacity that belongs to responsiveness to reasons.
Responsiveness to danger is manifested in the animal's capacity to flee. It does not entitle us to say that the animal is 'aware' of danger.
Responding to reasons = awareness of what one is responding to
Rationality in perception:
Perceptually based belief is linked to experience that depends on rationality.
Rationality = responsiveness to reasons as such
The animal might therefore be responding (fleeing) to a reason, danger, but it is not responding to the reason as such: because 'it is dangerous', in virtue of having the concept 'danger'. Hence, the response is not rational.
When we (humans) perceive actions, we are using rationality.

Alva Noë: Perception as Action

What we experience visually outstrips what we actually see
Perception as action (1)
How do we bring the detail into view?
"To bring detail into consciousness, it is necessary to probe the environment, by turning your eyes, and your head, by shifting your attention from here to there."
Perceiving is a way of acting
Perception as action (2)
We do not represent all the detail, but we have access to the detail in practical knowledge
"My sense of the presence of the whole cat behind the fence consists precisely in my knowledge, my implicit understanding, that by a movement of the eye or the head or the body I can bring bits of the cat into view that are now hidden."
"The presence of the tomato to me as a voluminous whole consists in my knowledge of the sensory effects of my movements in relation to the tomato."
Conclusion on Noë:
Experience "doesn't break down into experiences of atomic elements". Experience is always of a whole.
"There is a sense in which the content of experience is not in the head. But nor is it in the world. Experience isn't something that happens to us. It is something we do.

The problem of perceptual presence

When you see a tomato, you can't see its back. Yet, you do see it as a whole. The object is perceived in its entirety.
The problem of perceptual presence: how can we have perceptual awareness of wholes without adequate sensory data?
Tomato as whole, plate as circular, cat as one
vs. Myth of the given
Wilfrid Sellars (1956), 'Empiricism and the Philosophy of Mind'
The myth of the given: the idea that some of our terms or concepts derive their meaning directly from confrontation with a particular (kind of) object or experience. Kantian: there must be some knowledge to organise sense-data.
Noë's tomato (cont.)
"The puzzle is that it seems to us at once as if we only see a part of the tomato and as if the whole is perceptually present."
Perhaps we just infer from our knowledge of tomatoes that they must be whole?
Noë: concepts 'cognitively filling in' what is given to us does not solve the problem of perceptual presence.
How do you explain the sense of the tomato as a whole?
= How can we perceive anything without concepts?
(One needs reasoning/inferential/conceptual capacities)

Presence in absence

Knowing does not change the way things look
Presence in absence: we know the triangle is not there, but its absence is seen, visually present to us. The way we take it to be perceptually present is not cognitive, but visual.
Explaining presence in absence:
Palmer (1999):
The brain fills in to make up for the gap or discontinuity at the retinal blind spot.
A neural process fills in a higher-level neural representation that bridges the gap between low-level retinal input and experience.
Dennett (1991):
Not entitled to make this jump! (different levels of explanation)
Proposal: the brain ignores the absence of information, that's why the line looks unbroken.
Noë:
Both take it for granted that it visually seems to us as if the line is filled in.
There is a difference between not perceiving a break (seeing a straight line) vs. seeing it as 'filled', or seeing an absence of a break.
Proposal: it is visually present, but not 'as seen'.

Michael Tye

Animals do not have linguistic concepts or HOT (higher-order thought) contents.
Some animals have consciousness
Paramecia and caterpillars do not;
Fish and honey bees do!
We can know that they do; no problem of 'simple minds'
(We cannot know what it is like for them; we have different sensory representations)
Tye on Consciousness
Claims about animal minds follow from his independent theory of consciousness.
Two kinds of consciousness: higher-order (requires attending to itself) and lower-order (awareness, feel, what-it-is-likeness)
Lower order = phenomenal consciousness
It seems to be relatively primitive, largely automatic, yet it can be problematic (inverted and absent qualia)
Phenomenal consciousness consists of intentional content that is non-conceptual

The 'PANIC' Theory

Poised (balanced)
Abstract
Nonconceptual
Intentional Content
Perception has this content
Phenomenal character is (=) this content
Phenomenal character of experiences is already introspectively accessible and representational
"...your perceptual experiences have no introspectible features over and above those implicated in their representational contents" (297)
Poised: standing between the conceptual and the nonconceptual, such contents supply inputs on which we can form beliefs 'if attention is properly focused'; they are ready and available to make a difference in beliefs.
Example: pain
No need for concept 'pain'
"Pains ... are experiences that represent changes in the body much as visual experiences represent changes in the external environment" (p. 298) (= mental tracking)
Bodily sensation = the physical changes are registered in sensory receptors, and a complex bodily representation is built up in largely mechanical fashion of how the body has changed (p. 299). E.g.,
Feeling of thirst represents dryness in mouth and throat,
Hunger pangs represent contractions in the stomach walls,
Tickles represent something lightly touching my skin,
Feeling hot represents elevated bodily temperature,
Pains represent tissue damage

Which animals feel? Criteria

Capable of changing behaviour in light of assessments they make, based on sensory stimulation
Must be flexible, modifiable by learning from experience, from trial and error
Goal-directed behaviour, purpose
Not only responding to stimulus; exploring
Making cognitive classifications or assessments
"Inner maps by which we steer"
Animals have perceptual beliefs:
These are "like inner maps by which the creature steers. They function as guides to behaviour and do so because of the information (or misinformation) they convey to the creature about what is out there in the environment (...) fish form simple beliefs on the basis of immediate, sensory representations of their environments" (p. 306).
Bee dance:
" Their dance requires them to remember how the sun moves relative to the positions on the landmarks, so that they can communicate the position of the cavity correctly." (...) This demands that they form some sort of cognitive map involving the landmarks" (p. 307).
Critical approach:
Is making a cognitive map necessary? (Maps don't 'say' anything on their own)
Are the other bees 'reading' the 'signs' that the dancing bees convey in their dances?
Either HOT content is involved, or something else is going on entirely.
Alternative: remembering location via external cues ('memory in the world'); recognition as really just a form of bee social routine/practice.

Tye: conclusion

Animal consciousness: like ours or unlike ours?
Unlike ours: no cognitive awareness of their sensory states; they do not bring their own experiences under concepts like we can
Like ours when we are not aware of it: "they function perpetually in a state like that of the distracted driver who is lost in thought for several miles as he drives along (...) he certainly sees the road and other cars. But he is not aware of his visual sensations" (p. 310).
Discussion:
Can representations (e.g., pain represents tissue damage) give you pain sensation for free? Is there really no distinction?
Fish do better at finding their way through mazes in groups (Tye, p. 305). Are groups/being with others accounted for in PANIC theory? Is something more than representation needed?
Can't plants and insects learn anything new? How do we judge their flexibility?