Artificial Intelligence Is Lost in the Woods
http://www.technologyreview.com/read_article.aspx?id=18867&ch=specialsections&sc=futurebiz&pg=1
Artificial intelligence has been obsessed with several questions from the
start: Can we build a mind out of software? If not, why not? If so, what kind
of mind are we talking about? A conscious mind? Or an unconscious
intelligence that seems to think but experiences nothing and has no inner
mental life? These questions are central to our view of computers and how far
they can go, of computation and its ultimate meaning--and of the mind and how
it works.
They are deep questions with practical implications. AI researchers have long
maintained that the mind provides good guidance as we approach subtle,
tricky, or deep computing problems. Software today can cope with only a
smattering of the information-processing problems that our minds handle
routinely--when we recognize faces or pick elements out of large groups based
on visual cues, use common sense, understand the nuances of natural language,
or recognize what makes a musical cadence final or a joke funny or one movie
better than another. AI offers to figure out how thought works and to make
that knowledge available to software designers.
It even offers to deepen our understanding of the mind itself. Questions
about software and the mind are central to cognitive science and philosophy.
Few problems are more far-reaching or have more implications for our
fundamental view of ourselves.
The current debate centers on what I'll call a "simulated conscious mind"
versus a "simulated unconscious intelligence." We hope to learn whether
computers make it possible to achieve one, both, or neither.
I believe it is hugely unlikely, though not impossible, that a conscious mind
will ever be built out of software. Even if it could be, the result (I will
argue) would be fairly useless in itself. But an unconscious simulated
intelligence certainly could be built out of software--and might be useful.
Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near
knowing how to build one. They are missing the most important fact about
thought: the "cognitive continuum" that connects the seemingly unconnected
puzzle pieces of thinking (for example analytical thought, common sense,
analogical thought, free association, creativity, hallucination). The
cognitive continuum explains how all these reflect different values of one
quantity or parameter that I will call "mental focus" or
"concentration"--which changes over the course of a day and a lifetime.
Without this cognitive continuum, AI has no comprehensive view of thought: it
tends to ignore some thought modes (such as free association and dreaming),
is uncertain how to integrate emotion and thought, and has made strikingly
little progress in understanding analogies--which seem to underlie creativity.
My case for the near-impossibility of conscious software minds resembles what
others have said. But these are minority views. Most AI researchers and
philosophers believe that conscious software minds are just around the
corner. To use the standard term, most are "cognitivists." Only a few are
"anticognitivists." I am one. In fact, I believe that the cognitivists are
even more wrong than their opponents usually say.
But my goal is not to suggest that AI is a failure. It has merely developed a
temporary blind spot. My fellow anticognitivists have knocked down
cognitivism but have done little to replace it with new ideas. They've shown
us what we can't achieve (conscious software intelligence) but not how we can
create something less dramatic but nonetheless highly valuable: unconscious
software intelligence. Once AI has refocused its efforts on the mechanisms
(or algorithms) of thought, it is bound to move forward again.
Until then, AI is lost in the woods.
What Is Consciousness?
In conscious thinking, you experience your thoughts. Often they are
accompanied by emotions or by imagined or remembered images or other
sensations. A machine with a conscious (simulated) mind can feel wonderful on
the first fine day of spring and grow depressed as winter sets in. A machine
that is capable only of unconscious intelligence "reads" its thoughts as if
they were on cue cards. One card might say, "There's a beautiful rose in
front of you; it smells sweet." If someone then asks this machine, "Seen any
good roses lately?" it can answer, "Yes, there's a fine specimen right in
front of me." But it has no sensation of beauty or color or fragrance. It has
no experiences to back up the currency of its words. It has no inner mental
life and therefore no "I," no sense of self.
But if an artificial mind can perform intellectually just like a human, does
consciousness matter? Is there any practical, perceptible advantage to
simulating a conscious mind?
Yes.
An unconscious entity feels nothing, by definition. Suppose we ask such an
entity some questions, and its software returns correct answers.
"Ever felt friendship?" The machine says, "No."
"Love?" "No." "Hatred?" "No." "Bliss?" "No."
"Ever felt hungry or thirsty?" "Itchy, sweaty, - tickled, excited, conscience
stricken?"
"Ever mourned?" "Ever rejoiced?"
No, no, no, no.
In theory, a conscious software mind might answer "yes" to all these
questions; it would be conscious in the same sense you are (although its
access to experience might be very different, and strictly limited).
So what's the difference between a conscious and an unconscious software
intelligence? The potential human presence that might exist in the simulated
conscious mind but could never exist in the unconscious one.
You could never communicate with an unconscious intelligence as you do with a
human--or trust or rely on it. You would have no grounds for treating it as a
being toward which you have moral duties rather than as a tool to be used as
you like.
But would a simulated human presence have practical value? Try asking lonely
people--and all the young, old, sick, hurt, and unhappy people who get far
less attention than they need. A made-to-order human presence, even though
artificial, might be a godsend.
AI (I believe) won't ever produce one. But it can still lead the way to great
advances in computing. An unconscious intelligence might be powerful. Alan
Turing, the great English mathematician who founded AI, seemed to believe
(sometimes) that consciousness was not central to thought, simulated or
otherwise.
He discussed consciousness in the celebrated 1950 paper in which he proposed
what is now called the "Turing test." The test is meant to determine whether
a computer is "intelligent," or "can think"--terms Turing used
interchangeably. If a human "interrogator" types questions, on any topic
whatever, that are sent to a computer in a back room, and the computer sends
back answers that are indistinguishable from a human being's, then we have
achieved AI, and our computer is "intelligent": it "can think."
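Seen purely as a protocol, the test is simple enough to caricature in a few lines of code. The sketch below is my own illustration, nothing more: the human_reply, machine_reply, and judge callables are hypothetical stand-ins, and nothing here pretends to supply real conversational ability.

import random

def run_turing_test(questions, human_reply, machine_reply, judge):
    """Ask each question of a hidden human and a hidden machine, then let the
    judge guess which transcript came from the machine."""
    # Hide who is who behind anonymous labels A and B.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        respondents = {"A": machine_reply, "B": human_reply}

    transcripts = {label: [] for label in respondents}
    for q in questions:
        for label, reply in respondents.items():
            transcripts[label].append((q, reply(q)))

    guess = judge(transcripts)   # the judge names the label it believes is the machine
    truth = "A" if respondents["A"] is machine_reply else "B"
    return guess != truth        # True means the machine "passed": the judge guessed wrong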
Does artificial intelligence require (or imply the existence of) artificial
consciousness? Turing was cagey on these questions. But he did write,
I do not wish to give the impression that I think there is no mystery about
consciousness. There is, for instance, something of a paradox connected with
any attempt to localise it. But I do not think these mysteries necessarily
need to be solved before we can answer the question with which we are
concerned in this paper.
That is, can we build intelligent (or thinking) computers, and how can we
tell if we have succeeded? Turing seemed to assert that we can leave
consciousness aside for the moment while we attack simulated thought.
But AI has grown more ambitious since then. Today, a substantial number of
researchers believe one day we will build conscious software minds. This
group includes such prominent thinkers as the inventor and computer scientist
Ray Kurzweil. In the fall of 2006, Kurzweil and I argued the point at MIT, in
a debate sponsored by the John Templeton Foundation. This piece builds, in
part, on the case I made there.
A Digital Mind
The goal of cognitivist thinkers is to build an artificial mind out of
software running on a digital computer.
Why does AI focus on digital computers exclusively, ignoring other
technologies? For one thing, computers seemed from the first like
"artificial brains," and the first AI programs of the 1950s--the "Logic
Theorist," the "Geometry Theorem-Proving Machine"--seemed at their best to be
thinking. Also, computers are the characteristic technology of the age. It is
only natural to ask how far we can push them.
Then there's a more fundamental reason why AI cares specifically about
digital computers: computation underlies today's most widely accepted view of
mind. (The leading technology of the day is often pressed into service as a
source of ideas.)
The ideas of the philosopher Jerry Fodor make him neither strictly
cognitivist nor anticognitivist. In The Mind Doesn't Work That Way (2000), he
discusses what he calls the "New Synthesis"--a broadly accepted view of the
mind that places AI and cognitivism against a biological and Darwinian
backdrop. "The key idea of New Synthesis psychology," writes Fodor, "is that
cognitive processes are computational. ... A computation, according to this
understanding, is a formal operation on syntactically structured
representations." That is, thought processes depend on the form, not the
meaning, of the items they work on.
In other words, the mind is like a factory machine in a 1940s cartoon, which
might grab a metal plate and drill two holes in it, flip it over and drill
three more, flip it sideways and glue on a label, spin it around five times,
and shoot it onto a stack. The machine doesn't "know" what it's doing.
Neither does the mind.
Likewise computers. A computer can add numbers but has no idea what "add"
means, what a "number" is, or what "arithmetic" is for. Its actions are based
on shapes, not meanings. According to the New Synthesis, writes Fodor, "the
mind is a computer."
But if so, then a computer can be a mind, can be a conscious mind--if we
supply the right software. Here's where the trouble starts. Consciousness is
necessarily subjective: you alone are aware of the sights, sounds, feels,
smells, and tastes that flash past "inside your head." This subjectivity of
mind has an important consequence: there is no objective way to tell whether
some entity is conscious. We can only guess, not test.
Granted, we know our fellow humans are conscious; but how? Not by testing
them! You know the person next to you is conscious because he is human.
You're human, and you're conscious--which moreover seems fundamental to your
humanness. Since your neighbor is also human, he must be conscious too.
So how will we know whether a computer running fancy AI software is
conscious? Only by trying to imagine what it's like to be that computer; we
must try to see inside its head.
Which is clearly impossible. For one thing, it doesn't have a head. But a
thought experiment may give us a useful way to address the problem. The
"Chinese Room" argument, proposed in 1980 by John Searle, a philosophy
professor at the University of California, Berkeley, is intended to show that
no computer running software could possibly manifest understanding or be
conscious. It has been controversial since it first appeared. I believe that
Searle's argument is absolutely right--though more elaborate and oblique than
necessary.
Searle asks us to imagine a program that can pass a Chinese Turing test--and
is accordingly fluent in Chinese. Now, someone who knows English but no
Chinese, such as Searle himself, is shut up in a room. He takes the
Chinese-understanding software with him; he can execute it by hand, if he
likes.
Imagine "conversing" with this room by sliding questions under the door; the
room returns written answers. It seems equally fluent in English and Chinese.
But actually, there is no understanding of Chinese inside the room. Searle
handles English questions by relying on his knowledge of English, but to deal
with Chinese, he executes an elaborate set of simple instructions
mechanically. We conclude that to behave as if you understand Chinese doesn't
mean you do.
But we don't need complex thought experiments to conclude that a conscious
computer is ridiculously unlikely. We just need to tackle this question: What
is it like to be a computer running a complex AI program?
Well, what does a computer do? It executes "machine instructions"--low-level
operations like arithmetic (add two numbers), comparisons (which number is
larger?), "branches" (if an addition yields zero, continue at instruction
200), data movement (transfer a number from one place to another in memory),
and so on. Everything computers accomplish is built out of these primitive
instructions.
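To make this concrete, here is a toy machine in Python; it is my own illustration, not any real instruction set, but it shows the handful of primitive operations just listed, arithmetic, comparison, branching, and data movement, along with the fetch-and-execute routine discussed below.

# A toy machine built from the primitive operations listed above. The whole
# routine is the loop: fetch an instruction, execute it, repeat until told to stop.
def run(program, memory):
    pc = 0  # index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":              # arithmetic: memory[dst] = memory[a] + memory[b]
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]
        elif op == "LESS":           # comparison: is memory[a] smaller than memory[b]?
            dst, a, b = args
            memory[dst] = 1 if memory[a] < memory[b] else 0
        elif op == "JUMP_IF_ZERO":   # branch: if memory[a] is zero, continue at instruction t
            a, t = args
            if memory[a] == 0:
                pc = t
                continue
        elif op == "MOVE":           # data movement: copy one memory cell to another
            dst, src = args
            memory[dst] = memory[src]
        elif op == "HALT":
            break
        pc += 1
    return memory

# Example: add cells 0 and 1, leaving the sum in cell 2.
print(run([("ADD", 2, 0, 1), ("HALT",)], {0: 2, 1: 3, 2: 0}))   # {0: 2, 1: 3, 2: 5}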
So what is it like to be a computer running a complex AI program? Exactly
like being a computer running any other kind of program.
Computers don't know or care what instructions they are executing. They deal
with outward forms, not meanings. Switching applications changes the output,
but those changes have meaning only to humans. Consciousness, however,
doesn't depend on how anyone else interprets your actions; it depends on what
you yourself are aware of. And the computer is merely a machine doing what
it's supposed to do--like a clock ticking, an electric motor spinning, an
oven baking. The oven doesn't care what it's baking, or the computer what
it's computing.
The computer's routine never varies: grab an instruction from memory and
execute it; repeat until something makes you stop.
Of course, we can't know literally what it's like to be a computer executing
a long sequence of instructions. But we know what it's like to be a human
doing the same. Imagine holding a deck of cards. You sort the deck; then you
shuffle it and sort it again. Repeat the procedure, ad infinitum. You are
doing comparisons (which card comes first?), data movement (slip one card in
front of another), and so on. To know what it's like to be a computer running
a sophisticated AI application, sit down and sort cards all afternoon. That's
what it's like.
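For what it's worth, the card-sorting exercise maps directly onto code. The sketch below (again, my illustration only) sorts a "deck" using nothing but the primitive moves just described, comparisons and data movement, repeated over and over.

# Sorting a deck with only comparisons and data movement (selection sort).
def sort_deck(deck):
    deck = list(deck)                       # data movement: copy the deck
    for i in range(len(deck)):
        smallest = i
        for j in range(i + 1, len(deck)):
            if deck[j] < deck[smallest]:    # comparison: which card comes first?
                smallest = j
        deck[i], deck[smallest] = deck[smallest], deck[i]   # data movement: slip the card into place
    return deck

print(sort_deck([7, 2, 11, 4]))   # [2, 4, 7, 11]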
If you sort cards long enough and fast enough, will a brand-new conscious
mind (somehow) be created? This is, in effect, what cognitivists believe.
They say that when a computer executes the right combination of primitive
instructions in the right way, a new conscious mind will emerge. So when a
person executes the right combination of primitive instructions in the right
way, a new conscious mind should (also) emerge; there's no operation a
computer can do that a person can't.
Of course, humans are radically slower than computers. Cognitivists argue
that sure, you know what executing low-level instructions slowly is like; but
only when you do them very fast is it possible to create a new conscious
mind. Sometimes, a radical change in execution speed does change the
qualitative outcome. (When you look at a movie frame by frame, no illusion of
motion results. View the frames in rapid succession, and the outcome is
different.) Yet it seems arbitrary to the point of absurdity to insist that
doing many primitive operations very fast could produce consciousness. Why
should it? Why would it? How could it? What makes such a prediction even
remotely plausible?
But even if researchers could make a conscious mind out of software, it
wouldn't do them much good.
Suppose you could build a conscious software mind. Some cognitivists believe
that such a mind, all by itself, is AI's goal. Indeed, this is the message of
the Turing test. A computer can pass Turing's test without ever mingling with
human beings.
But such a mind could communicate with human beings only in a drastically
superficial way.
It would be capable of feeling emotion in principle. But we feel emotions
with our whole bodies, not just our minds; and it has no body. (Of course, we
could say, then build it a humanlike body! But that is a large assignment and
poses bioengineering problems far beyond and outside AI. Or we could build
our new mind a body unlike a human one. But in that case we couldn't expect
its emotions to be like ours, or to establish a common ground for
communication.)
Consider the low-energy listlessness that accompanies melancholy, the
overflowing jump-for-joy sensation that goes with elation, the pounding heart
associated with anxiety or fear, the relaxed calm when we are happy, the
obvious physical manifestations of excitement--and other examples, from rage
to panic to pity to hunger, thirst, tiredness, and other conditions that are
equally emotions and bodily states. In all these cases, your mind and body
form an integrated whole. No mind that lacked a body like yours could
experience these emotions the way you do.
No such mind could even grasp the word "itch."
In fact, even if we achieved the bioengineering marvel of a synthetic human
body, our problems wouldn't be over. Unless this body experienced infancy,
childhood, and adolescence, as humans do--unless it could grow up, as a
member of human society--how could it understand what it means to "feel like
a kid in a candy shop" or to "wish I were 16 again"? How could it grasp the
human condition in its most basic sense?
A mind-in-a-box, with no body of any sort, could triumphantly pass the Turing
test--which is one index of the test's superficiality. Communication with
such a contrivance would be more like a parody of conversation than the real
thing. (Even in random Internet chatter, all parties know what it's like to
itch, and scratch, and eat, and be a child.) Imagine talking to someone who
happens to be as articulate as an adult but has less experience than a
six-week-old infant. Such a "conscious mind" has no advantage, in itself,
over a mere unconscious intelligence.
But there's a solution to these problems. Suppose we set aside the gigantic
chore of building a synthetic human body and make do with a mind-in-a-box or
a mind-in-an-anthropoid-robot, equipped with video cameras and other
sensors--a rough approximation of a human body. Now we choose some person
(say, Joe, age 35) and simply copy all his memories and transfer them into
our software mind. Problem solved. (Of course, we don't know how to do this;
not only do we need a complete transcription of Joe's memories, we need to
translate them from the neural form they take in Joe's brain to the software
form that our software mind understands. These are hard, unsolved problems.
But no doubt we will solve them someday.)
Nonetheless: understand the enormous ethical burden we have now assumed. Our
software mind is conscious (by assumption) just as a human being is; it can
feel pleasure and pain, happiness and sadness, ecstasy and misery. Once we've
transferred Joe's memories into this artificial yet conscious being, it can
remember what it was like to have a human body--to feel spring rain, stroke
someone's face, drink when it was thirsty, rest when its muscles were tired,
and so forth. (Bodies are good for many purposes.) But our software mind has
lost its body--or had it replaced by an elaborate prosthesis. What experience
could be more shattering? What loss could be harder to bear? (Some losses,
granted, but not many.) What gives us the right to inflict such cruel mental
pain on a conscious being?
In fact, what gives us the right to create such a being and treat it like a
tool to begin with? Wherever you stand on the religious or ethical spectrum,
you had better be prepared to tread carefully once you have created
consciousness in the laboratory.
The Cognitivists' Best Argument
But not so fast! say the cognitivists. Perhaps it seems arbitrary and absurd
to assert that a conscious mind can be created if certain simple instructions
are executed very fast; yet doesn't it also seem arbitrary and absurd to
claim that you can produce a conscious mind by gathering together lots of
neurons?
The cognitivist response to my simple thought experiment ("Imagine you're a
computer") might run like this, to judge from a recent book by a leading
cognitivist philosopher, Daniel C. Dennett. Your mind is conscious; yet it's
built out of huge numbers of tiny unconscious elements. There are no raw
materials for creating consciousness except unconscious ones.
Now, compare a neuron and a yeast cell. "A hundred kilos of yeast does not
wonder about Braque," writes Dennett, "... but you do, and you are made of
parts that are fundamentally the same sort of thing as those yeast cells,
only with different tasks to perform." Many neurons add up to a brain, but
many yeast cells don't, because neurons and yeast cells have different tasks
to perform. They are programmed differently.
In short: if we gather huge numbers of unconscious elements together in the
right way and give them the right tasks to perform, then at some point,
something happens, and consciousness emerges. That's how your brain works.
Note that neurons work as the raw material, but yeast cells don't, because
neurons have the right tasks to perform. So why can't we do the same thing
using software elements as raw materials--so long as we give them the right
tasks to perform? Why shouldn't something happen, and yield a conscious mind
built out of software?
Here is the problem. Neurons and yeast cells don't merely have "different
tasks to perform." They perform differently because they are chemically
different.
One water molecule isn't wet; two aren't; three aren't; 100 aren't; but at
some point we cross a threshold, something happens, and the result is a drop
of water. But this trick only works because of the chemistry and physics of
water molecules! It won't work with just any kind of molecule. Nor can you
take just any kind of molecule, give it the right "tasks to perform," and
make it a fit raw material for producing water.
The fact is that the conscious mind emerges when we've collected many neurons
together, not many doughnuts or low-level computer instructions. Why should
the trick work when I substitute simple computer instructions for neurons? Of
course, it might work. But there isn't any reason to believe it would.
My fellow anticognitivist John Searle made essentially this argument in a
paper that referred to the "causal properties" of the brain. His opponents
mocked it as reactionary stuff. They asserted that since Searle is unable to
say just how these "causal properties" work, his argument is null and void.
Which is nonsense again. I don't need to know anything at all about water
molecules to realize that large groups of them yield water, whereas large
groups of krypton atoms don't.
Why the Cognitive Spectrum Is More Exciting than Consciousness
To say that building a useful conscious mind is highly unlikely is not to say
that AI has nothing worth doing. Consciousness has been a "mystery" (as
Turing called it) for thousands of years, but the mind holds other mysteries,
too. Creativity is one of the most important; it's a brick wall that
psychology and philosophy have been banging their heads against for a long
time. Why should two people who seem roughly equal in competence and
intelligence differ dramatically in creativity? It's widely agreed that
discovering new analogies is the root (or one root) of creativity. But how
are new analogies discovered? We don't know. In his 1983 classic The
Modularity of Mind, Jerry Fodor wrote, "It is striking that, while everybody
thinks analogical reasoning is an important ingredient in all sorts of
cognitive achievements that we prize, nobody knows anything about how it
works."
Furthermore, to speak of the mystery of consciousness makes consciousness
sound like an all-or-nothing proposition. But how do we explain the different
kinds of consciousness we experience? "Ordinary" consciousness is different
from your "drifting" state when you are about to fall asleep and you register
external events only vaguely. Both are different from hallucination as
induced by drugs, mental illness--or life. We hallucinate every day, when we
fall asleep and dream.
And how do we explain the difference between a child's consciousness and an
adult's? Or the differences between child-style and adult-style thinking?
Dream thought is different from drifting or free-associating pre-sleep
thought, which is different from "ordinary" thought. We know that children
tend to think more concretely than adults. Studies have also suggested that
children are better at inventing metaphors. And the keenest of all observers
of human thought, the English Romantic poets, suggest that dreaming and
waking consciousness are less sharply distinguished for children than for
adults. Of his childhood, Wordsworth writes (in one of the most famous short
poems in English), "There was a time when meadow, grove, and stream, / The
earth, and every common sight, / To me did seem / Apparelled in celestial
light, / The glory and the freshness of a dream."
Today's cognitive science and philosophy can't explain any of these mysteries.
The philosophy and science of mind has other striking blind spots, too. AI
researchers have been working for years on common sense. Nonetheless, as
Fodor writes in The Mind Doesn't Work That Way, "the failure of artificial
intelligence to produce successful simulations of routine commonsense
cognitive competences is notorious, not to say scandalous." But the scandal
is wider than Fodor reports. AI has been working in recent years on emotion,
too, but has yet to understand its integral role in thought.
In short, there are many mysteries to explain--and many "cognitive
competences" to understand. AI--and software in general--can profit from
progress on these problems even if it can't build a conscious computer.
These observations lead me to believe that the "cognitive continuum" (or,
equally, the consciousness continuum) is the most important and exciting
research topic in cognitive science and philosophy today.
What is the "cognitive continuum"? And why care about it? Before I address
these questions, let me note that the cognitive continuum is not even a
scientific theory. It is a "prescientific theory"--like "the earth is round."
Anyone might have surmised that the earth is round, on the basis of everyday
observations--especially the way distant ships sink gradually below (or rise
above) the horizon. No special tools or training were required. That the
earth is round leaves many basic phenomena unexplained: the tides, the
seasons, climate, and so on. But unless we know that the earth is round, it's
hard to progress on any of these problems.
The cognitive continuum is the same kind of theory. I don't claim that it's a
millionth as important as the earth's being round. But for me as a student of
human thought, it's at least as exciting.
What is this "continuum"? It's a spectrum (the "cognitive spectrum") with
infinitely many intermediate points between two endpoints.
When you think, the mind assembles thought trains--sequences of distinct
thoughts or memories. (Sometimes one blends into the next, and sometimes our
minds go blank. But usually we can describe the train that has just passed.)
Sometimes our thought trains are assembled--so it seems--under our conscious,
deliberate control. Other times our thoughts wander, and the trains seem to
assemble themselves. If we start with these observations and add a few simple
facts about "cognitive behavior," a comprehensive picture of thought emerges
almost by itself.
Obviously, you must be alert to think analytically. To solve a set of
mathematical equations or follow a proof, you need to focus your attention.
Your concentration declines as you grow tired over the day.
And your mind is in a strange state just before you fall asleep: a
free-associative state in which, rather than one thought following logically
from another, each thought merely "suggests" the next. In this state, you
cannot focus: if you decide to think about one thing, you soon find yourself
thinking about something else (which the first thing "suggested"), and then something else,
and so on. In fact, cognitive psychologists have discovered that we start to
dream before we fall asleep. So the mental state right before sleep is the
state of dreaming.
Since we start the day in one state (focused) and finish in another
(free-associating, unfocused), the two must be connected. Over the day, focus
declines--perhaps steadily, perhaps in a series of oscillations.
Which suggests that there is a continuum of mental states between highest
focus and lowest. Your "focus level" is a large factor in determining your
mode of thought (or of consciousness) at any moment. This spectrum must
stretch from highest-focus thought (best for reasoning or analysis) downward
into modes based more on experience or common sense than on abstract
reasoning; down further to the relaxed, drifting thought that might accompany
gazing out a window; down further to the uncontrolled free association that
leads to dreaming and sleep--where the spectrum bottoms out.
Low focus means that your tendency (not necessarily your ability) to
free-associate increases. A wide-awake person can free-associate if he tries;
an exhausted person has to try hard not to free-associate. At the high end,
you concentrate unless you try not to. At the low end, you free-associate
unless you try not to.
Notice that the role of associative recollection--in which one thought or
memory causes you to recall another--increases as you move down-spectrum.
Reasoning works (theoretically) from first principles. But common sense
depends on your recalling a familiar idea or technique, or a previous
experience. When your mind drifts as you look out a window, one recollection
leads to another, and to a third, and onward--but eventually you return to
the task at hand. Once you reach the edge of sleep, though, free association
goes unchecked. And when you dream, one character or scene transforms itself
into another smoothly and illogically--just as one memory transforms itself
into another in free association. Dreaming is free association "from the
inside."
At the high-focus end, you assemble your thought train as if you were
assembling a comic strip or a storyboard. You can step back and "see" many
thoughts at once. (To think analytically, you must have your premises, goal,
and subgoals in mind.) At the high-focus end, you manipulate your thoughts as
if they were objects; you control the train.
At the bottom, it's just the opposite. You don't control your thoughts. You
say, "my mind is wandering," as if you and your mind were separate, as if
your thoughts were roaming around by themselves.
If at high focus you manipulate your thoughts "from the outside," at low
focus you step into each thought as if you were entering a room; you inhabit
it. That's what hallucination means. The opposite of high focus, where you
control your thoughts, is hallucination--where your thoughts control you.
They control your perceived environment and experiences; you "inhabit" each
in turn. (We sometimes speak of "surrendering" to sleep; surrendering to your
thoughts is the opposite of controlling them.)
At the high-focus end, your "I" is separate from your thought train,
observing it critically and controlling it. At the low end, your "I" blends
into it (or climbs aboard).
The cognitive continuum is, arguably, the single most important fact about
thought. If we accept its existence, we can explain and can model (say, in
software) the dynamics of thought. Thought styles change throughout the day
as our focus level changes. (Focus levels depend, in turn, partly on
personality and intelligence: some people are capable of higher focus; some
are more comfortable in higher-focus states.)
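What might such a software model look like? Here is a minimal sketch, entirely my own illustration rather than anything the field has built: a single focus parameter between 0 and 1 decides, at each step, whether the next thought comes from a goal-directed plan or from associative recall. The memories, associations, and numbers are invented for the example.

# A thought train steered by a single "focus" parameter: near 1.0 it follows
# a goal-directed plan; near 0.0 it drifts by association.
import random

def next_thought(current, plan, associations, focus):
    """focus in [0, 1]: 1.0 = fully goal-directed, 0.0 = pure free association."""
    if plan and random.random() < focus:
        return plan.pop(0)                   # high focus: take the next planned step
    linked = associations.get(current, [])
    return random.choice(linked) if linked else current   # low focus: follow an association

def thought_train(start, goal_plan, associations, focus, length=6):
    plan = list(goal_plan)                   # copy, so the plan can be consumed
    train, thought = [start], start
    for _ in range(length):
        thought = next_thought(thought, plan, associations, focus)
        train.append(thought)
    return train

# Invented associations: at focus=0.9 the train sticks to the proof;
# at focus=0.1 it wanders toward the window.
links = {"rain": ["spring", "umbrella"], "spring": ["childhood", "garden"],
         "umbrella": ["London"], "childhood": ["candy shop"]}
print(thought_train("rain", ["premise", "lemma", "conclusion"], links, focus=0.9))
print(thought_train("rain", ["premise", "lemma", "conclusion"], links, focus=0.1))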
It also seems logical to surmise that cognitive maturing increases the focus
level you are able to reach and sustain--and therefore increases your ability
and tendency to think abstractly.
Even more important: if we accept the existence of the spectrum, an
explanation and model of analogy discovery--thus, of creativity--falls into
our laps.
As you move down-spectrum, where you inhabit (not observe) your thoughts, you
feel them. In other words, as you move down-spectrum, emotions emerge.
Dreaming, at the bottom, is emotional.
Emotions are a powerful coding or compression device. A bar code can
encapsulate or encode much information. An emotion is a "mental bar code"
that encapsulates a memory. But the function E(m)--the "emotion" function
that takes a memory m and yields the emotion you in particular feel when you
think about m--does not generate unique values. Two different-seeming
memories can produce the same emotion.
How do we invent analogies? What made Shakespeare write, "Shall I compare
thee to a summer's day?" Shakespeare's lady didn't look like a summer's day.
(And what does a "summer's day" look like?)
An analogy is a two-element thought train--"a summer's day" followed by the
memory of some person. Why should the mind conjure up these two elements in
succession? What links them?
Answer: in some cases (perhaps in many), their "emotional bar codes"
match--or are sufficiently similar that one recalls the other. The lady and
the summer's day made the poet feel the same sort of way.
We experience more emotions than we can name. "Mildly happy," "happy,"
"ebullient," "elated"; our choice of English words is narrow. But how do you
feel when you are about to open your mailbox, expecting a letter that will
probably bring good news but might be crushing? When you see a rhinoceros?
These emotions have no names. But each "represents" or "encodes" some
collection of circumstances. Two experiences that seem to have nothing in
common might awaken--in you only--the same emotion. And you might see,
accordingly, an analogy that no one else ever saw.
The cognitive spectrum suggests that analogies are created by shared
emotion--the linking of two thoughts with shared or similar emotional content.
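As a toy illustration of that claim (my own sketch; the memories and their "codes" are invented), suppose each memory carries an emotional bar code in the form of a small feature vector, and an analogy is simply the pair of otherwise unrelated memories whose codes most nearly match.

# Analogy as matching "emotional bar codes": invented memories, invented codes.
import math

def similarity(a, b):
    """Cosine similarity between two emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# E(m): made-up emotional codes, e.g. (warmth, calm, awe) on a 0-1 scale.
emotional_code = {
    "a summer's day": (0.9, 0.8, 0.4),
    "the lady":       (0.9, 0.7, 0.5),
    "a tax form":     (0.1, 0.2, 0.0),
}

def best_analogy(memory, candidates):
    """Return the candidate whose emotional code most resembles the memory's."""
    code = emotional_code[memory]
    return max((c for c in candidates if c != memory),
               key=lambda c: similarity(code, emotional_code[c]))

print(best_analogy("the lady", emotional_code))   # -> "a summer's day"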
To build a simulated unconscious mind, we don't need a computer with real
emotions; simulated emotions will do. Achieving them will be hard. So will
representing memories (with all their complex "multi-media" data).
But if we take the route Turing hinted at back in 1950, if we forget about
consciousness and concentrate on the process of thought, there's every reason
to believe that we can get AI back on track--and that AI can produce powerful
software and show us important things about the human mind.