All The Little Switches


Back when computers first came to be, a small group of researchers had a vision of computation that went well beyond simulating nuclear explosions.  Douglas Hofstadter, in his classic book Gödel, Escher, Bach, later explored a related idea: the emergence of intelligence from non-intelligent components – or the emergence of a consciousness from a mass of neurons.  Early artificial intelligence researchers saw a striking connection between computers and the mind, between giant, room-sized machines full of vacuum tubes and the mass of chemicals and electrical signals firing around inside your skull.

A neuron can (sort of) be seen as a switch: it takes in signals on one end and, if certain criteria are met, sends a new signal out the other end.  In a modern computer, the corresponding component is the transistor, the switch that is the basic element of all electronic logic.  A transistor takes in two signals, one being a message and the other an “on/off” control.  If the control is on, the transistor passes the message down the wire; otherwise, the transistor stays silent.
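The switch analogy can be made concrete. The sketch below (in Python, purely as an illustration – real transistors are analog devices, and the function names here are my own) models a transistor as a gated pass-through, then wires a few of them into a NAND gate. NAND is "universal": every other logic gate, and ultimately an entire processor, can be built from copies of it.

```python
def transistor(gate, signal):
    """A transistor as a switch: pass the signal only when the gate is on."""
    return signal if gate else False

def nand(a, b):
    # Two transistors in series: the signal gets through only
    # when both gates are on; the output is the inverse of that.
    return not transistor(a, transistor(b, True))

# Everything else follows from NAND alone.
def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

def or_gate(a, b):
    return nand(not_gate(a), not_gate(b))

print(and_gate(True, True))   # → True
print(or_gate(False, False))  # → False
```

A billion-transistor chip is, at bottom, this same trick repeated at scale.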

A single transistor is of limited use, but a billion of them arranged in an intelligent way can build a computer.  In a similar way, a neuron in isolation will not do anything remarkable, but a hundred billion of them connected within a body can learn, ponder, laugh, do math, and occasionally write blog posts.  As far as I can tell, however, my laptop does not have a sense of humor, nor is it especially independent.  So what, exactly, is it lacking?  This question has proved extremely difficult to answer, despite high expectations for the computer-fueled merger between artificial intelligence and neuroscience.

One of the key questions in computer science is how to create a machine that understands human language.  Unlike vision or hearing, which are complicated but comparatively well-defined tasks, understanding language means combining memory, inference, and an implicit grasp of grammar in real time – quite a feat, but one that human children manage with no difficulty.  Computers, on the other hand, find it nearly impossible.

An early attempt at the language problem was Eliza, a mechanical therapist that probably won’t do much more than entertain you.  Since then, language processing has become far more advanced, but in a highly mechanical way.  There are grammar checkers and statistical sentence parsers, but there is no system that can pull a theme out of a work of literature, or an emotion out of a work of art.

It is tempting to say that these highly subjective tasks are somehow exclusive to humans and will never be replicated by a machine.  But this kind of thinking is an intellectual dead end, and betting against science is generally an unwise move.  There is a kind of magic going on in the brain.  We don’t understand it yet, but even the ability to try is remarkable.  As far as we can tell, the brain is the only object in nature capable of considering itself.



