Creating Artificial Intelligence Based on the Real Thing

Ever since the early days of modern computing in the 1940s, the biological metaphor has been irresistible. The first computers — room-size behemoths — were referred to as “giant brains” or “electronic brains” in headlines and everyday speech. As computers improved and became capable of some tasks familiar to humans, like playing chess, the term became “artificial intelligence.” DNA, it is said, is the original software.

For the most part, the biological metaphor has long been just that — a simplifying analogy rather than a blueprint for how to do computing. Engineering, not biology, guided the pursuit of artificial intelligence. As Frederick Jelinek, a pioneer in speech recognition, put it, “airplanes don’t flap their wings.”

Yet the principles of biology are gaining ground as a tool in computing. The shift in thinking results from advances in neuroscience and computer science, and from the prod of necessity.

The physical limits of conventional computer designs are within sight — not today or tomorrow, but soon enough. Nanoscale circuits cannot shrink much further. Today’s chips are power hogs, running hot, which curbs how much of a chip’s circuitry can be used.

These limits loom just as demand for computing capacity is accelerating, to make sense of a surge of new digital data from sensors, online commerce, social networks, video streams and corporate and government databases. To meet the challenge without gobbling the world’s energy supply, a different approach will be needed.

And biology, scientists say, promises to contribute more than metaphors. “Every time we look at this, biology provides a clue as to how we should pursue the frontiers of computing,” said John E. Kelly, the director of research at I.B.M. Dr. Kelly points to Watson, the question-answering computer that played “Jeopardy!” and beat two human champions earlier this year. I.B.M.’s clever machine consumes 85,000 watts of electricity, while the human brain runs on just 20 watts. “Evolution figured this out,” Dr. Kelly said.

Several biologically inspired paths are being explored by computer scientists in universities and corporate laboratories worldwide. But researchers from I.B.M. and four universities — Cornell, Columbia, the University of Wisconsin, and the University of California, Merced — are engaged in a project that seems particularly intriguing.

The project, a collaboration of computer scientists and neuroscientists begun three years ago, has been encouraging enough that in August it won a $21 million round of government financing from the Defense Advanced Research Projects Agency, bringing the total to $41 million in three rounds. In recent months, the team has developed prototype “neurosynaptic” microprocessors, or chips that operate more like neurons and synapses than like conventional semiconductors.

But since 2008, the project itself has evolved, becoming more focused, if not scaled back. Its experience suggests what designs, concepts and techniques might be usefully borrowed from biology to push the boundaries of computing, and what cannot be applied, or even understood.

At the outset, Dharmendra S. Modha, the I.B.M. computer scientist leading the project, described the research grandly as “the quest to engineer the mind by reverse-engineering the brain.” The project embarked on supercomputer simulations intended to equal the complexity of animal brains — a cat and then a monkey. In science blogs and online forums, some neuroscientists sharply criticized I.B.M. for what they regarded as exaggerated claims of what the project could achieve.

These days at the I.B.M. Almaden Research Center in San Jose, Calif., there is not a lot of talk of reverse-engineering the brain. Wide-ranging ambitions that narrow over time, Dr. Modha explained, are part of research and discovery, even if his earlier rhetoric was inflated or misunderstood. “Deciding what not to do is just as important as deciding what to do,” Dr. Modha said. “We’re not trying to replicate the brain. That’s impossible. We don’t know how the brain works, really.”

The discussion and debate across disciplines have helped steer the research, as the team pursues the goals set out by Darpa, the Pentagon’s research agency. The technology produced, according to the guidelines, should be self-organizing, able to “learn” instead of merely responding to conventional programming commands, and should consume very little power.

The concept of neuromorphic electronic systems is more than two decades old; Carver Mead, a renowned computer scientist, described such devices in an engineering journal article in 1990. Earlier biologically inspired devices, scientists say, were mostly analog, single-purpose sensors that mimicked one function, like an electronic equivalent of a retina for sensing image data. But the I.B.M. and university researchers are pursuing a more versatile digital technology.

“It seems that we can build a computing architecture that is quite general-purpose and could be used for a large class of applications,” said Rajit Manohar, a professor of electrical and computer engineering at Cornell University.
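To make the contrast with conventional programming concrete, consider how a spiking neuron can be modeled in software. The short Python sketch below simulates a single leaky integrate-and-fire neuron, the textbook abstraction behind event-driven neuromorphic designs; the function name, parameters and values are illustrative assumptions chosen for teaching, not details of the I.B.M. chips.

    # Illustrative sketch only: a leaky integrate-and-fire neuron,
    # the textbook abstraction behind spiking "neurosynaptic" chips.
    # All names and parameter values are assumptions for teaching,
    # not details of the I.B.M. design.

    def simulate_lif(inputs, threshold=1.0, leak=0.9):
        """Integrate input over time; emit a spike (1) whenever the
        membrane potential crosses the threshold, then reset it."""
        potential = 0.0
        spikes = []
        for current in inputs:
            potential = leak * potential + current  # leaky integration
            if potential >= threshold:
                spikes.append(1)   # the neuron "fires"
                potential = 0.0    # and its potential resets
            else:
                spikes.append(0)
        return spikes

    # A steady drip of input eventually pushes the neuron past its
    # threshold, yielding a regular spike train: events, not numbers.
    print(simulate_lif([0.3] * 12))  # -> [0, 0, 0, 1, 0, 0, 0, 1, ...]

Unlike a conventional processor stepping through instructions, such a unit does meaningful work only when it spikes, which is one reason event-driven designs promise the very low power consumption Darpa’s guidelines call for.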
