Artificial general intelligence: Are we close, and does it even make sense to try?


Sometimes Legg talks about AGI as a kind of multi-tool—one machine that solves many different problems, without a new one having to be designed for each additional challenge. On that view, it wouldn’t be any more intelligent than AlphaGo or GPT-3; it would just have more capabilities. It would be a general-purpose AI, not a full-fledged intelligence. But he also talks about a machine you could interact with as if it were another person. He describes a kind of ultimate playmate: “It would be wonderful to interact with a machine and show it a new card game and have it understand and ask you questions and play the game with you,” he says. “It would be a dream come true.”

When people talk about AGI, it is typically these human-like abilities that they have in mind. Thore Graepel, a colleague of Legg’s at DeepMind, likes to use a quote from science fiction author Robert Heinlein, which seems to mirror Minsky’s words: “A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”

And yet, fun fact: Graepel’s go-to description is spoken by a character called Lazarus Long in Heinlein’s 1973 novel Time Enough for Love. Long is a superman of sorts, the result of a genetic experiment that lets him live for hundreds of years. During that extended time, Long lives many lives and masters many skills. In other words, Minsky describes the abilities of a typical human; Graepel does not. 

The goalposts of the search for AGI are constantly shifting in this way. What do people mean when they talk of human-like artificial intelligence—human like you and me, or human like Lazarus Long? For Pesenti, this ambiguity is a problem. “I don’t think anybody knows what it is,” he says. “Humans can’t do everything. They can’t solve every problem—and they can’t make themselves better.”

Go champion Lee Sedol (left) shakes hands with DeepMind co-founder Demis Hassabis.

GETTY

So what might an AGI be like in practice? Calling it “human-like” is at once vague and too specific. Humans are the best example of general intelligence we have, but humans are also highly specialized. A quick glance across the varied universe of animal smarts—from the collective cognition seen in ants to the problem-solving skills of crows or octopuses to the more recognizable but still alien intelligence of chimpanzees—shows that there are many ways to build a general intelligence. 

Even if we do build an AGI, we may not fully understand it. Today’s machine-learning models are typically “black boxes,” meaning they arrive at accurate results through paths of calculation no human can make sense of. Add self-improving superintelligence to the mix and it’s clear why science fiction often provides the easiest analogies. 
