
Powerful magnetic fields are presently expensive to create but may become almost free in the future. This will allow us to reduce friction in our trains and trucks, revolutionizing transportation, and eliminate losses in electrical transmission. This will also allow us to move objects by sheer thought. With tiny supermagnets placed inside different objects, we will be able to move them around almost at will.

In the near future, we will assume that everything has a tiny chip in it, making it intelligent. In the far future, we will assume that everything has a tiny superconductor inside it that can generate bursts of magnetic energy, sufficient to move it across a room. Assume, for example, that a table has a superconductor in it. Normally, this superconductor carries no current. But when a tiny electrical current is added, it can create a powerful magnetic field, capable of sending the table across the room. By thinking, we should be able to activate the supermagnet embedded within an object and thereby make it move.

In the X-Men movies, for example, the evil mutants are led by Magneto, who can move enormous objects by manipulating their magnetic properties. In one scene, he even moves the Golden Gate Bridge via the power of his mind. But there are limits to this power. For example, it is difficult to move an object like plastic or paper that has no magnetic properties. (At the end of the first X-Men movie, Magneto is confined in a jail made completely of plastic.) In the future, room-temperature superconductors may be hidden inside common items, even nonmagnetic ones. If a current is turned on within the object, it will become magnetic and hence can be moved by an external magnetic field that is controlled by your thoughts.

We will also have the power to manipulate robots and avatars by thinking. This means that, as in the movies Surrogates and Avatar, we might be able to control the motions of our substitutes and even feel pain and pressure. This might prove useful if we need a superhuman body to make repairs in outer space or rescue people in emergencies. Perhaps one day, our astronauts may be safely on earth, controlling superhuman robotic bodies as they move on the moon. We will discuss this more in the next chapter.

We should also point out that possessing this telekinetic power is not without risks. As I mentioned before, in the movie Forbidden Planet, an ancient civilization millions of years ahead of ours attains its ultimate dream, the ability to control anything with the power of their minds. As one trivial example of their technology, they created a machine that can turn your thoughts into a 3-D image. You put the device on your head, imagine something, and a 3-D image materializes inside the machine. Although it seemed impossibly advanced to movie audiences back in the 1950s, this device will be available in the coming decades. Also, in the movie, there was a device that harnessed your mental energy to lift a heavy object. But as we know, we don't have to wait millions of years for this technology - it's already here, in the form of a toy. You place EEG electrodes on your head, the toy detects the electrical impulses of your brain, and then it lifts a tiny object, just as in the movie. In the future, many games will be played by sheer thought. Teams may be mentally wired up so that they can move a ball by thinking about it, and the team that can best mentally move the ball wins.

The climax of Forbidden Planet may give us pause. Despite the vastness of their technology, the aliens perished because they failed to notice a defect in their plans. Their powerful machines tapped not only into their conscious thoughts but also into their subconscious desires. The savage, long-suppressed thoughts of their violent, ancient evolutionary past sprang back to life, and the machines materialized every subconscious nightmare into reality. On the eve of attaining their greatest creation, this mighty civilization was destroyed by the very technology they hoped would free them from instrumentality.

For us, however, this is still a distant danger. A device of that magnitude won't be available until the twenty-second century. But we face a more immediate concern. By 2100, we will also live in a world populated by robots that have humanlike characteristics. What happens if they become smarter than us?

Will robots inherit the earth? Yes, but they will be our children.

-MARVIN MINSKY

FUTURE OF AI: Rise of the Machines

The gods of mythology with their divine power could animate the inanimate. According to the Bible, in Genesis, Chapter 2, God created man out of dust, and then "breathed into his nostrils the breath of life, and man became a living soul." According to Greek and Roman mythology, the goddess Venus could make statues spring to life. Venus, taking pity on the artist Pygmalion when he fell hopelessly in love with his statue, granted his fondest wish and turned the statue into a beautiful woman, Galatea. The god Vulcan, the blacksmith to the gods, could even create an army of mechanical servants made of metal that he brought to life.

Today, we are like Vulcan, forging in our laboratories machines that breathe life not into clay but into steel and silicon. But will these machines liberate the human race or enslave it? If one reads the headlines today, it seems as if the question is already settled: the human race is about to be rapidly overtaken by our own creation.

THE END OF HUMANITY?

The headline in the New York Times said it all: "Scientists Worry Machines May Outsmart Man." The world's top leaders in artificial intelligence (AI) had gathered at the Asilomar conference in California in 2009 to solemnly discuss what happens when the machines finally take over. As in a scene from a Hollywood movie, delegates asked probing questions, such as, What happens if a robot becomes as intelligent as your spouse?

As compelling evidence of this robotic revolution, people pointed to the Predator drone, a pilotless robot plane that is now targeting terrorists with deadly accuracy in Afghanistan and Pakistan; cars that can drive themselves; and ASIMO, the world's most advanced robot, which can walk, run, climb stairs, dance, and even serve coffee.

Eric Horvitz of Microsoft, an organizer of the conference, noting the excitement surging through the meeting, said, "Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture." (The Rapture is when true believers ascend to heaven at the Second Coming. The critics dubbed the spirit of the Asilomar conference "the rapture of the nerds.")

That same summer, the movies dominating the silver screen seemed to amplify this apocalyptic picture. In Terminator Salvation, a ragtag band of humans battles huge mechanical behemoths that have taken over the earth. In Transformers: Revenge of the Fallen, futuristic robots from space use humans as pawns and the earth as a battleground for their interstellar wars. In Surrogates, people prefer to live their lives as perfect, beautiful, superhuman robots rather than face the reality of their own aging, decaying bodies.

Judging from the headlines and the theater marquees, it looks like the last gasp for humans is just around the corner. AI pundits are solemnly asking: Will we one day have to dance behind bars as our robot creations throw peanuts at us, as we do at bears in a zoo? Or will we become lapdogs to our creations?

But upon closer examination, there is less than meets the eye. Certainly, tremendous breakthroughs have been made in the last decade, but things have to be put into perspective.

The Predator, a 27-foot drone that fires deadly missiles at terrorists from the sky, is controlled by a human with a joystick. A human, most likely a young veteran of video games, sits comfortably behind a computer screen and selects the targets. The human, not the Predator, is calling the shots. And the cars that drive themselves are not making independent decisions as they scan the horizon and turn the steering wheel; they are following a GPS map stored in their memory. So the nightmare of fully autonomous, conscious, and murderous robots is still in the distant future.

Not surprisingly, although the media hyped some of the more sensational predictions made at the Asilomar conference, most of the working scientists doing the day-to-day research in artificial intelligence were much more reserved and cautious. When asked when the machines will become as smart as us, the scientists had a surprising variety of answers, ranging from 20 to 1,000 years.

So we have to differentiate between two types of robots. The first is remote-controlled by a human or programmed and pre-scripted like a tape recorder to follow precise instructions. These robots already exist and generate headlines. They are slowly entering our homes and also the battlefield. But without a human making the decisions, they are largely useless pieces of junk. So these robots should not be confused with the second type, which is truly autonomous, the kind that can think for itself and requires no input from humans. It is these autonomous robots that have eluded scientists for the past half century.

ASIMO THE ROBOT

AI researchers often point to Honda's robot called ASIMO (Advanced Step in Innovative Mobility) as a graphic demonstration of the revolutionary advances made in robotics. It is 4 feet 3 inches tall, weighs 119 pounds, and resembles a young boy with a black-visored helmet and a backpack. ASIMO, in fact, is remarkable: it can realistically walk, run, climb stairs, and talk. It can wander around rooms, pick up cups and trays, respond to some simple commands, and even recognize some faces. It even has a large vocabulary and can speak in different languages. ASIMO is the result of twenty years of intense work by scores of Honda scientists, who have produced a marvel of engineering.

On two separate occasions, I have had the privilege of personally interacting with ASIMO at conferences, when hosting science specials for BBC/Discovery. When I shook its hand, it responded in an entirely humanlike way. When I waved to it, it waved right back. And when I asked it to fetch me some juice, it turned around and walked toward the refreshment table with eerily human motions. Indeed, ASIMO is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside. It can even dance better than I can.

At first, it seems as if ASIMO is intelligent, capable of responding to human commands, holding a conversation, and walking around a room. Actually, the reality is quite different. When I interacted with ASIMO in front of the TV camera, every motion, every nuance was carefully scripted. In fact, it took about three hours to film a simple five-minute scene with ASIMO. And even that required a team of ASIMO handlers who were furiously reprogramming the robot on their laptops after we filmed each scene. Although ASIMO talks to you in different languages, it is essentially a tape recorder playing back prerecorded messages. It simply parrots what is programmed by a human. Although ASIMO becomes more sophisticated every year, it is incapable of independent thought. Every word, every gesture, every step has to be carefully rehearsed by ASIMO's handlers.

Afterward, I had a candid talk with one of ASIMO's inventors, and he admitted that ASIMO, despite its remarkably humanlike motions and actions, has the intelligence of an insect. Most of its motions have to be carefully programmed ahead of time. It can walk in a totally lifelike way, but its path has to be carefully programmed or it will stumble over the furniture, since it cannot really recognize objects around the room.

By comparison, even a cockroach can recognize objects, scurry around obstacles, look for food and mates, evade predators, plot complex escape routes, hide among the shadows, and disappear in the cracks, all within a matter of seconds.

AI researcher Thomas Dean of Brown University has admitted that the lumbering robots he is building are "just at the stage where they're robust enough to walk down the hall without leaving huge gouges in the plaster." As we shall later see, at present our most powerful computers can barely simulate the neurons of a mouse, and then only for a few seconds. It will take many decades of hard work before robots become as smart as a mouse, then a rabbit, a dog or cat, and finally a monkey.

HISTORY OF AI

Critics sometimes point out a pattern: every thirty years, AI practitioners claim that superintelligent robots are just around the corner. Then, when there is a reality check, a backlash sets in.

In the 1950s, when electronic computers were first introduced after World War II, scientists dazzled the public with the notion of machines that could perform miraculous feats: picking up blocks, playing checkers, and even solving algebra problems. It seemed as if truly intelligent machines were just around the corner. The public was amazed, and soon there were magazine articles breathlessly predicting the time when a robot would be in everyone's kitchen, cooking dinner, or cleaning the house. In 1965, AI pioneer Herbert Simon declared, "Machines will be capable, within twenty years, of doing any work a man can do." But then the reality set in. Chess-playing machines could not win against a human expert, and could play only chess, nothing more. These early robots were like a one-trick pony, performing just one simple task.

In fact, in the 1950s, real breakthroughs were made in AI, but because the progress was vastly overstated and overhyped, a backlash set in. In 1974, under a chorus of rising criticism, the U.S. and British governments cut off funding. The first AI winter set in.

Today, AI researcher Paul Abrahams shakes his head when he looks back at those heady times in the 1950s when he was a graduate student at MIT and anything seemed possible. He recalled, "It's as though a group of people had proposed to build a tower to the moon. Each year they point with pride at how much higher the tower is than it was the previous year. The only trouble is that the moon isn't getting much closer."

In the 1980s, enthusiasm for AI peaked once again. This time the Pentagon poured millions of dollars into projects like the smart truck, which was supposed to travel behind enemy lines, do reconnaissance, rescue U.S. troops, and return to headquarters, all by itself. The Japanese government even put its full weight behind the ambitious Fifth Generation Computer Systems Project, sponsored by the powerful Japanese Ministry of International Trade and Industry. The Fifth Generation Project's goal was, among others, to have a computer system that could speak conversational language, have full reasoning ability, and even anticipate what we want, all by the 1990s.

Unfortunately, the only thing that the smart truck did was get lost. And the Fifth Generation Project, after much fanfare, was quietly dropped without explanation. Once again, the rhetoric far outpaced the reality. In fact, there were real gains made in AI in the 1980s, but because progress was again overhyped, a second backlash set in, creating the second AI winter, in which funding again dried up and disillusioned people left the field in droves. It became painfully clear that something was missing.

In 1992, AI researchers had mixed feelings when they held a special celebration in honor of the movie 2001, in which a computer called HAL 9000 runs amok and slaughters the crew of a spaceship. The movie, filmed in 1968, predicted that by 1992 there would be robots that could freely converse with any human on almost any topic and also command a spaceship. Unfortunately, it was painfully clear that the most advanced robots had a hard time keeping up with the intelligence of a bug.

In 1997 IBM"s Deep Blue accomplished a historic breakthrough by decisively beating the world chess champion Gary Kasparov. Deep Blue was an engineering marvel, computing 11 billion operations per second. However, instead of opening the floodgates of artificial intelligence research and ushering in a new age, it did precisely the opposite. It highlighted only the primitiveness of AI research. Upon reflection, it was obvious to many that Deep Blue could not think. It was superb at chess but would score 0 on an IQ exam. After this victory, it was the loser, Kasparov, who did all the talking to the press, since Deep Blue could not talk at all. Grudgingly, AI researchers began to appreciate the fact that brute computational power does not equal intelligence. AI researcher Richard Heckler says, "Today, you can buy chess programs for $49 that will beat all but world champions, yet no one thinks they"re intelligent."

But with Moore"s law spewing out new generations of computers every eighteen months, sooner or later the old pessimism of the past generation will be gradually forgotten and a new generation of bright enthusiasts will take over, creating renewed optimism and energy in the once-dormant field. Thirty years after the last AI winter set in, computers have advanced enough so that the new generation of AI researchers are again making hopeful predictions about the future. The time has finally come for AI, say its supporters. This time, it"s for real. The third try is the lucky charm. But if they are right, are humans soon to be obsolete?

IS THE BRAIN A DIGITAL COMPUTER?

One fundamental problem, as mathematicians now realize, is that they made a crucial error fifty years ago in thinking the brain was analogous to a large digital computer. But now it is painfully obvious that it isn't. The brain has no Pentium chip, no Windows operating system, no application software, no CPU, no programming, and no subroutines that typify a modern digital computer. In fact, the architecture of digital computers is quite different from that of the brain, which is a learning machine of some sort, a collection of neurons that constantly rewires itself every time it learns a task. (A PC, however, does not learn at all. Your computer is just as dumb today as it was yesterday.)

So there are at least two approaches to modeling the brain. The first, the traditional top-down approach, is to treat robots like digital computers, and program all the rules of intelligence from the very beginning. A digital computer, in turn, can be broken down into something called a Turing machine, a hypothetical device introduced by the great British mathematician Alan Turing. A Turing machine consists of three basic components: an input, a central processor that digests this data, and an output. All digital computers are based on this simple model. The goal of this approach is to have a CD-ROM that has all the rules of intelligence codified on it. By inserting this disk, the computer suddenly springs to life and becomes intelligent. So this mythical CD-ROM contains all the software necessary to create intelligent machines.
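To make this input-processor-output picture concrete, here is a minimal sketch in Python. It is an invented toy, not code from any actual AI project: a tiny Turing machine whose entire "intelligence" is a small rule table that increments a binary number.

```python
# A toy Turing machine: an input tape, a central processor (the rule table),
# and an output tape. The machine below, which adds 1 to a binary number,
# is an invented example chosen only for brevity.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run a one-tape Turing machine until it enters the 'halt' state."""
    tape = list(tape)                    # the input
    head = len(tape) - 1                 # start at the rightmost symbol
    while state != "halt":
        symbol = tape[head] if head >= 0 else blank
        state, write, move = rules[(state, symbol)]   # the central processor
        if head < 0:                     # grow the tape to the left if needed
            tape.insert(0, blank)
            head = 0
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)    # the output

# Rule table for binary increment: turn trailing 1s into 0s, then write a 1.
rules = {
    ("start", "1"): ("start", "0", "L"),
    ("start", "0"): ("halt",  "1", "L"),
    ("start", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine("1011", rules))  # prints 1100
```

Everything this machine will ever do is spelled out in its rule table ahead of time, which is the essence of the top-down approach.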

However, our brain has no programming or software at all. Our brain is more like a "neural network," a complex jumble of neurons that constantly rewires itself.

Neural networks follow Hebb"s rule: every time a correct decision is made, those neural pathways are reinforced. It does this by simply changing the strength of certain electrical connections between neurons every time it successfully performs a task. (Hebb"s rule can be expressed by the old question: How does a musician get to Carnegie Hall? Answer: practice, practice, practice. For a neural network, practice makes perfect. Hebb"s rule also explains why bad habits are so difficult to break, since the neural pathway for a bad habit is so well-worn.) Neural networks are based on the bottom-up approach. Instead of being spoon-fed all the rules of intelligence, neural networks learn them the way a baby learns, by b.u.mping into things and learning by experience. Instead of being programmed, neural networks learn the old-fashioned way, through the "school of hard knocks."

Neural networks have a completely different architecture from that of digital computers. If you remove a single transistor in the digital computer's central processor, the computer will fail. However, if you remove large chunks of the human brain, it can still function, with other parts taking over for the missing pieces. Also, it is possible to localize precisely where the digital computer "thinks": its central processor. However, scans of the human brain clearly show that thinking is spread out over large parts of the brain. Different sectors light up in precise sequence, as if thoughts were being bounced around like a Ping-Pong ball.

Digital computers can calculate at nearly the speed of light. The human brain, by contrast, is incredibly slow. Nerve impulses travel at an excruciatingly slow pace of about 200 miles per hour. But the brain more than makes up for this because it is massively parallel, that is, it has 100 billion neurons operating at the same time, each one performing a tiny bit of computation, with each neuron connected to 10,000 other neurons. In a race, a superfast single processor is left in the dust by a superslow parallel processor. (This goes back to the old riddle: if one cat can eat one mouse in one minute, how long does it take a million cats to eat a million mice? Answer: one minute.)

In addition, the brain is not digital. Transistors are gates that can either be open or closed, represented by a 1 or 0. Neurons, too, are digital (they can fire or not fire), but they can also be analog, transmitting continuous signals as well as discrete ones.

TWO PROBLEMS WITH ROBOTS

Given the glaring limitations of computers compared to the human brain, one can appreciate why computers have not been able to accomplish two key tasks that humans perform effortlessly: pattern recognition and common sense. These two problems have defied solution for the past half century. This is the main reason why we do not have robot maids, butlers, and secretaries.

The first problem is pattern recognition. Robots can see much better than a human, but they don't understand what they are seeing. When a robot walks into a room, it converts the image into a jumble of dots. By processing these dots, it can recognize a collection of lines, circles, squares, and rectangles. Then a robot tries to match this jumble, one by one, with objects stored in its memory, an extraordinarily tedious task even for a computer. After many hours of calculation, the robot may match these lines with chairs, tables, and people. By contrast, when we walk into a room, within a fraction of a second, we recognize chairs, tables, desks, and people. Indeed, our brains are mainly pattern-recognizing machines.
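To see why this matching is so laborious, consider a crude sketch. The 5x5 "dot" patterns below are invented and nothing like real robot vision, but they show the basic chore: compare the scene, dot by dot, against every stored template and keep the best score. A real robot would have to do something like this for thousands of objects, viewed from every possible angle.

```python
# A crude illustration of brute-force template matching on invented patterns.
import numpy as np

templates = {
    "chair": np.array([[0,1,0,0,0],
                       [0,1,0,0,0],
                       [0,1,1,1,0],
                       [0,1,0,1,0],
                       [0,1,0,1,0]]),
    "table": np.array([[1,1,1,1,1],
                       [1,0,0,0,1],
                       [1,0,0,0,1],
                       [1,0,0,0,1],
                       [0,0,0,0,0]]),
}

def recognize(scene):
    """Score every stored template by how many dots agree with the scene."""
    scores = {name: int((scene == t).sum()) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores

scene = templates["chair"].copy()
scene[0, 2] = 1                      # a little noise in the image
print(recognize(scene))              # ('chair', {'chair': 24, 'table': 8})
```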

Second, robots do not have common sense. Although robots can hear much better than a human, they don't understand what they are hearing. For example, consider the following statements:

* Children like sweets but not punishment
* Strings can pull but not push
* Sticks can push but not pull
* Animals cannot speak and understand English
* Spinning makes people feel dizzy

For us, each of these statements is just common sense. But not to robots. There is no line of logic or programming that proves that strings can pull but not push. We have learned the truth of these "obvious" statements by experience, not because they were programmed into our memories.

The problem with the top-down approach is that it would take far too many lines of code to capture common sense and mimic human thought. Hundreds of millions of lines of code, for example, are necessary to describe the laws of common sense that a six-year-old child knows. Hans Moravec, former director of the AI laboratory at Carnegie Mellon, laments, "To this day, AI programs exhibit no shred of common sense - a medical diagnosis program, for instance, may prescribe an antibiotic when presented a broken bicycle because it lacks a model of people, disease, or bicycles."

Some scientists, however, cling to the belief that the only obstacle to mastering common sense is brute force. They feel that a new Manhattan Project, like the program that built the atomic bomb, would surely crack the common-sense problem. The crash program to create this "encyclopedia of thought" is called CYC, started in 1984. It was to be the crowning achievement of AI, the project to encode all the secrets of common sense into a single program. However, after several decades of hard work, the CYC project has failed to live up to its own goals.

CYC"s goal is simple: master "100 million things, about the number a typical person knows about the world, by 2007." That deadline, and many previous ones, have slipped by without success. Each of the milestones laid out by CYC engineers has come and gone without scientists being any closer to mastering the essence of intelligence.

MAN VERSUS MACHINE

I once had a chance to match wits with a robot built by MIT's Tomaso Poggio. Although robots cannot recognize simple patterns as well as we can, Poggio was able to create a computer program that can calculate every bit as fast as a human in one specific area: "immediate recognition." This is our uncanny ability to instantly recognize an object even before we are aware of it. (Immediate recognition was important for our evolution, since our ancestors had only a split second to determine if a tiger was lurking in the bushes, even before they were fully aware of it.) For the first time, a robot consistently scored higher than a human on a specific vision recognition test.

The contest between me and the machine was simple. First, I sat in a chair and stared at an ordinary computer screen. Then a picture flashed on the screen for a split second, and I was supposed to press one of two keys as fast as I could, to indicate whether or not I saw an animal in the picture. I had to make a decision as quickly as possible, even before I had a chance to digest the picture. The computer would also make a decision for the same picture.

Embarrassingly enough, after many rapid-fire tests, the machine and I performed about equally. But there were times when the machine scored significantly higher than I did, leaving me in the dust. I was beaten by a machine. (It was one consolation when I was told that the computer gets the right answer 82 percent of the time, but humans score only 80 percent on average.)

The key to Poggio's machine is that it copies lessons from Mother Nature. Many scientists are realizing the truth in the statement, "The wheel has already been invented, so why not copy it?" For example, normally when a robot looks at a picture, it tries to divide it up into a series of lines, circles, squares, and other geometric shapes. But Poggio's program is different.

When we see a picture, we might first see the outlines of various objects, then various features within each object, then shading within these features, and so on. So we split up the image into many layers. As soon as the computer processes one layer of the image, it integrates it with the next layer, and so on. In this way, step by step, layer by layer, it mimics the hierarchical way that our brains process images. (Poggio's program cannot perform all the feats of pattern recognition that we take for granted, such as visualizing objects in 3-D or recognizing thousands of objects from different angles, but it does represent a major milestone in pattern recognition.)

Later, I had an opportunity to see both the top-down and bottom-up approaches in action. I first went to Stanford University's artificial intelligence center, where I met STAIR (Stanford artificial intelligence robot), which uses the top-down approach. STAIR is about 4 feet tall, with a huge mechanical arm that can swivel and grab objects off a table. STAIR is also mobile, so it can wander around an office or home. The robot has a 3-D camera that locks onto an object and feeds the 3-D image into a computer, which then guides the mechanical arm to grab the object. Robots have been grabbing objects like this since the 1960s, and we see them in Detroit auto factories.

But appearances are deceptive. STAIR can do much more. Unlike the robots in Detroit, STAIR is not scripted. It operates by itself. If you ask it to pick up an orange, for example, it can analyze a collection of objects on a table, compare them with the thousands of images already stored in its memory, then identify the orange and pick it up. It can also identify objects more precisely by grabbing them and turning them around.

To test its ability, I scrambled a group of objects on a table, and then watched what happened after I asked for a specific one. I saw that STAIR correctly analyzed the new arrangement and then reached out and grabbed the correct thing. Eventually, the goal is to have STAIR navigate in home and office environments, pick up and interact with various objects and tools, and even converse with people in a simplified language. In this way, it will be able to do anything that a gofer can in an office. STAIR is an example of the top-down approach: everything is programmed into STAIR from the very beginning. (Although STAIR can recognize objects from different angles, it is still limited in the number of objects it can recognize. It would be paralyzed if it had to walk outside and recognize random objects.)

Later, I had a chance to visit New York University, where Yann LeCun is experimenting with an entirely different design, the LAGR (learning applied to ground robots). LAGR is an example of the bottom-up approach: it has to learn everything from scratch, by bumping into things. It is the size of a small golf cart and has two stereo color cameras that scan the landscape, identifying objects in its path. It then moves among these objects, carefully avoiding them, and learns with each pass. It is equipped with GPS and has two infrared sensors that can detect objects in front of it. It contains three high-power Pentium chips and is connected to a gigabit Ethernet network. We went to a nearby park, where the LAGR robot could roam around various obstacles placed in its path. Every time it went over the course, it got better at avoiding the obstacles.

One important difference between LAGR and STAIR is that LAGR is specifically designed to learn. Every time LAGR bumps into something, it moves around the object and learns to avoid that object the next time. While STAIR has thousands of images stored in its memory, LAGR has hardly any images in its memory but instead creates a mental map of all the obstacles it meets, and constantly refines that map with each pass. Unlike the driverless car, which is programmed and follows a route set previously by GPS, LAGR moves all by itself, without any instructions from a human. You tell it where to go, and it takes off. Eventually, robots like these may be found on Mars, on the battlefield, and in our homes.
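The "mental map" idea can be sketched with a toy occupancy grid. This is my own illustration of map-learning in general, not LAGR's actual algorithm: every bump nudges a cell toward "obstacle," every clean pass nudges it toward "clear," and the robot consults the map before moving.

```python
# An illustrative obstacle map: cells drift toward 'obstacle' or 'clear'
# with each pass, so the robot gets better at avoiding things over time.
# This is a generic toy, not the learning system used in LAGR.
import numpy as np

class ObstacleMap:
    def __init__(self, width, height):
        # 0.5 = unknown, near 1 = probably an obstacle, near 0 = clear
        self.grid = np.full((height, width), 0.5)

    def record(self, x, y, bumped, rate=0.3):
        """Nudge the cell toward 'obstacle' or 'clear' after each pass."""
        target = 1.0 if bumped else 0.0
        self.grid[y, x] += rate * (target - self.grid[y, x])

    def is_blocked(self, x, y, threshold=0.7):
        return self.grid[y, x] > threshold

# Two simulated passes over a small course: the robot bumps into (2, 3) twice.
world = ObstacleMap(width=5, height=5)
for _ in range(2):
    world.record(2, 3, bumped=True)
world.record(1, 1, bumped=False)
print(world.is_blocked(2, 3))  # True after repeated bumps
print(world.is_blocked(1, 1))  # False
```

Nothing here is stored ahead of time; the map exists only because the robot has bumped its way around the course, which is the bottom-up philosophy in miniature.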

On one hand, I was impressed by the enthusiasm and energy of these researchers. In their hearts, they believe that they are laying the foundation for artificial intelligence, and that their work will one day impact society in ways we can only begin to understand. But from a distance, I could also appreciate how far they have to go. Even cockroaches can identify objects and learn to go around them. We are still at the stage where Mother Nature's lowliest creatures can outsmart our most intelligent robots.

EXPERT SYSTEMS

Today, many people have simple robots in their homes that can vacuum their carpets. There are also robot security guards patrolling buildings at night, robot guides, and robot factory workers. In 2006, it was estimated that there were 950,000 industrial robots and 3,540,000 service robots working in homes and buildings. In the coming decades, the field of robotics may blossom in several directions, but these robots won't look like the ones of science fiction.

The greatest impact may be felt in what are called expert systems, software programs that have encoded in them the wisdom and experience of a human being. As we saw in the last chapter, one day, we may talk to the Internet on our wall screens and converse with the friendly face of a robodoc or robolawyer.

This field is called heuristics; it works by following a formal, rule-based system. When we need to plan a vacation, we will talk to the face in the wall screen and give it our preferences for the vacation: how long, where to, which hotels, what price range. The expert system will already know our preferences from past experiences, and will then contact hotels, airlines, and so on, and give us the best options. But instead of talking to it in a chatty, gossipy way, we will have to use a fairly formal, stylized language that it understands. Such a system can rapidly perform any number of useful chores. You just give it orders, and it makes a reservation at a restaurant, checks for the location of stores, orders groceries and takeout, reserves a plane ticket, and so on.
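As a rough sketch of what such a rule-based system looks like under the hood, consider the toy below. The rules, preferences, and trip data are all invented for illustration: the "expert system" is essentially a list of if-then rules checked against your stated preferences.

```python
# A toy rule-based planner: each rule is a simple if-then check applied to
# every candidate option. The data here is invented, not from any real system.
def plan_trip(preferences, options):
    """Return the options that satisfy every stated preference rule."""
    rules = [
        lambda o, p: o["nights"] <= p["max_nights"],
        lambda o, p: o["price"] <= p["budget"],
        lambda o, p: p["region"] in o["regions"],
    ]
    return [o for o in options if all(rule(o, preferences) for rule in rules)]

preferences = {"max_nights": 7, "budget": 2000, "region": "Europe"}
options = [
    {"name": "Rome package",  "nights": 5, "price": 1800, "regions": ["Europe"]},
    {"name": "Tokyo package", "nights": 6, "price": 2400, "regions": ["Asia"]},
]
print(plan_trip(preferences, options))  # only the Rome package survives the rules
```

A real expert system would draw its rules from human specialists and query live airline and hotel data, but the formal, stylized structure is the same, which is why you must speak to it in a stylized way rather than a chatty one.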

It is precisely because of the advances in heuristics over the past decades that we now have some of the rather simple search engines of today. But they are still crude. It is obvious to everyone that you are dealing with a machine and not a human. In the future, however, robots will become so sophisticated that they will almost appear to be humanlike, operating seamlessly with nuance and sophistication.

Perhaps the most practical application will be in medical care. For example, at the present time if you feel sick, you may have to wait hours in an emergency room before you see a doctor. In the near future, you may simply go to your wall screen and talk to robodoc. You will be able to change the face, and even the personality, of the robodoc that you see with the push of a button. The friendly face you see in your wall screen will ask a simple set of questions: How do you feel? Where does it hurt? When did the pain start? How often does it hurt?

Each time, you will respond by choosing from a simple set of answers. You will answer not by typing on a keyboard but by speaking.

Each of your answers, in turn, will prompt the next set of questions. After a series of such questions, the robodoc will be able to give you a diagnosis based on the best experience of the world's doctors. Robodoc will also analyze the data from your bathroom, your clothes, and furniture, which have been continually monitoring your health via DNA chips. And it might ask you to examine your body with a portable MRI scanner, which is then analyzed by supercomputers. (Some primitive versions of these heuristic programs already exist, such as WebMD, but they lack the nuances and full power of heuristics.)

The majority of visits to the doctor's office can be eliminated in this way, greatly relieving the stress on our health care system. If the problem is serious, the robodoc will recommend that you go to a hospital, where human doctors can provide intensive care. But even there, you will see AI programs, in the form of robot nurses, like ASIMO. These robot nurses are not truly intelligent but can move from one hospital room to another, administer the proper medicines to patients, and attend to their other needs. They can move on rails in the floor, or move independently like ASIMO.

One robot nurse that already exists is the RP-6 mobile robot, which is being deployed in hospitals such as the UCLA Medical Center. It is basically a TV screen sitting on top of a mobile computer that moves on rollers. On the TV screen, you see the video face of a real physician who may be miles away. There is a camera on the robot that allows the doctor to see what the robot is looking at. There is also a microphone so that the doctor can speak to the patient. The doctor can remotely control the robot via a joystick, interact with patients, monitor drugs, etc. Since 5 million patients are admitted to intensive care units in the United States every year, while only 6,000 physicians are qualified to handle critically ill patients, robots such as this could help to alleviate the crisis in emergency care, with one doctor attending to many patients. In the future, robots like this may become more autonomous, able to navigate on their own and interact with patients.

j.a.pan is one of the world"s leaders in this technology. j.a.pan is spending so much money on robots to alleviate the coming crisis in medical care. In retrospect, it is not surprising that j.a.pan is one of the leading nations in robotics, for several reasons. First, in the Shinto religion, inanimate objects are believed to have spirits in them. Even mechanical ones. In the West, children may scream in terror at robots, especially after seeing so many movies about rampaging killing machines. But to j.a.panese children, robots are seen as kindred spirits, playful and helpful. In j.a.pan, it is not uncommon to see robot receptionists greet you when you enter department stores. In fact, 30 percent of all commercial robots in the world are in j.a.pan.

Second, Japan is facing a demographic nightmare. Japan has the fastest-aging population in the world. The birthrate has fallen to an astonishing 1.2 children per family, and immigration is negligible. Some demographers have stated that we are watching a train wreck in slow motion: one demographic train (an aging population and falling birthrate) will soon collide with another (a low immigration rate) in the coming years. (This same train wreck might eventually happen in Europe as well.) This will be felt most acutely in the medical field, where an ASIMO-like nurse may be quite useful. Robots like ASIMO would be ideal for hospital tasks, such as fetching medicines, administering drugs, and monitoring patients twenty-four hours a day.

MODULAR ROBOTS

By midcentury, our world may be full of robots, but we might not even notice them. That is because most robots probably won"t have human form. They might be hidden from view, disguised as snakes, insects, and spiders, performing unpleasant but crucial tasks. These will be modular robots that can change shape depending on the task.

I had a chance to meet one of the pioneers in modular robots, Wei-Min Shen of the University of Southern California. His idea is to create small cubical modules that you can interchange like Lego blocks and reassemble at will. He calls them polymorphic robots since they can change shape, geometry, and function. In his laboratory, I could instantly see the difference between his approach and that of Stanford and MIT. On the surface, those labs resembled a kid's dream playhouse, with walking, talking robots everywhere you looked. When I visited Stanford's and MIT's AI laboratories, I saw a wide variety of robotic "toys" that have chips in them and some intelligence. The workbenches were full of robot airplanes, helicopters, trucks, and insect-shaped robots with chips inside, all moving autonomously. Each robot is a self-contained unit.

Various types of robots: LAGR (top), STAIR (bottom left), and ASIMO (bottom right). In spite of vast increases in computer power, these robots have the intelligence of a cockroach. (photo credit 2.1)
