The new field of advanced materials science involves the study, manipulation, and fabrication of solid matter with highly sophisticated tools, almost on an atom-by-atom basis. It draws on many disciplines, including engineering, physics, chemistry, and biology. The new insights being developed into the ways that molecules control and direct basic functions in biology and chemistry, and into the atomic and subatomic interactions that form solid matter, are speeding up the emergence of what some experts are calling the molecular economy.

Significantly, the new molecules and materials created need not be evaluated through the traditional, laborious process of trial and error. Advanced supercomputers are now capable of simulating the way these novel creations interact with other molecules and materials, allowing the selection of only the ones that are most promising for experiments in the real world. Indeed, the new field known as computational science has now been recognized as a third basic form of knowledge creation-alongside inductive reasoning and deductive reasoning-and combines elements of the first two by simulating an artificial reality that functions as a much more concrete form of hypothesis and allows detailed experimentation to examine the new materials' properties and analyze how they interact with other molecules and materials.

The properties of matter at the nanometer scale (between one and 100 nanometers) often differ significantly from the properties of the same atoms and molecules when they are clustered in bulk. These differences have allowed technologists to use nanomaterials on the surfaces of common products in order to eliminate rust, enhance resistance to scratches and dents, and in clothes to enhance resistance to stains, wrinkles, and fire. The single most common application thus far is the use of nanoscale silver to destroy microbes-a use that is particularly important for doctors and hospitals guarding against infections.

The longer-term significance that attaches to the emergence of an entirely new group of basic materials with superior properties is reflected in the names historians give to the ages of technological achievement in human societies: the Stone Age, the Bronze Age, and the Iron Age. As was true of the historical stages of economic development that began with the long hunter-gatherer period, the first of these periods-the Stone Age-was by far the longest.

Archaeologists disagree on when and where the reliance on stone tools gave way to the first metallurgical technologies. The first smelting of copper is believed to have taken place in eastern Serbia approximately 7,000 years ago, though objects made of cast copper emerged in numerous locations in the same era.

The more sophisticated creation of bronze-which is much less brittle and much more useful for many purposes than copper-involves a process in which tin is added to molten copper, a technique that combines high temperatures and some pressurization. Bronze was first created 5,000 years ago in both Greece and China, and more than 1,000 years later in Britain.

Though the first iron artifacts, found in northern Turkey, date back 4,500 years, the Iron Age began between 3,000 and 3,200 years ago with the development of better furnaces that achieved higher temperatures capable of heating iron ore into a malleable state from which it could be made into tools and weapons. Iron, of course, is much harder and stronger than bronze. Steel, an alloy made from iron, and often other elements in smaller quantities, depending upon the properties desired, was not produced at scale until the middle of the nineteenth century.

The new age of materials created at the molecular level is leading to a historic transformation of the manufacturing process. Just as the Industrial Revolution was launched a quarter of a millennium ago by the marriage of coal-powered energy with machines in order to replace many forms of human labor, nanotechnology promises to launch what many are calling a Third Industrial Revolution based on molecular machines that can reassemble structures made from basic elements to create an entirely new category of products, including:

* Carbon nanotubes invested with the ability to store energy and manifest previously unimaginable properties;
* Ultrastrong carbon fibers that are already replacing steel in some niche applications; and
* Ceramic matrix nanocomposites that are expected to have wide applications in industry.

The emerging Nanotechnology Revolution, which is converging with the multiple revolutions in the life sciences, also has implications in a wide variety of other human endeavors. There are already more than 1,000 nanotechnology products available, most of them classified as incremental improvements in already known processes, mostly in the health and fitness category. The use of nanostructures for the enhancement of computer processing, the storage of memory, the identification of toxics in the environment, the filtration and desalination of water, and other purposes is still in development.

The reactivity of nanomaterials and their thermal, electrical, and optical properties are among the changes that could have significant commercial impact. For example, the development of graphene-a form of graphite only one atom thick-has created excitement about its unusual interaction with electrons, which opens a variety of useful applications.

Considerable research is under way on potential hazards of nanoparticles. Most experts now minimize the possibility of "self-replicating nanobots," which gave rise to serious concerns and much debate in the first years of the twenty-first century, but other risks-such as the accumulation of nanoparticles in human beings and the possibility of consequent cell damage-are taken more seriously. According to David Rejeski, director of the Science and Technology Innovation Program at the Woodrow Wilson International Center for Scholars, "We know very little about the health and environmental impacts [of nanomaterials] and virtually nothing about their synergistic impacts."

In a sense, nanoscience has been around at least since the work of Louis Pasteur, and certainly since the discovery of the double helix in 1953. The work of Richard Smalley on buckminsterfullerene molecules ("buckyballs") in 1985 triggered a renewed surge of interest in the application of nanotechnology to the development of new materials. Six years later, the first carbon nanotubes offered the promise of electrical conductivity exceeding that of copper and the possibility of creating fibers with 100 times the strength and one sixth the weight of steel.

The dividing line between nanotechnology and new materials sciences is partly an arbitrary one. What both have in common is the recent development of new, more powerful microscopes; new tools for guiding the manipulation of matter at nanoscales; new, more powerful supercomputer programs for modeling and studying new materials at the atomic level; and a continuing stream of new basic research breakthroughs on the specialized properties of nanoscale molecular creations, including quantum properties.

THE RISE OF 3D PRINTING.

Humankind"s new ability to manipulate atoms and molecules is also leading toward the disruptive revolution in manufacturing known as 3D printing. Also known as additive manufacturing, this new process builds objects from a three-dimensional digital file by laying down an ultrathin layer of whatever material or materials the object is to be made of, and then adds each additional ultrathin layer-one by one-until the object is formed in three-dimensional s.p.a.ce. More than one different kind of material can be used. Although this new technology is still early in its development period, the advantages it brings to manufacturing are difficult to overstate. Already, some of the results are startling.

Since 1913, when Henry Ford first used identical interchangeable parts fitted together on a moving assembly line to produce the Model T, manufacturing has been dominated by mass production. The efficiencies, speed, and cost savings of the process revolutionized industry and commerce. But many experts now predict that the rapid development of 3D printing will change manufacturing as profoundly as mass production did roughly a century ago.

The process has actually been used for several decades in a technique known as rapid prototyping-a specialized niche in which manufacturers could produce an initial model of what they would later produce en masse in more traditional processes. For example, the designs for new aircraft are often prototyped as 3D models for wind tunnel testing. This niche is itself being disrupted by the new 3D printers; one Colorado firm, LGM, which prototypes buildings for architects, has already made dramatic changes. The company's founder, Charles Overy, told The New York Times, "We used to take two months to build $100,000 models." Instead, he now builds $2,000 models and completes them overnight.

The emerging potential for using 3D printing is illuminating some of the inefficiencies in mass production: the stockpiling of components and parts, the large amount of working capital required for such stockpiling, the profligate waste of materials, and of course the expense of employing large numbers of people. Enthusiasts also contend that 3D printing often requires only 10 percent of the raw material that is used in the mass production process, not to mention a small fraction of the energy costs. It continues and accelerates a longer-term trend toward "dematerialization" of manufactured goods-a trend that has already kept the total tonnage of global goods constant over the past half century, even as their value has increased more than threefold.

In addition, the requirement for standardizing the size and shape of products made in mass production leads to a "one size fits all" approach that is unsatisfactory for many kinds of specialized products. Mass production also requires the centralization of manufacturing facilities and the consequent transportation costs for delivery of parts to the factory and finished products to distant markets. By contrast, 3D printing offers the promise of transmitting the digital information that embodies the design and blueprint for each product to widely dispersed 3D printers located in all relevant markets.

Neil Hopkinson, senior lecturer in the Additive Manufacturing Research Group at Loughborough University, said, "It could make offshore manufacturing halfway round the world far less cost effective than doing it at home, if users can get the part they need printed off just round the corner at a 3D print shop on the high street. Rather than stockpile spare parts and components in locations all over the world, the designs could be costlessly stored in virtual computer warehouses waiting to be printed locally when required."

At its current stage of development, 3D printing focuses on relatively small products, but as the technique is steadily improved, specialized 3D printers for larger parts and products will soon be available. One company based in Los Angeles, Contour Crafting, has already built a huge 3D printer that travels on a tractor-trailer to a construction site and prints an entire house in only twenty hours (doors and windows not included)! In addition, while the 3D printers now available have production runs ranging from a single item up to, in some cases, 1,000 items, experts predict that within the next few years these machines will be capable of turning out hundreds of thousands of identical parts and products.

There are many questions yet to be answered about the treatment of intellectual property in a 3D printing era. The three-dimensional design will make up the lion's share of the value in a 3D printing economy, but copyright and patent law were developed without the anticipation of this technology and will have to be modified to account for the new emerging reality. In general, "useful" physical objects often do not have protection against replication under copyright laws.

Although there are skeptics who question how fast this new technology will mature, engineers and technologists in the United States, China, and Europe are working hard to exploit its potential. Its early use in printing prosthetics and other devices with medical applications is gaining momentum rapidly. Inexpensive 3D printers have already found their way into the hobbyist market at prices as low as $1,000. Carl Bass, the CEO of Autodesk, which has invested in 3D printing, said in 2012, "Some people see it as a niche market. They claim that it can't possibly scale. But this is a trend, not a fad. Something seismic is going on." Some advocates of more widespread gun ownership are promoting the 3D printing of guns as a way to circumvent regulations on gun sales. Opponents have expressed concern that any such guns used in crimes could be easily melted down to avoid any effort by law enforcement authorities to use the guns as evidence.

THE WAVE OF automation that is contributing to the outsourcing and robosourcing of jobs from developed countries to emerging and developing markets will soon begin to displace many of the jobs so recently created in those same low-wage countries. 3D printing could accelerate this process, and eventually could also move manufacturing back into developed countries. Many U.S. companies have already reported that various forms of automation have enabled them to bring back at least some of the jobs they had originally outsourced to low-wage countries.

CAPITALISM IN CRISIS.

The emergence of Earth Inc. and its disruption of all three factors of production-labor, capital, and natural resources-has contributed to what many have referred to as a crisis in capitalism. A 2012 Bloomberg Global Poll of business leaders around the world found that 70 percent believe capitalism is "in trouble." Almost one third said it needs a "radical reworking of the rules and regulations"-though U.S. participants were less willing than their global counterparts to endorse either conclusion.

The inherent advantages of capitalism over any other system for organizing economic activity are well understood. It is far more efficient in allocating resources and matching supply to demand; it is far more effective at creating wealth; and it is far more congruent with higher levels of freedom. Most fundamentally, capitalism unlocks a larger fraction of the human potential with ubiquitous organic incentives that reward effort and innovation. The world's experimentation with other systems-including the disastrous experiences with communism and fascism in the twentieth century-led to a nearly unanimous consensus at the beginning of the twenty-first century that democratic capitalism was the ideology of choice throughout the world.

And yet publics around the world have been shaken by a series of significant market dislocations over the last two decades, culminating in the Great Recession of 2008 and its lingering aftermath. In addition, the growing inequality in most large economies in the world and the growing concentration of wealth at the top of the income ladder have caused a crisis of confidence in the system of market capitalism as it is presently functioning. The persistent high levels of unemployment and underemployment in industrial countries, added to unusually high levels of public and private indebtedness, have also diminished confidence that the economic policy toolkit now being used can produce a recovery that is strong enough to restore adequate vitality.

As Nobel Prize-winning economist Joseph Stiglitz put it in 2012: "It is no accident that the periods in which the broadest cross sections of Americans have reported higher net incomes-when inequality has been reduced, partly as a result of progressive taxation-have been the periods in which the U.S. economy has grown the fastest. It is likewise no accident that the current recession, like the Great Depression, was preceded by large increases in inequality. When too much money is concentrated at the top of society, spending by the average American is necessarily reduced-or at least it will be in the absence of an artificial prop. Moving money from the bottom to the top lowers consumption because higher-income individuals consume, as a fraction of their income, less than lower-income individuals do."

While developing and emerging economies are seeing increases in productivity, jobs, incomes, and output, inequality within these countries is also increasing. And of course, many of them still have significant numbers of people experiencing extreme poverty and deprivation. More than one billion people in the world still live on less than $2 a day, and almost 900 million of them still live in "extreme poverty"-defined as having an income less than $1.25 per day.

Most important of all, among the failures in the way the global market system is operating today is its almost complete refusal to include any recognition of major externalities, starting with its failure to take into account the cost and consequences of the 90 million tons of global warming pollution spewed every twenty-four hours into the planet"s atmosphere. The problem of externalities in market theory is well known but has never been so acute as now. Positive externalities are also routinely ignored, leading to chronic underinvestment in education, health care, and other public goods.

In many countries, including the United States, the growing concentration of wealth in the hands of the top one percent has also led to distortions in the political system that now limit the ability of governments to consider policy changes that might benefit the many at the (at least short-term) expense of the few. Governments have been effectively paralyzed and incapable of taking needed action. This too has undermined public confidence in the way market capitalism is currently operating.

With the tightly coupled and increasingly massive flows of capital through the global economy, all governments now feel that they are hostage to the perceptions within the global market for capital. There are numerous examples-Greece, Ireland, Italy, Portugal, and Spain, to name a few-of countries' confronting policy choices that appear to be mandated by the perceptions of the global marketplace, not by the democratically expressed will of the citizens in those countries. Many have come to the conclusion that the only policies that will prove to be effective in restoring human influence over the shape of our economic future will be ones that address the new global economic reality on a global basis.

SUSTAINABLE CAPITALISM.

Along with my partner and cofounder of Generation Investment Management, David Blood, I have advocated a set of structural remedies that would promote what we call Sustainable Capitalism. One of the best-known problems is the dominance of short-term perspectives and the obsession with short-term profits, often at the expense of the buildup of long-term value. Forty years ago, the average holding period for stocks in the United States was almost seven years. That made sense because roughly three quarters of the real value in the average business builds up over a business cycle and a half, roughly seven years. Today, however, the average holding period for stocks is less than seven months.

There are many reasons for the increasing reliance on short-term thinking by investors. These pressures are accentuated by the larger trends in the transformed and now interconnected global economy. As one analyst noted in 2012, "our banks, hedge funds and venture capitalists are geared toward investing in financial instruments and software companies. In such endeavors, even modest investments can yield extraordinarily quick and large returns. Financing brick-and-mortar factories, by contrast, is expensive and painstaking and offers far less potential for speedy returns."

This short-term perspective on the part of investors puts pressure on CEOs to adopt similarly short-term perspectives. For example, a premier business research firm in the United States (BNA) conducted a survey of CEOs and CFOs a few years ago in which it asked, among other things, a hypothetical question: You have the opportunity to make an investment in your company that will make the firm more profitable and more sustainable, but if you do so, you will slightly miss your next quarterly earnings report; under these circumstances, will you make the investment? Eighty percent said no.

A second well-known problem in the way capitalism currently operates is the widespread misalignment of incentives. The compensation of most investment managers-the people who make most of the daily decisions on the investment of capital-is calculated on a quarterly, or at most annual, basis. Similarly, many executives running companies are compensated in ways that reward short-term results. Instead, compensation should be aligned temporally with the period over which the maximum value of firms can be increased, and should be aligned with the fundamental drivers of long-term value.

In addition, companies should be encouraged to abandon the default practice of providing quarterly earnings guidance. These short-term metrics capture so much attention that they end up heavily penalizing firms that try to build sustainable value, and fail to take into account the usefulness of investments that pay for themselves handsomely over longer periods of time.

THE CHANGING NATURE OF WORK.

One thing is certain: the transformation of the global economy and the emergence of Earth Inc. will require an entirely new approach to policy in order to reclaim humanity"s role in shaping our own future. What we are now going through bears little relation to the problems inherent in the business cycle or the kinds of temporary market disruptions to which global business has become accustomed. The changes brought about by the emergence of Earth Inc. are truly global, truly historic, and are still accelerating.

Although the current changes are unprecedented in speed and scale, the pattern of productive activity for the majority of human beings has of course undergone several massive changes throughout the span of human history. Most notably, the Agricultural and Industrial revolutions both led to dramatic changes in the way the majority of people in the world spent their days.

The first known man-made tools, including spear points and axes, were associated with a hunting and gathering pattern that lasted, according to anthropologists, almost 200 millennia. The displacement of that dominant pattern by a new one based on agriculture (beginning not long after the last Ice Age receded) took less than eight millennia, while the Industrial Revolution required less than 150 years to reduce the percentage of agricultural jobs in the United States from 90 to 2 percent of the workforce. Even when societies still based on subsistence agriculture are included in the global calculation, less than half of all jobs worldwide are now on farms.

The plow and the steam engine-along with the complex universe of tools and technologies that accompanied the Agricultural and Industrial revolutions respectively-undermined the value of skills and expertise that had long been relied upon to connect the meaning of people's lives to the provision of subsistence and material gains for themselves, their families, and communities. Nevertheless, in both cases, the disappearance of old patterns was accompanied by the emergence of new ones that, on balance, made life easier and retained the link between productive activity and the meeting of real needs.

To be sure, the transformation of work opportunities required large changes in social patterns, including mass internal migrations from rural areas to cities, and the geographic separation of homes and workplaces, to mention only two of the most prominent disruptions. But the net result was still consistent with the hopeful narrative of progress and was accompanied by economic growth that increased net incomes dramatically and sharply reduced the amount of work necessary to meet basic human needs: food, clothing, shelter, and the like. In both cases, formerly common pursuits became obsolete while new ones emerged that called for new skills and a reconception of what it meant to be productive.

Both of these massive transformations occurred over long periods of time covering multiple generations. In both revolutions, new technologies opened up new opportunities for reorganizing the human enterprise into a new dominant pattern that was in each case disruptive and, for many, disorienting-but produced massive increases in productivity, large increases in the number of jobs, higher average incomes, less poverty, and historic improvements in the quality of life for most people.

Consider again the larger pattern traced in the history of these three epochs: the first lasted 200,000 years, the next lasted 8,000 years, and the Industrial Revolution took only 150 years. Each of these historic changes in the nature of the human experience was more significant than its predecessor and occurred over a radically shorter time span. All were connected to technological innovations.

Taken together, they trace the long gestation, infancy, and slow development of a technology revolution that eventually grew to play a central role in the advance of human civilization-then gradually but steadily gained speed and momentum in each of the last four centuries, jolted into a higher gear, and began to accelerate at an ever faster rate until it seemed to take on a life of its own. It is now carrying us with it at a speed beyond our imagining toward ever newer technologically shaped realities that often appear, in the words of Arthur C. Clarke, "indistinguishable from magic."

Because the change under way is one not only of degree but of kind, we are largely unprepared for what's happening. The structure of our brains is not very different from that of our ancestors 200,000 years ago. Because of the radical changes induced by technology in the way we live our lives, however, we are forced to consider making adaptations in the design of our civilization more rapidly than seems possible or even plausible.

We have difficulty even perceiving and thinking clearly about the pace of change with which we are now confronted. Most of us struggle with the practical meaning of exponential change-that is, change that is not only increasing but is increasing at a steadily faster pace. Consider the basic shape of all exponential curves. The pattern of change measured by such curves is slow at first, and then ascends at an ever increasing rate as the angle of ascent steepens. The steep phase of the curve drives changes at a far more rapid rate than the flat part of the curve-and it is this phase that has consequences not only of degree but of kind. As Moore's Law would lead us to expect, the fourth-generation iPad now has more computing power than the most powerful supercomputer in the world thirty years ago, the Cray-2.
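The arithmetic behind that comparison is worth making concrete. Here is a rough back-of-the-envelope sketch; the two-year doubling period is an assumption commonly associated with Moore's Law, not a figure from the text:

\[
\frac{30 \text{ years}}{2 \text{ years per doubling}} = 15 \text{ doublings}, \qquad 2^{15} = 32{,}768 \approx 33{,}000\times
\]

Fifteen doublings compound into a growth factor in the tens of thousands, which is how a handheld tablet can plausibly outmatch a room-sized supercomputer from three decades earlier.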

The implications of this new period of hyper-change are not just mathematical or theoretical. They are transforming the fundamental link between how we play a productive role in life and how we meet our needs. What people do-their work, their careers, their opportunities to exchange productive activity for income to meet essential human needs and provide a sense of well-being, security, honor, dignity, and a sense of belonging as a member of the community: this basic exchange at the center of our lives is now changing on a global scale and at a speed with no precedent in human history.

In modern societies we have long used money and other tangible symbols of credit and debit as the principal means of measuring and keeping track of this ongoing series of exchanges. But even in older forms of society where money was not the medium of exchange, productive work also was connected to the ability to meet one's needs, with a tacit recognition by the community of those who contributed to the needs of the group, and whose needs were then met partly by others in the group. It is that basic connection at the heart of human societies that is beginning to be radically transformed.

Many economists comfort themselves with the idea that this is actually an old and continuing story that they know and understand well-a story that has generated unnecessary alarm since Ned Ludd, a weaver, smashed the new knitting frames invented in the late eighteenth century, which he realized were making the jobs of weavers obsolete. The "Luddite fallacy"-a phrase coined to describe the mistaken belief that new technologies result in a net reduction of good jobs-proved to be exactly that, a fallacy, on a large scale when the mechanization of farming eliminated all but a tiny fraction of farm-related jobs, and yet the new jobs that emerged in factories not only outnumbered those lost on farms but produced higher incomes, even as farms became far more productive and food prices sharply declined. Until recently, the large-scale automation of industry seemed to be repeating the same pattern: routine, repetitive, and often arduous jobs were eliminated, while better jobs with higher wages more than replaced them.

Yet what we believe we learned during the early stages of this technology revolution may no longer be relevant to the new hyper-accelerated pace of change. The introduction of networked machine intelligence-and now artificial intelligence-may soon put a much higher percentage of employment opportunities at risk in ever larger sectors of the global economy. In order to adapt to this new emergent reality we may soon have to reimagine the way we as human beings exchange our productive potential for the income necessary to meet our needs.

Many scholars who have specialized in the study of technology's interaction with the pattern of society, including Marshall McLuhan, have described important new technologies as "extensions" of basic human capacities. The automobile, in the terms of this metaphor, is an extension of our capacity for locomotion. The telegraph, radio, and television are, in the same way, described as extensions of our ability to speak with one another over a greater distance. Both the shovel and the steam shovel are extensions of our hands and our ability to grasp physical objects. New technologies such as these made some jobs obsolete, but on balance created more new ones-often because the new technologically enhanced capacities had to be operated or used by people who could think clearly enough to be trained to use them effectively and safely.

In this context, the emergence of new and powerful forms of artificial intelligence represents not just the extension of yet another human capacity, but an extension of the dominant and uniquely human capacity to think. Though science has established that we are not the only sentient living creatures, it is nevertheless abundantly obvious that we as a species have become dominant on Earth because of our capacity to make mental models of the world around us and manipulate those models through thought to gain the power to transform our surroundings and exert dominion over the planet. The technological extension of the ability to think is therefore different in a fundamental way from any other technological extension of human capacity.

As artificial intelligence matures and is connected with all the other technological extensions of human capacity-grasping and manipulating physical objects, recombining them into new forms, carrying them over distance, communicating with one another utilizing flows of information of far greater volume and far greater speed than any humans are capable of achieving, making their own abstract models of reality, and learning in ways that are sometimes superior to the human capacity to learn-the impact of the AI revolution will be far greater than that of any previous technological revolution.

One of the impacts will be to further accelerate the decoupling of gains in productivity from gains in the standard of living for the middle class. In the past, improvements in economic efficiency have generally led to improvements in wages for the majority, but when the substitution of technology capital for labor eliminates very large numbers of jobs, a much larger proportion of the gains goes to those who provide the capital. The fundamental relationship between technology and employment is being transformed.

This trend is now nearing a threshold beyond which so many jobs are lost that the level of consumer demand falls below the level necessary to sustain healthy economic growth. In a new study of the Great Depression, Joseph Stiglitz has argued that the massive loss of jobs in agriculture that accompanied the mechanization of farming led to a similar contraction of demand that was actually a much larger factor in causing the Depression than has been previously recognized-and that we may be poised for another wrenching transition with the present ongoing loss of manufacturing jobs.

New jobs can and must be created, and one of the obvious targets for new employment is the provision of public goods in order to replace the income lost by those whose employment is being robosourced and outsourced. But elites who have benefited from the emergence of Earth Inc. have thus far effectively used their accumulated wealth and political influence to block any shift of jobs to the public sector. The good news is that even though the Internet has facilitated both outsourcing and robosourcing, it is also providing a new means to build new forms of political influence not controlled by elites. This is a major focus of the next chapter.

* This term was first coined by Buckminster Fuller in 1973, but he used it to convey a completely different meaning.


2.

THE GLOBAL MIND.

JUST AS THE SIMULTANEOUS OUTSOURCING AND ROBOSOURCING OF PRODUCTIVE activity has led to the emergence of Earth Inc., the simultaneous deployment of the Internet and ubiquitous computing power has created a planet-wide extension of the human nervous system that transmits information, thoughts, and feelings to and from billions of people at the speed of light.

We are connecting to vast global data networks-and to one another-through email, text messaging, social networks, multiplayer games, and other digital forms of communication at an unprecedented pace. This revolutionary and still accelerating shift in global communication is driving a tsunami of change forcing disruptive-and creative-modifications in activities ranging from art to science and from collective political decision making to building businesses.

Some familiar businesses are struggling to survive: newspapers, travel agencies, bookstores, music, video rental, and photography stores are among the most frequently recognized early examples of businesses confronted with a technologically driven mandate to either radically change or disappear. Some large institutions are also struggling: national postal services are hemorrhaging customers as digital communication displaces letter writing, leaving the venerable post office to serve primarily as a distribution service for advertisements and junk mail.

At the same time, we are witnessing the explosive growth of new business models, social organizations, and patterns of behavior that would have been unimaginable before the Internet and computing: from Facebook and Twitter to Amazon and iTunes, from eBay and Google to Baidu, Yandex.ru, and Globo.com, to a dozen other businesses that have started since you began reading this sentence-all are phenomena driven by the connection of two billion people (thus far) to the Internet. In addition to people, the number of digital devices connected to other devices and machines-with no human being involved-already exceeds the population of the Earth. Studies project that by 2020, more than 50 billion devices will be connected to the Internet and exchanging information on a continuous basis. When less sophisticated devices like radio-frequency identification (RFID) tags capable of transmitting information wirelessly or transferring data to devices that read them are included, the number of "connected things" is already much larger. (Some school systems, incidentally, have begun to require students to wear identification badges equipped with RFID tags in an effort to combat truancy, generating protests from many students.)

TECHNOLOGY AND THE "WORLD BRAIN"

Writers have used the human nervous system as a metaphor for electronic communication since the invention of the telegraph. In 1851, only seven years after Samuel Morse sent the message "What hath God wrought?" Nathaniel Hawthorne wrote: "By means of electricity, the world of matter has become a great nerve vibrating thousands of miles in a breathless point of time. The round globe is a vast brain, instinct with intelligence." Less than a century later, H. G. Wells modified Hawthorne's metaphor when he offered a proposal to develop a "world brain"-which he described as a commonwealth of all the world's information, accessible to all the world's people as "a sort of mental clearinghouse for the mind: a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared." In the way Wells used the phrase "world brain," what began as a metaphor is now a reality. You can look it up right now on Wikipedia or search the World Wide Web on Google for some of the estimated one trillion web pages.

Since the nervous system connects to the human brain and the brain gives rise to the mind, it was understandable that one of the twentieth century's greatest theologians, Teilhard de Chardin, would modify Hawthorne's metaphor yet again. In the 1950s, he envisioned the "planetization" of consciousness within a technologically enabled network of human thoughts that he termed the "Global Mind." And while the current reality may not yet match Teilhard's expansive meaning when he used that provocative image, some technologists believe that what is emerging may nevertheless mark the beginning of an entirely new era. To paraphrase Descartes, "It thinks; therefore it is."*

The supercomputers and software in use have all been designed by human beings, but as Marshall McLuhan once said, "We shape our tools, and thereafter, our tools shape us." Since the global Internet and the billions of intelligent devices and machines connected to it-the Global Mind-represent what is arguably far and away the most powerful tool that human beings have ever used, it should not be surprising that it is beginning to reshape the way we think in ways both trivial and profound-but sweeping and ubiquitous.

In the same way that multinational corporations have become far more efficient and productive by outsourcing work to other countries and robosourcing work to intelligent, interconnected machines, we as individuals are becoming far more efficient and productive by instantly connecting our thoughts to computers, servers, and databases all over the world. Just as radical changes in the global economy have been driven by a positive feedback loop between outsourcing and robosourcing, the spread of computing power and the increasing number of people connected to the Internet are mutually reinforcing trends. Just as Earth Inc. is changing the role of human beings in the production process, the Global Mind is changing our relationship to the world of information.

The change being driven by the wholesale adoption of the Internet as the principal means of information exchange is simultaneously disruptive and creative. The futurist Kevin Kelly says that our new technological world-infused with intelligence-more and more resembles "a very complex organism that often follows its own urges." In this case, the large complex system includes not only the Internet and the computers, but also us.

Consider the impact on conversations. Many of us now routinely reach for smartphones to find the answers to questions that arise at the dinner table by searching the Internet with our fingertips. Indeed, many now spend so much time on their smartphones and other mobile Internet-connected devices that oral conversation sometimes almost ceases. As a distinguished philosopher of the Internet, Sherry Turkle, recently wrote, we are spending more and more time "alone together."

The deeply engaging and immersive nature of online technologies has led many to ask whether their use might be addictive for some people. The Diagnostic and Statistical Manual of Mental Disorders (DSM), when it is updated in May 2013, will include "Internet Use Disorder" in its appendix for the first time, as a category targeted for further study. There are an estimated 500 million people in the world now playing online games at least one hour per day. In the United States, the average person under the age of twenty-one now spends almost as much time playing online games as they spend in classrooms from the sixth through twelfth grades. And it's not just young people: the average online social games player is a woman in her mid-forties. An estimated 55 percent of those playing social games in the U.S.-and 60 percent in the U.K.-are women. (Worldwide, women also generate 60 percent of the comments and post 70 percent of the pictures on Facebook.)

OF MEMORY, "MARKS," AND THE GUTENBERG EFFECT

Although these changes in behavior may seem trivial, the larger trend they illustrate is anything but. One of the most interesting debates among experts who study the relationship between people and the Internet is over how we may be adapting the internal organization of our brains-and the nature of consciousness-to the amount of time we are spending online.

Human memory has always been affected by each new advance in communications technology. Psychological studies have shown that when people are asked to remember a list of facts, those told in advance that the facts will later be retrievable on the Internet are not able to remember the list as well as a control group not informed that the facts could be found online. Similar studies have shown that regular users of GPS devices begin to lose some of their innate sense of direction.

The implication is that many of us use the Internet-and the devices, programs, and databases connected to it-as an extension of our brains. This is not a metaphor; the studies indicate that it is a literal reallocation of mental energy. In a way, it makes sense to conserve our brain capacity by storing only the meager data that will allow us to retrieve facts from an external storage device. Or at least Albert Einstein thought so, once remarking: "Never memorize what you can look up in books."

For half a century neuroscientists have known that specific neuronal pathways grow and proliferate when used, while the disuse of neuron "trees" leads to their shrinkage and gradual loss of efficacy. Even before those discoveries, McLuhan described the process metaphorically, writing that when we adapt to a new tool that extends a function previously performed by the mind alone, we gradually lose touch with our former capacity because a "built-in numbing apparatus" subtly anesthetizes us to accommodate the attachment of a mental prosthetic connecting our brains seamlessly to the enhanced capacity inherent in the new tool.

In Plato"s dialogues, when the Egyptian G.o.d Theuth tells one of the kings of Egypt, Thamus, that the new communications technology of the age-writing-would allow people to remember much more than previously, the king disagreed, saying, "It will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

So this dynamic is hardly new. What is profoundly different about the combination of Internet access and mobile personal computing devices is that the instantaneous connection between an individual"s brain and the digital universe is so easy that a habitual reliance on external memory (or "exomemory") can become an extremely common behavior. The more common this behavior becomes, the greater one comes to rely on exomemory-and the less one relies on memories stored in the brain itself. What becomes more important instead are the "external marks" referred to by Thamus 2,400 years ago. Indeed, one of the new measures of practical intelligence in the twenty-first century is the ease with which someone can quickly locate relevant information on the Internet.

Human consciousness has always been shaped by external creations. What makes human beings unique among, and dominant over, life-forms on Earth is our capacity for complex and abstract thought. Since the emergence of the neocortex in roughly its modern form around 200,000 years ago, however, the trajectory of human dominion over the Earth has been defined less by further developments in human physical evolution and more by the evolution of our relationship to the tools we have used to augment our leverage over reality.

Scientists disagree over whether the use of complex speech by humans emerged rather suddenly with a genetic mutation or whether it developed more gradually. But whatever its origin, complex speech radically changed the ability of humans to use information in gaining mastery over their circumstances by enabling us for the first time to communicate more intricate thoughts from one person to others. It also arguably represented the first example of the storing of information outside the human brain. And for most of human history, the spoken word was the principal "information technology" used in human societies.

The long hunter-gatherer period is associated with oral communication. The first use of written language is associated with the early stages of the Agricultural Revolution. The progressive development and use of more sophisticated tools for written language-from stone tablets to papyrus to vellum to paper, from pictograms to hieroglyphics to phonetic alphabets-is associated with the emergence of complex civilizations in Mesopotamia, Egypt, China and India, the Mediterranean, and Central America.

The perfection by the ancient Greeks of the alphabet first devised by the Phoenicians led to a new way of thinking that explains the sudden explosion in Athens during the fifth and fourth centuries BCE of philosophical discourse, dramatic theater, and the emergence of sophisticated concepts like democracy. Compared to hieroglyphics, pictographs, and cuneiform, the abstract shapes that made up the Greek alphabet-like those that make up all modern Western alphabets-have no more inherent meaning in themselves than the ones and zeros of digital code. But when they are arranged and rearranged in different combinations, they can be assigned gestalt meanings. The internal organization of the brain necessary to adapt to this new communications tool has been associated with the distinctive difference historians find in the civilization of ancient Greece compared to all of its predecessors.

The use of this new form of written communication led to an increased ability to store the collective wisdom of prior generations in a form that was external to the brain but nonetheless accessible. Later advances-particularly the introduction of the printing press in the fourteenth century (in Asia) and the fifteenth century (in Europe)-were also associated with a further expansion of the amount of knowledge stored externally and a further increase in the ease with which a much larger percentage of the population could gain access to it. With the introduction of print, the exponential curve that measures the complexity of human civilization suddenly bent upward at a sharply steeper angle. Our societies changed; our culture changed; our commerce changed; our politics changed.

Prior to the emergence of what McLuhan described as the Gutenberg Galaxy, most Europeans were illiterate. Their relative powerlessness was driven by their ignorance. Most libraries consisted of a few dozen hand-copied books, sometimes chained to the desks, written in a language that for the most part only the monks could understand. Access to the knowledge contained in these libraries was effectively restricted to the ruling elites in the feudal system, which wielded power in league with the medieval church, often by force of arms. The ability conferred by the printing press to capture, replicate, and distribute en masse the collected wisdom of preceding ages touched off the plethora of advances in information sharing that led to the modern world.

Less than two generations after Gutenberg's press came the Voyages of Discovery. When Columbus returned from the Bahamas, eleven print editions of the account of his journey captivated Europe. Within a quarter century sailing ships had circumnavigated the globe, bringing artifacts and knowledge from North, South, and Central America, Asia, and previously unknown parts of Africa.

In that same quarter century, the mass distribution of the Christian Bible in German and then other popular languages led to the Protestant Reformation (which was also fueled by Martin Luther's moral outrage over the print-empowered bubble in the market for indulgences, including the exciting new derivatives product: indulgences for sins yet to be committed). Luther's Ninety-Five Theses, nailed to the door of the church in Wittenberg in 1517, were written in Latin, but thousands of copies distributed to the public were printed in German. Within a decade, more than six million copies of various Reformation pamphlets had been printed, more than a quarter of them written by Luther himself.

The proliferation of texts in languages spoken by the average person triggered a series of mass adaptations to the new flow of information, setting off a wave of literacy that began in Northern Europe and moved southward. In France, as the wave began to crest, the printing press was denounced as "the work of the Devil." But as popular appetites grew for the seemingly limitless information that could be conveyed in the printed word, the ancient wisdom of the Greeks and Romans became accessible. The resulting explosion of thought and communication stimulated the emergence of a new way of thinking about the legacy of the past and the possibilities of the future.

The mass distribution of knowledge about the world of the present began to shake the foundations of the feudal order. The modern world that is now being transformed by kind rather than degree rose out of the ruins of the civilization that we might say was creatively destroyed by the printing press. The Scientific Revolution began less than a hundred years after Gutenberg's Bible, with the publication of Nicolaus Copernicus's On the Revolutions of the Heavenly Spheres (a copy of which he received fresh from the printer on his deathbed). Less than a century later Galileo confirmed heliocentrism. A few years after that came Descartes's "Clockwork Universe." And the race was on.

Challenges to the primacy of the medieval church and the feudal lords became challenges to the absolute rule of monarchs. Merchants and farmers began to ask why they could not exercise some form of self-determination based on the knowledge now available to them. A virtual "public square" emerged, within which ideas were exchanged by individuals. The Agora of ancient Athens and the Forum of the Roman Republic were physical places where the exchange of ideas took place, but the larger virtual forum created by the printing press mimicked important features of its predecessors in the ancient world.
