

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson's contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.
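To make the chart's definition concrete (this is the standard textbook formulation, not a formula given in the article), labor productivity is simply output per hour worked, and the "great decoupling" is the claim that the numerator has kept growing since 2000 while employment has not:

\[
\text{Productivity}_t = \frac{Y_t}{H_t}
\]

where \(Y_t\) is the real value of output produced in year \(t\) and \(H_t\) is total hours worked. On the chart, output per hour keeps rising after 2000 while total employment stays roughly flat, so the two curves separate.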

It's a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. "It's the great paradox of our era," he says. "Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren't keeping up."

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

McAfee, associate director of the MIT Center for Digital Business at the Sloan School of Management, speaks rapidly and with a certain awe as he describes advances such as Google’s driverless car. Still, despite his obvious enthusiasm for the technologies, he doesn’t see the recently vanished jobs coming back. The pressure on employment and the resulting inequality will only get worse, he suggests, as digital technologies—fueled with “enough computing power, data, and geeks”—continue their exponential advances over the next several decades. “I would like to be wrong,” he says, “but when all these science-fiction technologies are deployed, what will we need all the people for?”

New Economy?

But are these new technologies really responsible for a decade of lackluster job growth? Many labor economists say the data are, at best, far from conclusive. Several other plausible explanations, including events related to global trade and the financial crises of the early and late 2000s, could account for the relative slowness of job creation since the turn of the century. “No one really knows,” says Richard Freeman, a labor economist at Harvard University. That’s because it’s very difficult to “extricate” the effects of technology from other macroeconomic effects, he says. But he’s skeptical that technology would change a wide range of business sectors fast enough to explain recent job numbers.


David Autor, an economist at MIT who has extensively studied the connections between jobs and technology, also doubts that technology could account for such an abrupt change in total employment. “There was a great sag in employment beginning in 2000. Something did change,” he says. “But no one knows the cause.” Moreover, he doubts that productivity has, in fact, risen robustly in the United States in the past decade (economists can disagree about that statistic because there are different ways of measuring and weighing economic inputs and outputs). If he’s right, it raises the possibility that poor job growth could be simply a result of a sluggish economy. The sudden slowdown in job creation “is a big puzzle,” he says, “but there’s not a lot of evidence it’s linked to computers.”

To be sure, Autor says, computer technologies are changing the types of jobs available, and those changes “are not always for the good.” At least since the 1980s, he says, computers have increasingly taken over such tasks as bookkeeping, clerical work, and repetitive production jobs in manufacturing—all of which typically provided middle-class pay. At the same time, higher-paying jobs requiring creativity and problem-solving skills, often aided by computers, have proliferated. So have low-skill jobs: demand has increased for restaurant workers, janitors, home health aides, and others doing service work that is nearly impossible to automate. The result, says Autor, has been a “polarization” of the workforce and a “hollowing out” of the middle class—something that has been happening in numerous industrialized countries for the last several decades. But “that is very different from saying technology is affecting the total number of jobs,” he adds. “Jobs can change a lot without there being huge changes in employment rates.”

What’s more, even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment.

At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s.

While such changes can be painful for workers whose skills no longer match the needs of employers, Lawrence Katz, a Harvard economist, says that no historical pattern shows these shifts leading to a net decrease in jobs over an extended period. Katz has done extensive research on how technological advances have affected jobs over the last few centuries—describing, for example, how highly skilled artisans in the mid-19th century were displaced by lower-skilled workers in factories. While it can take decades for workers to acquire the expertise needed for new types of employment, he says, “we never have run out of jobs. There is no long-term trend of eliminating work for people. Over the long term, employment rates are fairly stable. People have always been able to create new jobs. People come up with new things to do.”

Still, Katz doesn’t dismiss the notion that there is something different about today’s digital technologies—something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is “genuinely a question,” he says. “If technology disrupts enough, who knows what will happen?”

Dr. Watson

To get some insight into Katz’s question, it is worth looking at how today’s most advanced technologies are being deployed in industry. Though these technologies have undoubtedly taken over some human jobs, finding evidence of workers being displaced by machines on a large scale is not all that easy. One reason it is difficult to pinpoint the net impact on jobs is that automation is often used to make human workers more efficient, not necessarily to replace them. Rising productivity means businesses can do the same work with fewer employees, but it can also enable the businesses to expand production with their existing workers, and even to enter new markets.

Take the bright-orange Kiva robot, a boon to fledgling e-commerce companies. Created and sold by Kiva Systems, a startup that was founded in 2002 and bought by Amazon for $775 million in 2012, the robots are designed to scurry across large warehouses, fetching racks of ordered goods and delivering the products to humans who package the orders. In Kiva’s large demonstration warehouse and assembly facility at its headquarters outside Boston, fleets of robots move about with seemingly endless energy: some newly assembled machines perform tests to prove they’re ready to be shipped to customers around the world, while others wait to demonstrate to a visitor how they can almost instantly respond to an electronic order and bring the desired product to a worker’s station.

A warehouse equipped with Kiva robots can handle up to four times as many orders as a similar unautomated warehouse, where workers might spend as much as 70 percent of their time walking about to retrieve goods. (Coincidentally or not, Amazon bought Kiva soon after a press report revealed that workers at one of the retailer’s giant warehouses often walked more than 10 miles a day.)

Despite the labor-saving potential of the robots, Mick Mountz, Kiva’s founder and CEO, says he doubts the machines have put many people out of work or will do so in the future. For one thing, he says, most of Kiva’s customers are e-commerce retailers, some of them growing so rapidly they can’t hire people fast enough. By making distribution operations cheaper and more efficient, the robotic technology has helped many of these retailers survive and even expand. Before founding Kiva, Mountz worked at Webvan, an online grocery delivery company that was one of the 1990s dot-com era’s most infamous flameouts. He likes to show the numbers demonstrating that Webvan was doomed from the start; a $100 order cost the company $120 to ship. Mountz’s point is clear: something as mundane as the cost of materials handling can consign a new business to an early death. Automation can solve that problem.

Meanwhile, Kiva itself is hiring. Orange balloons—the same color as the robots—hover over multiple cubicles in its sprawling office, signaling that the occupants arrived within the last month. Most of these new employees are software engineers: while the robots are the company’s poster boys, its lesser-known innovations lie in the complex algorithms that guide the robots’ movements and determine where in the warehouse products are stored. These algorithms help make the system adaptable. It can learn, for example, that a certain product is seldom ordered, so it should be stored in a remote area.
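The kind of adaptive slotting described here can be illustrated with a toy sketch (the product names, slot labels, and the simple order-frequency heuristic below are invented for illustration; Kiva's actual algorithms are proprietary and far more sophisticated):

```python
from collections import Counter

def assign_slots(order_history, slots_near_to_far):
    """Toy demand-based slotting: products ordered most often are assigned
    to the storage slots closest to the packing stations, while rarely
    ordered products end up in remote areas of the warehouse.

    order_history     -- list of product IDs, one entry per unit ordered
    slots_near_to_far -- slot names sorted from nearest to farthest
    """
    demand = Counter(order_history)                 # order frequency per product
    ranked = [p for p, _ in demand.most_common()]   # most-ordered products first
    return dict(zip(ranked, slots_near_to_far))     # nearest slots go to the hottest products

# Hypothetical example: "batteries" are ordered often, so they land near the pack
# station, while the rarely ordered "cable" is stored in a remote aisle.
history = ["batteries", "phone case", "batteries", "cable", "batteries", "phone case"]
slots = ["A1 (near pack station)", "B4", "C9 (remote aisle)"]
print(assign_slots(history, slots))
```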

Though advances like these suggest how some aspects of work could be subject to automation, they also illustrate that humans still excel at certain tasks—for example, packaging various items together. Many of the traditional problems in robotics—such as how to teach a machine to recognize an object as, say, a chair—remain largely intractable and are especially difficult to solve when the robots are free to move about a relatively unstructured environment like a factory or office.

Techniques using vast amounts of computational power have gone a long way toward helping robots understand their surroundings, but John Leonard, a professor of engineering at MIT and a member of its Computer Science and Artificial Intelligence Laboratory (CSAIL), says many familiar difficulties remain. “Part of me sees accelerating progress; the other part of me sees the same old problems,” he says. “I see how hard it is to do anything with robots. The big challenge is uncertainty.” In other words, people are still far better at dealing with changes in their environment and reacting to unexpected events.

For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”

One of the friendlier, more flexible robots meant to work with humans is Rethink’s Baxter. The creation of Rodney Brooks, the company’s founder, Baxter needs minimal training to perform simple tasks like picking up objects and moving them to a box. It’s meant for use in relatively small manufacturing facilities where conventional industrial robots would cost too much and pose too much danger to workers. The idea, says Brooks, is to have the robots take care of dull, repetitive jobs that no one wants to do.

It’s hard not to instantly like Baxter, in part because it seems so eager to please. The “eyebrows” on its display rise quizzically when it’s puzzled; its arms submissively and gently retreat when bumped. Asked about the claim that such advanced industrial robots could eliminate jobs, Brooks answers simply that he doesn’t see it that way. Robots, he says, can be to factory workers as electric drills are to construction workers: “It makes them more productive and efficient, but it doesn’t take jobs.”

The machines created at Kiva and Rethink have been cleverly designed and built to work with people, taking over the tasks that the humans often don’t want to do or aren’t especially good at. They are specifically designed to enhance these workers’ productivity. And it’s hard to see how even these increasingly sophisticated robots will replace humans in most manufacturing and industrial jobs anytime soon. But clerical and some professional jobs could be more vulnerable. That’s because the marriage of artificial intelligence and big data is beginning to give machines a more humanlike ability to reason and to solve many new types of problems.


In the tony northern suburbs of New York City, IBM Research is pushing super-smart computing into the realms of such professions as medicine, finance, and customer service. IBM’s efforts have resulted in Watson, a computer system best known for beating human champions on the game show Jeopardy! in 2011. That version of Watson now sits in a corner of a large data center at the research facility in Yorktown Heights, marked with a glowing plaque commemorating its glory days. Meanwhile, researchers there are already testing new generations of Watson in medicine, where the technology could help physicians diagnose diseases like cancer, evaluate patients, and prescribe treatments.

IBM likes to call it cognitive computing. Essentially, Watson uses artificial-intelligence techniques, advanced natural-language processing and analytics, and massive amounts of data drawn from sources specific to a given application (in the case of health care, that means medical journals, textbooks, and information collected from the physicians or hospitals using the system). Thanks to these innovative techniques and huge amounts of computing power, it can quickly come up with "advice"—for example, the most recent and relevant information to guide a doctor's diagnosis and treatment decisions.
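At a very high level, the data-driven part of such a system can be thought of as ranking passages from a domain corpus against a question and surfacing the best matches as "advice." The sketch below is only a toy bag-of-words illustration of that idea, with an invented corpus; it is not IBM's pipeline, which layers far more elaborate language processing and scoring on top:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase a string and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def rank_passages(question, passages, top_k=2):
    """Score each passage by how often the question's terms appear in it
    (a crude bag-of-words overlap) and return the best matches."""
    terms = set(tokenize(question))
    scored = []
    for passage in passages:
        counts = Counter(tokenize(passage))
        scored.append((sum(counts[t] for t in terms), passage))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

# Invented mini-corpus standing in for journals, textbooks, and hospital records.
corpus = [
    "Guideline: first-line treatment for condition X is drug A.",
    "Trial report: drug B showed modest benefit for condition Y.",
    "Review: dosing and interactions of drug A in condition X.",
]
print(rank_passages("What is the recommended treatment for condition X?", corpus))
```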

Despite the system’s remarkable ability to make sense of all that data, it’s still early days for Dr. Watson. While it has rudimentary abilities to “learn” from specific patterns and evaluate different possibilities, it is far from having the type of judgment and intuition a physician often needs. But IBM has also announced it will begin selling Watson’s services to customer-support call centers, which rarely require human judgment that’s quite so sophisticated. IBM says companies will rent an updated version of Watson for use as a “customer service agent” that responds to questions from consumers; it has already signed on several banks. Automation is nothing new in call centers, of course, but Watson’s improved capacity for natural-language processing and its ability to tap into a large amount of data suggest that this system could speak plainly with callers, offering them specific advice on even technical and complex questions. It’s easy to see it replacing many human holdouts in its new field.

Digital Losers

The contention that automation and digital technologies are partly responsible for today's lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades. Digital technologies tend to favor "superstars," they point out. For example, someone who creates a computer program to automate tax preparation might earn millions or billions of dollars while eliminating the need for countless accountants.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected. “The middle seems to be going away,” he adds. “The top and bottom are clearly getting farther apart.” While technology might be only one factor, says McAfee, it has been an “underappreciated” one, and it is likely to become increasingly significant.

Not everyone agrees with Brynjolfsson and McAfee’s conclusions—particularly the contention that the impact of recent technological change could be different from anything seen before. But it’s hard to ignore their warning that technology is widening the income gap between the tech-savvy and everyone else. And even if the economy is only going through a transition similar to those it’s endured before, it is an extremely painful one for many workers, and that will have to be addressed somehow. Harvard’s Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes.

Brynjolfsson himself says he’s not ready to conclude that economic progress and employment have diverged for good. “I don’t know whether we can recover, but I hope we can,” he says. But that, he suggests, will depend on recognizing the problem and taking steps such as investing more in the training and education of workers.

“We were lucky and steadily rising productivity raised all boats for much of the 20th century,” he says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

David Rotman, Editor

As the editor of MIT Technology Review, I spend much of my time thinking about the types of stories and journalism that will be most valuable to our readers. What do curious, well-informed readers need to know about emerging technologies? As a writer, I am particularly interested these days in the intersection of chemistry, materials science, energy, manufacturing, and economics.

Credit

Noma Bar (Illustration); Data from Bureau of Labor Statistics (Productivity, Output, GDP Per Capita); International Federation of Robotics; CIA World Factbook (GDP by Sector); Bureau of Labor Statistics (Job Growth, Manufacturing Employment); D. Autor and D. Dorn, U.S. Census, American Community Survey, and Department of Labor (Change in Employment and Wages by Skill, Routine Jobs)

Abstract

The wish to extend the human lifespan has a long tradition in many cultures. Optimistic views of the possibility of achieving this goal through the latest developments in medicine feature increasingly in serious scientific and philosophical discussion. The authors of this paper argue that research with the explicit aim of extending the human lifespan is both undesirable and morally unacceptable. They present three serious objections, relating to justice, the community and the meaning of life.

Keywords: life extension, ageing, meaning of life, community, global justice

The wish to extend the human lifespan has a long tradition in many cultures.1 Optimistic views of the possibility of achieving this goal through the latest developments in medicine feature increasingly in serious scientific and philosophical discussion.1,2,3,4,5 Focusing on interventions in biological ageing, one can distinguish between research that is first and foremost aimed at prolonging life by slowing or even arresting ageing processes and research that is directed at combating the diseases that seem to be intrinsically connected with biological ageing.6 We are not opposed to the latter interventions but focus on the former: research that takes increasing human life expectancy beyond the average as its primary goal, pursued merely because there exists, as Glannon puts it, "the deeper conviction that there is intrinsic value in living much longer than we presently do, given that being alive is intrinsically valuable".3

Although we agree that being alive is intrinsically valuable, we think that there is a fundamental difference between the desirability of being alive within the limits of the average life expectancy and the desirability of being alive beyond those limits. In the first case, we deal with the possession and continuation of something we have a right to maintain. In the second case, we are dealing with a kind of enhancement7 to which the concept of a “right to” is ill‐suited, and that raises a series of philosophical and ethical questions. Reflecting on the desirability of research that is explicitly aimed at life extension, we shall present three serious objections, relating to justice, to the community and to the meaning of life. They differ as regards their nature and cogency. We begin with the most compelling argument—justice.

The three arguments

Justice

The most obvious moral problem is the already existing “unequal death”. As Mauron argues, this inequality, which obtains both between the First World and the Third World and between rich and poor within Western welfare societies, is the main ethical obstacle. How can we justify trying to extend the lives of those who have more already?8

The figures speak for themselves: in a number of African countries south of the Sahara, life expectancy is less than 40 years. The average lifespan in rich and developed countries is 70–80 years. The causes of this inequality exceed the strictly medical realm. It is mainly the combination of AIDS with poverty that is responsible for this mortality.9,10 No fewer than 60% of all people on earth with HIV live in sub-Saharan Africa11—25–26 million people. Twelve million children have lost at least one parent, and in Zimbabwe 20.1% of all adults are infected.11

One possible objection to our argument could be that the existence of this global inequality simply does not present a problem for bioethics. These disparities may be acknowledged as scandalously unfair but are the responsibility of politicians, governments and non‐governmental organisations, not of bioethicists. This way of fending off bioethical responsibility, however, is based on a concept of bioethics that closes its eyes to the morally relevant complex interrelation between the health of populations and international justice. It reduces bioethics to the type of applied ethics that became dominant starting in the 1970s. This period gave birth to a highly sophisticated, politically harmless and typically Western bioethics, which mainly dealt with problems of developed and wealthy countries. In recent years, ethicists such as Solomon Benatar,12 James Dwyer13 and Paul Farmer14 have rightly tried to broaden the bioethical agenda. In a globalizing world, problems of ill health in the undeveloped nations are related to how the developed and wealthy nations use their political, financial and scientific powers. Contemporary bioethics, therefore, cannot limit itself to how and under what conditions new scientific developments may be applied but must also confront the question whether these developments contribute to a more just world.

A second possible objection to our argument refers to the principle of distributive justice and is formulated along utilitarian lines by Harris, among others. The fact that we have no means to treat all patients is no argument for deeming it unjust to treat some of them: "If immortality or increased life expectancy is a good, it is doubtful ethics to deny palpable goods to some people because we cannot provide them for all" (p529).2 Davis defends the same conclusion, using slightly different reasoning. To deny the Haves a treatment that they can afford because the Have-nots cannot afford it "is justified only if doing so makes the Have-nots more than marginally better off" (PW7).15 On this reasoning, the burden that the availability of life-extending treatments for the Haves places on the Have-nots weighs far less than the additional life years the Haves would lose if life extension were prevented from becoming available.

Both utilitarian arguments are problematic in two respects. In the first place, they make no distinction between the right of (a minority of) Haves to maintain what they already have, such as certain medical treatments for age-related diseases, and the right to become Have-mores by research and development to enhance the total lifespan. This fundamental difference between the real and the potential has moral repercussions in the light of justice. Treatments that exist in reality but are not available to all rightly raise questions of distributive justice. Potential treatments, however, require prior questions: For what goals are they developed? Are they worthwhile at all, and for whom? Who will profit? Who will be harmed? In the second place, by calculating only benefits and burdens, or burdens of different weights, they neglect the moral quality of certain states of affairs that can be considered wrong and unjust in se and that should be prevented from becoming even more wrong or unjust. They bypass important moral principles of equity and integrity. By focusing on how to justify the distribution of means that are not available to all, we sideline the whole issue of inequality in chances. The original problem of why some can be treated and others cannot is no longer considered. This moral blindness reminds us of the story of the French queen Marie Antoinette, who in 1789 was confronted with a furious crowd. When she asked what was going on, she was told that the people were starving because there was no bread. She replied, amazed, "Well, why don't they eat cake then?" With regard to extending the lifespan, we are not dealing with treatments (yet), but with the question of the desirability of research and development, and, consequently, of financial investments that will not diminish these global inequalities in life expectancy, or, even worse, may increase them.

Our efforts to prolong life, therefore, ought not to be separated from the more fundamental questions relating to integrity: given the problem of unequal death, can we morally afford to invest in research to extend life? The contemporary agenda of bioethics happens to be largely defined by dilemmas and problems raised by Western medicine and biomedical research. Recently, Lucke and Hall pleaded for more social research on public opinion regarding life extension.16 As a variation on their proposal, we suggest that it is relevant to know the opinions on life‐extension technology of all those people whose risk of dying before the age of 40 could be diminished by rather simple, low‐technology means.

Relational dimension

Life is always life with others, even when it is extended. What seems crucial, however, is how this relatedness to others is interpreted. A liberal anthropology perceives human beings as primarily individuals, who relate to each other by contract and negotiation, motivated by self-interest. The other person has an instrumental value, and can appear as a friend, a competitor or even an enemy. Likewise, the sum of all others, incorporated in the community or society, has merely instrumental value: the community or society is judged by the extent to which it enables its members to realise their individual life plans. In a liberal view, the good life is the good life for me, defined and measured by myself. Autonomy and authenticity are central values. Arguments in favour of life extension are often based on the presuppositions of liberalism.

In communitarian anthropology, human beings are viewed as social beings: relations with others belong to the essentials of what it is to live a human life. As Aristotle said (1097b12), a man is by his nature a political being, in the sense of belonging to a polis, or a community.17 In contrast to the liberal anthropology, the social context is not just an instrumental means of realizing individual life plans but the precondition for living a human life. Human beings cannot live without meaningful relations with others. Goods that are essential for a good life, such as friendship, are by their nature bound to the social dimensions of life.

With respect to biological ageing, the two anthropological views can be combined. In the still‐hypothetical situation that extending biological age becomes a medical–technical option, it is primarily a matter of autonomy whether a subject wants to choose it. This freedom of choice fits with the liberal view. The communitarian view, however, stresses the importance of the social network as a condition sine qua non for a truly human life. This is not a mere psychological condition, in the sense that I feel better with others, but an ethical one: in order to realize a morally good life, I have to realize myself as a community being. Being with others as such is considered intrinsically valuable, not the fact that the other is “useful” for my purposes. This excludes the option that an extension of biological age is intrinsically valuable. It is valuable only if it also extends our life as communal beings. Living longer is valuable only if it results in living longer in meaningful relations. Quality of time outweighs quantity of time. The real ethical challenge for ageing societies, therefore, should be how to improve the conditions for life as a life in community, and not how to stop ageing as such.

The meaning of life

Our final argument is that life extension as an explicit aim is contrary to the wisdom of ages as contained in various religious and non-religious spiritual traditions. Although all traditions agree that life is valuable and should not be taken (without good reason, or at all), there is always a notion that human beings miss the essence of life by focusing on the preservation of their self or "ego".

Many spiritual and religious traditions make this point through the notion that a truly human life requires the decentring of the self. In the Christian tradition, as expressed by Thomas Aquinas, for example, the notion of eternal life does not refer primarily to a prolongation of earthly life based on the conception of an immortal soul; rather, it refers to the fullness of a human life that can be reached to the extent that one's goal in life is no longer the preservation of the self, but communion with and service to God and one's neighbour.18 The same thought is expressed in other monotheistic religions, such as Judaism and Islam. Turning to the Eastern world, we see that Hinduism, Buddhism and explicitly non-religious spiritual approaches such as that of the Indian thinker Jiddu Krishnamurti all point to the importance of letting go of the ego.19

Traditions such as these converge in the observation that the more one's self is decentred, the more one loses interest in self-preservation or extension of the biological lifespan. Modesty, and the ability to find one's own flourishing in seeking the flourishing of other people, seem to be signs both of happiness and of a meaningful life.

We think that the world's spiritual traditions are worth listening to, because they are a rich and often ancient source of experience of living a meaningful life in various cultural contexts. When the wisdom of these different contexts converges, it seems likely that something of importance may appear. At the very least, they make us aware that quality of life does not lie simply in the length of a lifetime.

Could the wisdom of the spiritual traditions be inspired by the fact that human beings have to cope with their mortality, and seek an escape in transcendence? Although it may be true that this motivation is present among the followers of diverse spiritual traditions, we think that the traditions themselves are too sophisticated and well thought through to be accused of escapism. Moreover, there is a secular parallel to the experience of the decentring of the self as related to the experience of life's meaning.

As we reflect on the relation between time and experience, for instance, there is an interesting and important paradox to be observed: the more life is experienced as meaningful, the less we are aware of time. The activities that give us the most satisfaction and happiness are those in which we are totally absorbed. Performing music, doing sports, reading good books, making love, writing texts: there are many examples of activities that demand all our attention. In those activities that constitute human happiness there seem to be no time and space, no subject and object. From this one may infer that what we basically seek as human beings is not more time to live, but meaningful experiences. These are found by decentring activities, through which the quality of life is expanded and the desire for self‐preservation and life extension vanishes.

Cogency

We realize that these three arguments differ in cogency. The argument of justice is the strongest, because it has a common‐sense argumentative force that is recognised in most ethical theories. The second argument, regarding the social nature of human beings, derives its cogency from the willingness to critically consider and complete the presuppositions of one's moral theory. The third argument, introducing the meaning of life, is the most controversial: it is strongest for those who adhere to one of those traditions but weakest for those who do not.

If the three arguments are read in reverse order, we think that they can reinforce one another, in the sense that those who search for a meaningful life in the decentring of the self will acknowledge the importance of the community and of global justice. Because we address this article to a wider audience, however, we prefer to begin with the argument of justice.

Conclusions

Is it possible, after what has been said so far, to argue that no individual should have the option of life extension if science progresses enough to offer it? We don't think so. Life is an intrinsic good, and individuals who are ready to accept all the ethical objections presented so far are no different from those who choose to live in luxury without feeling the moral obligation of justice. In this paper, however, we focus on the ethical problems of investing in research aimed at further life extension. Since such research has an institutional aspect related to public funding, we think that this aspect requires thorough reflection and dialogue by biogerontologists and their scientific organizations, by ethicists and philosophers, and by society at large. Juengst et al6,7 repeatedly formulate a similar plea. Among other things, it must be discussed to what extent life extension contributes to the public good. The concept of "public good", however, is slightly ambiguous. It comes close to "public interest", which Jennings et al20 frame as the aggregate of the private interests of individuals. As opposed to this, the concept of the common good entails a society where individuals inextricably bind up their own good with the good of the whole. It forces reflection on the question of whether living longer is good for me as a human being, and whether a society whose members have a much longer life than is the case at present would be a better society.

With regard to the benefits for me as a human being, we presented two objections, centred on the meaningful life and on life as a communal being. A reply to both objections could be that issues of meaning and of community are highly personal matters: in both domains, people have to find their own position and possess the right of free choice. But it is also true that personal answers and choices can be enriched by being embedded in traditions of wisdom with regard to how to live a human life. It is this embedding that we intend to add to the discussion on life-extending research. With regard to a better society, in a globalizing world such as ours, there is a moral challenge to expand our view of the common good to encompass the good of all, worldwide. This expansion inevitably raises the urgent question of whether we can morally afford, as a matter of moral integrity, to invest time and money in trying to extend our lives while sidelining the whole issue of unequal death.

Footnotes

Competing interests: None.

References

1. Gordijn B. Medical utopias: ethical reflections about emerging medical technologies. Leuven: Peeters, 2006.

2. Harris J. Immortal ethics. Ann N Y Acad Sci 2004;1019:527–534. [PubMed]

3. Glannon W. Extending the human life span. J Med Philos 2002;27:339–354. [PubMed]

4. Harris J, Holm S. Extending human life span and the precautionary paradox. J Med Philos 2002;27:355–368. [PubMed]

5. Davis J K. Collective suttee: is it unjust to develop life extension if it will not be possible to provide it to everyone? Ann N Y Acad Sci 2004;1019:535–541. [PubMed]

6. Juengst E T, Binstock R H, Mehlman M J, et al. Aging: antiaging research and the need for public dialogue. Science 2003;299:1323. [PubMed]

7. Juengst E T, Binstock R H, Mehlman M, et al. Biogerontology, "anti-aging medicine," and the challenges of human enhancement. Hastings Cent Rep 2003;33:21–30. [PubMed]

8. Mauron A. The choosy reaper. EMBO Rep 2005;6:S67–S71. [PMC free article] [PubMed]

9. Dorling D, Shaw M, Davey Smith G. Global inequality of life expectancy due to AIDS. BMJ 2006;332:662–664. [PMC free article] [PubMed]

10. Dwyer J. Global health and justice. Bioethics 2005;19:460–475. [PubMed]

11. http://www.unaids.org/en/Regions_Countries (accessed 22 Aug 2007); navigate to the region or country mentioned.

12. Benatar S. Bioethics: power and injustice: IAB presidential address. Bioethics 2003;17:387–398. [PubMed]

13. Dwyer J. Teaching global bioethics. Bioethics 2003;17:432–446. [PubMed]

14. Farmer P, Gastineau Campos N. Rethinking medical ethics: a view from below. Developing World Bioeth 2004;4:17–41. [PubMed]

15. Davis J K. The prolongevists speak up: the life-extension ethics session at the 10th Annual Congress of the International Association of Biomedical Gerontology. Am J Bioeth 2004;4:W6–W8. [PubMed]

16. Lucke J, Hall W. Who wants to live forever? EMBO Rep 2005;6:98–102. [PMC free article] [PubMed]

17. Aristoteles. Ethica. Groningen: Historische Uitgeverij, 1999.

18. Leget C. Living with God: Thomas Aquinas on the relation between life on earth and 'life' after death. Leuven: Peeters, 1997.

19. Krishnamurti J. The first and last freedom. New York: Harper & Brothers, 1954.

20. Jennings B, Callahan D, Wolf S M. The professions: public interest and common good. Hastings Cent Rep 1987;17:3–10.