Sunday, August 31, 2014

Remembering some of the Rolls-Royce history

 

Rolls-Royce

Dreams, sweet dreams.

 

Don't be afraid

A Steve Jobs quotation

 

Remembering that you will die

Improving the translation:

Remembering that one day you will die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked; there is nothing to hide. So there is no reason not to follow your heart.

A Life of Its Own

 

If the science truly succeeds, it will make it possible to supplant the world created by Darwinian evolution with one created by us.


The first time Jay Keasling remembers hearing the word “artemisinin,” about a decade ago, he had no idea what it meant. “Not a clue,” Keasling, a professor of biochemical engineering at the University of California at Berkeley, recalled. Although artemisinin has become the world’s most important malaria medicine, Keasling wasn’t an expert on infectious diseases. But he happened to be in the process of creating a new discipline, synthetic biology, which—by combining elements of engineering, chemistry, computer science, and molecular biology—seeks to assemble the biological tools necessary to redesign the living world.

Scientists have been manipulating genes for decades; inserting, deleting, and changing them in various microbes has become a routine function in thousands of labs. Keasling and a rapidly growing number of colleagues around the world have something more radical in mind. By using gene-sequence information and synthetic DNA, they are attempting to reconfigure the metabolic pathways of cells to perform entirely new functions, such as manufacturing chemicals and drugs. Eventually, they intend to construct genes—and new forms of life—from scratch. Keasling and others are putting together a kind of foundry of biological components—BioBricks, as Tom Knight, a senior research scientist at the Massachusetts Institute of Technology, who helped invent the field, has named them. Each BioBrick part, made of standardized pieces of DNA, can be used interchangeably to create and modify living cells.

“When your hard drive dies, you can go to the nearest computer store, buy a new one, and swap it out,” Keasling said. “That’s because it’s a standard part in a machine. The entire electronics industry is based on a plug-and-play mentality. Get a transistor, plug it in, and off you go. What works in one cell phone or laptop should work in another. That is true for almost everything we build: when you go to Home Depot, you don’t think about the thread size on the bolts you buy, because they’re all made to the same standard. Why shouldn’t we use biological parts in the same way?” Keasling and others in the field, who have formed bicoastal clusters in the Bay Area and in Cambridge, Massachusetts, see cells as hardware, and genetic code as the software required to make them run. Synthetic biologists are convinced that, with enough knowledge, they will be able to write programs to control those genetic components, programs that would let them not only alter nature but guide human evolution as well.

No scientific achievement has promised so much, and none has come with greater risks or clearer possibilities for deliberate abuse. The benefits of new technologies—from genetically engineered food to the wonders of pharmaceuticals—often have been oversold. If the tools of synthetic biology succeed, though, they could turn specialized molecules into tiny, self-contained factories, creating cheap drugs, clean fuels, and new organisms to siphon carbon dioxide from the atmosphere.

In 2000, Keasling was looking for a chemical compound that could demonstrate the utility of these biological tools. He settled on a diverse class of organic molecules known as isoprenoids, which are responsible for the scents, flavors, and even colors in many plants: eucalyptus, ginger, and cinnamon, for example, as well as the yellow in sunflowers and the red in tomatoes. “One day, a graduate student stopped by and said, ‘Look at this paper that just came out on amorphadiene synthase,’ ” Keasling told me as we sat in his office in Emeryville, across the Bay Bridge from San Francisco. He had recently been named C.E.O. of the Department of Energy’s new Joint BioEnergy Institute, a partnership of three national laboratories and three research universities, led by the Lawrence Berkeley National Laboratory. The consortium’s principal goal is to design and manufacture artificial fuels that emit little or no greenhouse gases—one of President Obama’s most frequently cited priorities.

Keasling wasn’t sure what to tell his student. “ ‘Amorphadiene,’ I said. ‘What’s that?’ He told me that it was a precursor to artemisinin, an effective anti-malarial. I had never worked on malaria. So I got to studying and quickly realized that this precursor was in the general class we were planning to investigate. And I thought, Amorphadiene is as good a target as any. Let’s work on that.”

Malaria infects as many as five hundred million of the world’s poorest people every year and kills up to a million, most of whom are children under the age of five. For centuries, the standard treatment was quinine, and then the chemically related compound chloroquine. At ten cents per treatment, chloroquine was cheap and simple to make, and it saved millions of lives. By the early nineties, however, the most virulent malaria parasite—Plasmodium falciparum—had grown largely resistant to the drug. Worse, the second line of treatment, sulfadoxine-pyrimethamine, or SP, also failed widely. Artemisinin, when taken in combination with other drugs, has become the only consistently successful treatment that remains. (Reliance on any single drug increases the chances that the malaria parasite will develop resistance.) Known in the West as Artemisia annua, or sweet wormwood, the herb that contains artemisinic acid grows wild in many places, but supplies vary widely and so does the price.

Depending so heavily on artemisinin, while unavoidable, has serious drawbacks: combination therapy costs between ten and twenty times as much as chloroquine, and, despite increasing assistance from international charities, that is too much money for most Africans or their governments. Artemisinin is not easy to cultivate. Once harvested, the leaves and stems have to be processed rapidly or they will be destroyed by exposure to ultraviolet light. Yields are low, and production is expensive.

Although several thousand Asian and African farmers have begun to plant the herb, the World Health Organization expects that for the next several years the annual demand—as many as five hundred million courses of treatment per year—will far exceed the supply. Should that supply disappear, the impact would be incalculable. “Losing artemisinin would set us back years, if not decades,” Kent Campbell, a former chief of the malaria branch at the Centers for Disease Control and Prevention, and director of the Malaria Control Program at the nonprofit health organization PATH, said. “One can envision any number of theoretical public-health disasters in the world. But this is not theoretical. This is real. Without artemisinin, millions of people could die.”

Keasling realized that the tools of synthetic biology, if properly deployed, could dispense with nature entirely, providing an abundant new source of artemisinin. If each cell became its own factory, churning out the chemical required to make the drug, there would be no need for an elaborate and costly manufacturing process, either. Why not try to produce it from genetic parts by constructing a cell to manufacture amorphadiene? Keasling and his team would have to dismantle several different organisms, then use parts from nearly a dozen of their genes to cobble together a custom-built package of DNA. They would then need to construct a new metabolic pathway, the chemical circuitry that a cell needs to do its job—one that did not exist in the natural world. “We have got to the point in human history where we simply do not have to accept what nature has given us,” he told me.

By 2003, the team reported its first success, publishing a paper in Nature Biotechnology that described how the scientists had created that new pathway, by inserting genes from three organisms into E. coli, one of the world’s most common bacteria. That research helped Keasling secure a $42.6-million grant from the Bill and Melinda Gates Foundation. Keasling had no interest in simply proving that the science worked; he wanted to do it on a scale that the world could use to fight malaria. “Making a few micrograms of artemisinin would have been a neat scientific trick,” he said. “But it doesn’t do anybody in Africa any good if all we can do is a cool experiment in a Berkeley lab. We needed to make it on an industrial scale.” To translate the science into a product, Keasling helped start a new company, Amyris Biotechnologies, to refine the raw organism, then figure out how to produce it more efficiently. Within a decade, Amyris had increased the amount of artemisinic acid that each cell could produce by a factor of one million, bringing down the cost of the drug from as much as ten dollars for a course of treatment to less than a dollar.

Amyris then joined with the Institute for OneWorld Health, in San Francisco, a nonprofit drugmaker, and, in 2008, they signed an agreement with the Paris-based pharmaceutical company Sanofi-Aventis to make the drug, which they hope to have on the market by 2012. The scientific response has been reverential—their artemisinin has been seen as the first bona-fide product of synthetic biology, proof of a principle that we need not rely on the whims of nature to address the world’s most pressing crises. But some people wonder what synthetic artemisinin will mean for the thousands of farmers who have begun to plant the wormwood crop. “What happens to struggling farmers when laboratory vats in California replace farms in Asia and East Africa?” Jim Thomas, a researcher with ETC Group, a technology watchdog based in Canada, asked. Thomas has argued that there has been little discussion of the ethical and cultural implications of altering nature so fundamentally. “Scientists are making strands of DNA that have never existed,” Thomas said. “So there is nothing to compare them to. There are no agreed mechanisms for safety, no policies.”

Keasling, too, believes that the nation needs to consider the potential impact of this technology, but he is baffled by opposition to what should soon become the world’s most reliable source of cheap artemisinin. “Just for a moment, imagine that we replaced artemisinin with a cancer drug,” he said. “And let’s have the entire Western world rely on some farmers in China and Africa who may or may not plant their crop. And let’s have a lot of American children die because of that. Look at the world and tell me we shouldn’t be doing this. It’s not people in Africa who see malaria who say, Whoa, let’s put the brakes on.”

Artemisinin is the first step in what Keasling hopes will become a much larger program. “We ought to be able to make any compound produced by a plant inside a microbe,” he said. “We ought to have all these metabolic pathways. You need this drug: O.K., we pull this piece, this part, and this one off the shelf. You put them into a microbe, and two weeks later out comes your product.”

That’s what Amyris has done in its efforts to develop new fuels. “Artemisinin is a hydrocarbon, and we built a microbial platform to produce it,” Keasling said. “We can remove a few of the genes to take out artemisinin and put in a different gene, to make biofuels.” Amyris, led by John Melo, who spent years as a senior executive at British Petroleum, has already engineered three microbes that can convert sugar to fuel. “We still have lots to learn and lots of problems to solve,” Keasling said. “I am well aware that makes some people anxious, and I understand why. Anything so powerful and new is troubling. But I don’t think the answer to the future is to race into the past.”

For the first four billion years, life on Earth was shaped entirely by nature. Propelled by the forces of selection and chance, the most efficient genes survived, and evolution insured that they would thrive. The long, beautiful Darwinian process of creeping forward by trial and error, struggle and survival, persisted for millennia. Then, about ten thousand years ago, our ancestors began to gather in villages, grow crops, and domesticate animals. That led to stone axes and looms, which in turn led to better crops and a varied food supply that could feed a larger civilization. Breeding of goats and pigs gave way to the fabrication of metal and machines. Throughout it all, new species, built on the power of their collected traits, emerged, while others were cast aside.

By the beginning of the twenty-first century, our ability to modify the smallest components of life through molecular biology had endowed humans with a power that even those who exercise it most proficiently cannot claim to fully comprehend. Human mastery over nature has been predicted for centuries—Bacon insisted on it, Blake feared it profoundly. Little more than a hundred years have passed, however, since Gregor Mendel demonstrated that the defining characteristics of a pea plant—its shape, its size, and the color of the seeds, for example—are transmitted from one generation to the next in ways that can be predicted, repeated, and codified.

Since then, the central project of biology has been to break that code and learn to read it—to understand how DNA creates and perpetuates life. The physiologist Jacques Loeb considered artificial synthesis of life the goal of biology. In 1912, Loeb, one of the founders of modern biochemistry, wrote that there was no evidence that “the artificial production of living matter is beyond the possibilities of science,” and declared, “We must either succeed in producing living matter artificially, or we must find the reasons why this is impossible.”

In 1946, the Nobel Prize-winning geneticist Hermann J. Muller attempted to do that. By demonstrating that exposure to X rays can cause mutations in the genes and chromosomes of living cells, he was the first to prove that heredity could be affected by something other than natural selection. He wasn’t entirely sure that people would use that information responsibly, though. “If we did attain to any such knowledge or powers there is no doubt in my mind that we would eventually use them,” Muller said. “Man is a megalomaniac among animals—if he sees mountains he will try to imitate them by pyramids, and if he sees some grand process like evolution, and thinks it would be at all possible for him to be in on that game, he would irreverently have to have his whack at that too.”

The theory of evolution explained that every species on earth is related in some way to every other species; more important, we each carry a record of that history in our body. In 1953, James Watson and Francis Crick began to make it possible to understand why, by explaining how DNA arranges itself. The language of just four chemical letters—adenine, cytosine, guanine, and thymine—comes in the form of enormous chains of nucleotides. When they are joined, the arrangement of their sequences determines how each human differs from all others and from all other living beings.

By the nineteen-seventies, recombinant-DNA technology permitted scientists to cut long, unwieldy molecules of nucleotides into digestible sentences of genetic letters and paste them into other cells. Researchers could suddenly combine the genes of two creatures that would never have been able to mate in nature. As promising as these techniques were, they also made it possible for scientists to transfer viruses—and microbes that cause cancer—from one organism to another. That could create diseases anticipated by no one and for which there would be no natural protection, treatment, or cure. In 1975, scientists from around the world gathered at the Asilomar Conference Center, in Northern California, to discuss the challenges presented by this new technology. They focussed primarily on laboratory and environmental safety, and concluded that the field required little regulation. (There was no real discussion of deliberate abuse—at the time, there didn’t seem to be any need.)

Looking back nearly thirty years later, one of the conference’s organizers, the Nobel laureate Paul Berg, wrote, “This unique conference marked the beginning of an exceptional era for science and for the public discussion of science policy. Its success permitted the then contentious technology of recombinant DNA to emerge and flourish. Now the use of the recombinant DNA technology dominates research in biology. It has altered both the way questions are formulated and the way solutions are sought.”

Decoding sequences of DNA was tedious. It could take a scientist a year to complete a stretch that was ten or twelve base pairs long. (Our DNA consists of three billion such pairs.) By the late nineteen-eighties, automated sequencing had simplified the procedure, and today machines can process that information in seconds. Another new tool—polymerase chain reaction—completed the merger of the digital and biological worlds. Using PCR, a scientist can take a single DNA molecule and copy it many times, making it easier to read and to manipulate. That permits scientists to treat living cells like complex packages of digital information that happen to be arranged in the most elegant possible way.
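To make the arithmetic concrete: each PCR cycle roughly doubles the number of copies, so thirty cycles turn a single molecule into about a billion. A minimal sketch in Python (the cycle count and per-cycle efficiency are illustrative assumptions, not fixed properties of the technique):

```python
def pcr_copies(cycles: int, efficiency: float = 1.0) -> float:
    """Expected copies of DNA from one template molecule.

    Each cycle multiplies the copy count by (1 + efficiency);
    efficiency = 1.0 models perfect doubling, which real
    reactions only approximate.
    """
    return (1.0 + efficiency) ** cycles

print(f"{pcr_copies(30):.2e}")       # ideal doubling: ~1.07e+09 copies
print(f"{pcr_copies(30, 0.9):.2e}")  # 90% efficiency: ~2.3e+08 copies
```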

Using such techniques, researchers have now resurrected the DNA of the Tasmanian tiger, the world’s largest carnivorous marsupial, which has been extinct for more than seventy years. In 2008, scientists from the University of Melbourne and the University of Texas M. D. Anderson Cancer Center, in Houston, extracted DNA from tissue that had been preserved in the Museum Victoria, in Melbourne. They took a fragment of DNA that controlled the production of a collagen gene from the tiger and inserted it into a mouse embryo. The DNA switched on just the right gene, and the embryo began to churn out collagen. That marked the first time that any material from an extinct creature other than a virus has functioned inside a living organism.

It will not be the last. A team from Pennsylvania State University, working with hair samples from two woolly mammoths—one of them sixty thousand years old and the other eighteen thousand—has tentatively figured out how to modify that DNA and place it inside an elephant’s egg. The mammoth could then be brought to term in an elephant mother. “There is little doubt that it would be fun to see a living, breathing woolly mammoth—a shaggy, elephantine creature with long curved tusks who reminds us more of a very large, cuddly stuffed animal than of a T. Rex.,” the Times editorialized soon after the discovery was announced. “We’re just not sure that it would be all that much fun for the mammoth.”

The ultimate goal, however, is to create a synthetic organism made solely from chemical parts and blueprints of DNA. In the mid-nineties, Craig Venter, working at the Institute for Genomic Research, and his colleagues Clyde Hutchison and Hamilton Smith began to wonder whether they could pare life to its most basic components and then use those genes to create such an organism. They began modifying the genome of a tiny bacterium called Mycoplasma genitalium, which contained four hundred and eighty-two genes (humans have about twenty-three thousand) and five hundred and eighty thousand letters of genetic code, arranged on one circular chromosome—the smallest genome of any cell that has been grown in laboratory cultures. Venter and his colleagues then removed genes one by one to find a minimal set that could sustain life.

Venter called the experiment the Minimal Genome Project. By the beginning of 2008, his team had pieced together thousands of chemically synthesized fragments of DNA and assembled a new version of the organism. Then, using nothing but chemicals, they produced from scratch the entire genome of Mycoplasma genitalium. “Nothing in our methodology restricts its use to chemically synthesized DNA,” Venter noted in the report of his work, which was published in Science. “It should be possible to assemble any combination of synthetic and natural DNA segments in any desired order.” That may turn out to be one of the most understated asides in the history of science. Next, Venter intends to transplant the artificial chromosome into the walls of another cell and then “boot it up,” thereby making a new form of life that would then be able to replicate its own DNA—the first truly artificial organism. (Activists have already named the creation Synthia.) Venter hopes that Synthia and similar products will serve essentially as vessels that can be modified to carry different packages of genes. One package might produce a specific drug, for example, and another could have genes programmed to digest carbon in the atmosphere.

In 2007, the theoretical physicist Freeman Dyson, after having visited both the Philadelphia Flower Show and the Reptile Show in San Diego, wrote an essay in The New York Review of Books, noting that “every orchid or rose or lizard or snake is the work of a dedicated and skilled breeder. There are thousands of people, amateurs and professionals, who devote their lives to this business.” This, of course, we have been doing in one way or another for millennia. “Now imagine what will happen when the tools of genetic engineering become accessible to these people.”

It is only a matter of time before domesticated biotechnology presents us with what Dyson described as an “explosion of diversity of new living creatures. . . . Designing genomes will be a personal thing, a new art form as creative as painting or sculpture. Few of the new creations will be masterpieces, but a great many will bring joy to their creators and variety to our fauna and flora.”

Biotech games, played by children “down to kindergarten age but played with real eggs and seeds,” could produce entirely new species—as a lark. “These games will be messy and possibly dangerous,” Dyson wrote. “Rules and regulations will be needed to make sure that our kids do not endanger themselves and others. The dangers of biotechnology are real and serious.”

Life on Earth proceeds in an arc—one that began with the big bang, and evolved to the point where a smart teenager is capable of inserting a gene from a cold-water fish into a strawberry, to help protect it from the frost. You don’t have to be a Luddite—or Prince Charles, who, famously, has foreseen a world reduced to gray goo by avaricious and out-of-control technology—to recognize that synthetic biology, if it truly succeeds, will make it possible to supplant the world created by Darwinian evolution with one created by us.

“Many a technology has at some time or another been deemed an affront to God, but perhaps none invites the accusation as directly as synthetic biology,” the editors of Nature—who nonetheless support the technology—wrote in 2007. “For the first time, God has competition.”

“What if we could liberate ourselves from the tyranny of evolution by being able to design our own offspring?” Drew Endy asked, the first time we met in his office at M.I.T., where, until the summer of 2008, he was assistant professor of biological engineering. (That September, he moved to Stanford.) Endy is among the most compelling evangelists of synthetic biology. He is also perhaps its most disturbing, because, although he displays a childlike eagerness to start engineering new creatures, he insists on discussing both the prospects and the dangers of his emerging discipline in nearly any forum he can find. “I am talking about building the stuff that runs most of the living world,” he said. “If this is not a national strategic priority, what possibly could be?”

Endy, who was trained as a civil engineer, spent his youth fabricating worlds out of Lincoln Logs and Legos. Now he would like to build living organisms. Perhaps it was the three well-worn congas sitting in the corner of Endy’s office, or the choppy haircut that looked like something he might have got in a tree house, or the bicycle dangling from his wall—but, when he speaks about putting together new forms of life, it’s hard not to think of that boy and his Legos.

Endy made his first mark on the world of biology by nearly failing the course in high school. “I got a D,” he said. “And I was lucky to get it.” While pursuing an engineering degree at Lehigh University, Endy took a course in molecular genetics. He spent his years in graduate school modelling bacterial viruses, but they are complex, and Endy craved simplicity. That’s when he began to think about putting cellular components together.

Never forgetting the secret of Legos—they work because you can take any single part and attach it to any other—in 2005 Endy and colleagues on both coasts started the BioBricks Foundation, a nonprofit organization formed to register and develop standard parts for assembling DNA. Endy is not the only scientist, or even the only synthetic biologist, to translate a youth spent with blocks into a useful scientific vocabulary. “The notion of pieces fitting together—whether those pieces are integrated circuits, microfluidic components, or molecules—guides much of what I do in the laboratory,” the physicist and synthetic biologist Rob Carlson writes in his new book, “Biology Is Technology: The Promise, Peril, and Business of Engineering Life.” “Some of my best work has come together in my mind’s eye accompanied by what I swear was an audible click.”

The BioBricks registry is a physical repository, but it is also an online catalogue. If you want to construct an organism, or engineer it in new ways, you can go to the site as you would one that sells lumber or industrial pipes. The constituent parts of DNA—promoters, ribosome-binding sites, plasmid backbones, and thousands of other components—are catalogued, explained, and discussed. It is a kind of theoretical Wikipedia of future life forms, with the added benefit of actually providing the parts necessary to build them.
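As a purely illustrative sketch of how such a catalogue can be modelled (the field names below are my invention, not the registry's actual schema), each part pairs a standardized sequence with its type and documentation:

```python
from dataclasses import dataclass, field

@dataclass
class BioBrickPart:
    """Hypothetical model of a registry entry; the real BioBricks
    catalogue defines its own part types and metadata."""
    part_id: str            # registry identifier, e.g. "BBa_..."
    part_type: str          # "promoter", "ribosome-binding site", "plasmid backbone", ...
    sequence: str           # standardized DNA sequence over A/C/G/T
    description: str = ""
    related: list[str] = field(default_factory=list)  # parts it composes with

# BBa_B0034 is a widely used ribosome-binding site in the registry;
# treat the details here as illustrative rather than authoritative.
rbs = BioBrickPart("BBa_B0034", "ribosome-binding site",
                   "aaagaggagaaa", "a commonly used strong RBS")
```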

I asked Endy why he thought so many people seem to be repelled by the idea of constructing new forms of life. “Because it’s scary as hell,” he said. “It’s the coolest platform science has ever produced, but the questions it raises are the hardest to answer.” If you can sequence something properly and you possess the information for describing that organism—whether it’s a virus, a dinosaur, or a human being—you will eventually be able to construct an artificial version of it. That gives us an alternate path for propagating living organisms.

The natural path is direct descent from a parent—from one generation to the next. But that process is filled with errors. (In Darwin’s world, of course, a certain number of those mutations are necessary.) Endy said, “If you could complement evolution with a secondary path, decode a genome, take it off-line to the level of information”—in other words, break it down to its specific sequences of DNA the way one would break down the code in a software program—“we can then design whatever we want, and recompile it,” which could permit scientists to prevent many genetic diseases. “At that point, you can make disposable biological systems that don’t have to produce offspring, and you can make much simpler organisms.”

Endy stopped long enough for me to digest the fact that he was talking about building our own children. “If you look at human beings as we are today, one would have to ask how much of our own design is constrained by the fact that we have to be able to reproduce,” he said. In fact, those constraints are significant. In theory, at least, designing our own offspring could make those constraints disappear. Before speaking about that, however, it would be necessary to ask two essential questions: What sorts of risk does that bring into play, and what sorts of opportunity?

The deeply unpleasant risks associated with synthetic biology are not hard to imagine: who would control this technology, who would pay for it, and how much would it cost? Would we all have access or, as in the 1997 film “Gattaca,” which envisaged a world where the most successful children were eugenically selected, would there be genetic haves and have-nots and a new type of discrimination—genoism—to accompany it? Moreover, how safe can it be to manipulate and create life? How likely are accidents that would unleash organisms onto a world that is not prepared for them? And will it be an easy technology for people bent on destruction to acquire? “We are talking about things that have never been done before,” Endy said. “If the society that powered this technology collapses in some way, we would go extinct pretty quickly. You wouldn’t have a chance to revert back to the farm or to the pre-farm. We would just be gone.”

Those fears have existed since humans began to transplant genes in crops. They are the central reason that opponents of genetically engineered food invoke the precautionary principle, which argues that potential risks must always be given more weight than possible benefits. That is certainly the approach suggested by people like Jim Thomas, of ETC, who describes Endy as “the alpha Synthusiast.” But he also regards Endy as a reflective scientist who doesn’t discount the possible risks of his field. “To his credit, I think he’s the one who’s most engaged with these issues,” Thomas said.

The debate over genetically engineered food has often focussed on theoretical harm rather than on tangible benefits. “If you build a bridge and it falls down, you are not going to be permitted to design bridges ever again,” Endy said. “But that doesn’t mean we should never build a new bridge. There we have accepted the fact that risks are inevitable.” He believes the same should be true of engineering biology.

We also have to think about our society’s basic goals and how this science might help us achieve them. “We have seen an example with artemisinin and malaria,” Endy said. “Maybe we could avoid diseases completely. That might require us to go through a transition in medicine akin to what happened in environmental science and engineering after the end of the Second World War. We had industrial problems, and people said, Hey, the river’s on fire—let’s put it out. And, after the nth time of doing that, people started to say, Maybe we shouldn’t make factories that put shit into the river. So let’s collect all the waste. That turns out to be really expensive, because then we have to dispose of it. Finally, people said, Let’s redesign the factories so that they don’t make that crap.”

Endy pointed out that we are spending trillions of dollars on health care and that preventing disease is obviously more desirable than treating it. “My guess is that our ultimate solution to the crisis of health-care costs will be to redesign ourselves so that we don’t have so many problems to deal with. But note,” he stressed, “you can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence.

“These are powerful choices. Think about what happens when you really can print the genome of your offspring. You could start with your own sequence, of course, and mash it up with your partner, or as many partners as you like. Because computers won’t care. And, if you wanted evolution, you can include random number generators.” That would have the effect of introducing the element of chance into synthetic design.

Although Endy speaks with passion about the biological future, he acknowledges how little scientists know. “It is important to unpack some of the hype and expectation around what you can do with biotechnology as a manufacturing platform,” he said. “We have not scratched the surface. But how far will we be able to go? That question needs to be discussed openly, because you can’t address issues of risk and society unless you have an answer.”

Answers, however, are not yet available. The inventor and materials scientist Saul Griffith has estimated that powering our planet requires between fifteen and eighteen terawatts of power. How much of that could we manufacture with the tools of synthetic biology? Estimates range between five and ninety terawatts. “If it turns out to be the lower figure, we are screwed,” Endy said. “Because why would we take this risk if we cannot create much energy? But, if it’s the top figure, then we are talking about producing five times the energy we need on this planet and doing it in an environmentally benign way. The benefits in relation to the risks of using this new technology would be unquestioned. But I don’t know what the number will be, and I don’t think anybody can know at this point. At a minimum, then, we ought to acknowledge that we are in the process of figuring that out and the answers won’t be easy to provide.

“It’s very hard for me to have a conversation about these issues, because people adopt incredibly defensive postures,” Endy continued. “The scientists on one side and civil-society organizations on the other. And, to be fair to those groups, science has often proceeded by skipping the dialogue. But some environmental groups will say, Let’s not permit any of this work to get out of a laboratory until we are sure it is all safe. And as a practical matter that is not the way science works. We can’t come back decades later with an answer. We need to develop solutions by doing them. The potential is great enough, I believe, to convince people it’s worth the risk.”

I wondered how much of this was science fiction. Endy stood up. “Can I show you something?” he asked, as he walked over to a bookshelf and grabbed four gray bottles. Each one contained about half a cup of sugar, and each had a letter on it: A, T, C, or G, for the four nucleotides in our DNA. “You can buy jars of these chemicals that are derived from sugarcane,” he said. “And they end up being the four bases of DNA in a form that can be readily assembled. You hook the bottles up to a machine, and into the machine comes information from a computer, a sequence of DNA—like T-A-A-T-A-G-C-A-A. You program in whatever you want to build, and that machine will stitch the genetic material together from scratch. This is the recipe: you take information and the raw chemicals and compile genetic material. Just sit down at your laptop and type the letters and out comes your organism.”
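As a toy version of that “compile” step (my illustration only; it says nothing about how real synthesis software works), a program might validate the requested letters and split them into the short fragments, or oligos, that machines actually stitch together:

```python
VALID_BASES = set("ATCG")

def compile_sequence(seq: str, oligo_len: int = 60) -> list[str]:
    """Validate a DNA string and chunk it into synthesizable pieces.

    Illustrative only: real synthesis pipelines handle fragment
    overlaps, error correction, and the chemistry itself.
    """
    seq = seq.upper().replace("-", "").replace(" ", "")
    unexpected = set(seq) - VALID_BASES
    if unexpected:
        raise ValueError(f"not DNA: unexpected symbols {unexpected}")
    return [seq[i:i + oligo_len] for i in range(0, len(seq), oligo_len)]

# The sequence Endy mentions, written as the machine would receive it:
print(compile_sequence("T-A-A-T-A-G-C-A-A", oligo_len=4))  # ['TAAT', 'AGCA', 'A']
```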

We don’t have machines that can turn those sugars into entire genomes yet. Endy shrugged. “But I don’t see any physical reason why we won’t,” he said. “It’s a question of money. If somebody wants to pay for it, then it will get done.” He looked at his watch, apologized, and said, “I’m sorry, we will have to continue this discussion another day, because I have an appointment with some people from the Department of Homeland Security.”

I was a little surprised. “They are asking the same questions as you,” he said. “They want to know how far is this really going to go.”

Scientists skipped a step at the birth of biotechnology, thirty-five years ago, moving immediately to products without first focussing on the tools required to make them. Using standard biological parts, a synthetic biologist or biological engineer can already, to some extent, program living organisms in the same way a computer scientist can program a computer. However, genes work together in ways that are staggeringly complex; proteins produced by one will counteract—or enhance—those made by another. We are far from the point where scientists might yank a few genes off the shelf, mix them together, and produce a variety of products. But the registry is growing rapidly—and so is the knowledge needed to drive the field forward.

Research in Endy’s Stanford lab has been largely animated by his fascination with switches that turn genes on and off. He and his students are attempting to create genetically encoded memory systems, and his current goal is to construct a cell that can count to two hundred and fifty-six—2⁸, a number derived from the mathematics of basic computer code. Solving the practical challenges will not be easy, since cells that count will need to send reliable signals when they divide and remember that they did.

“If the cells in our bodies had a little memory, think what we could do,” Endy said the next time we talked. I wasn’t quite sure what he meant. “You have memory in your phone,” he explained. “Think of all the information it allows you to store. The phone and the technology on which it is based do not function inside cells. But if we could count to two hundred, using a system that was based on proteins and DNA and RNA—well, now, all of a sudden we would have a tool that gives us access to computing and memory that we just don’t have.

“Do you know how we study aging?” Endy continued. “The tools we use today are almost akin to cutting a tree in half and counting the rings. But if the cells had a memory we could count properly. Every time a cell divides, just move the counter by one. Maybe that will let me see them changing with a precision nobody can have today. Then I could give people controllers to start retooling those cells. Or we could say, Wow, this cell has divided two hundred times, it’s obviously lost control of itself and become cancer. Kill it. That lets us think about new therapies for all kinds of diseases.”
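A software analogy makes the goal easy to picture. Here is a minimal sketch of the eight-bit counter Endy describes (256 = 2⁸ states); the hard part, of course, is implementing the bits in proteins, DNA, and RNA rather than in an integer:

```python
class DivisionCounter:
    """Toy model of a cell that counts its own divisions.

    Eight heritable one-bit switches give 2**8 = 256 states;
    a biological version would store each bit genetically.
    """
    BITS = 8

    def __init__(self) -> None:
        self.divisions = 0

    def on_division(self) -> int:
        """Advance the counter by one each time the cell divides."""
        self.divisions = (self.divisions + 1) % (1 << self.BITS)
        return self.divisions

cell = DivisionCounter()
for _ in range(200):
    cell.on_division()
print(cell.divisions)  # 200 divisions recorded, like rings in a tree
```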

Synthetic biology is changing so rapidly that predictions seem pointless. Even that fact presents people like Endy with a new kind of problem. “Wayne Gretzky once said, ‘I skate to where the puck is going to be.’ That’s what you do to become a great hockey player,” Endy told me. “But where do you skate when the puck is accelerating at the speed of a rocket, when the trajectory is impossible to follow? Whom do you hire and what do we ask them to do? Because what preoccupies our finest minds today will be a seventh-grade science project in five years. Or three years.

“We are surfing an exponential now, and, even for people who pay attention, surfing an exponential is a really tricky thing to do. And when the exponential you are surfing has the capacity to impact the world in such a fundamental way, in ways we have never before considered, how do you even talk about that?”

For decades, people have invoked Moore’s law: the number of transistors that could fit onto a silicon chip would double every two years, and so would the power of computers. When the I.B.M. 360 computer was released, in 1964, the top model came with eight megabytes of main memory, and cost more than two million dollars. Today, cell phones with a thousand times the memory of that computer can be bought for about a hundred dollars.

In 2001, Rob Carlson, then a research fellow at the Molecular Sciences Institute, in Berkeley, decided to examine a similar phenomenon: the speed at which the capacity to synthesize DNA was growing. He produced what has come to be known as the Carlson curve, and it shows a rate that mirrors Moore’s law—and has even begun to exceed it. The automated DNA synthesizers used in thousands of labs cost a hundred thousand dollars a decade ago. Now they cost less than ten thousand dollars, and, most days, at least a dozen used synthesizers are for sale on eBay—for less than a thousand dollars.
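The price figures above imply how steep that curve is. A quick back-of-the-envelope calculation (Python; the tenfold drop per decade comes straight from the synthesizer prices quoted):

```python
import math

start_price, end_price, years = 100_000, 10_000, 10  # figures from the text
annual_change = (end_price / start_price) ** (1 / years) - 1
halving_time = math.log(2) / -math.log(1 + annual_change)
print(f"annual change: {annual_change:.1%}")           # about -20.6% per year
print(f"price halves every {halving_time:.1f} years")  # about every 3 years
```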

Between 1977, when Frederick Sanger published the first paper on rapid DNA sequencing, and 1995, when the Institute for Genomic Research reported the first bacterial-genome sequence, the field moved slowly. It took the next six years to complete the first draft of the immeasurably more complex human genome, and six years after that, in 2007, scientists from around the world began mapping the full genomes of more than a thousand people. The Harvard geneticist George Church’s Personal Genome Project now plans to sequence more than a hundred thousand.

In 2003, when Endy was still at M.I.T., he and his colleagues Tom Knight, Randy Rettberg, and Gerald Sussman founded iGEM—the International Genetically Engineered Machine competition—whose purpose is to promote the building of biological systems from standard parts. In 2006, a team of Endy’s undergraduate students used BioBrick parts to genetically reprogram E. coli (which normally smells awful) to smell like wintergreen while it grows and like bananas when it is finished growing. They named their project Eau d’E Coli. By 2008, with more than a thousand students from twenty-one countries participating, the winning team—a group from Slovenia—used biological parts that it had designed to create a vaccine for the stomach bug Helicobacter pylori, which causes ulcers. There are no such working vaccines for humans. So far, the team has tested its creation on mice, with promising results.

This is open-source biology, where intellectual property is shared. What’s available to idealistic students, of course, would also be available to terrorists. Any number of blogs offer advice about everything from how to preserve proteins to the best methods for desalting DNA. Openness like that can be frightening, and there have been calls for tighter control of the technology. Carlson, among many others, believes that strict regulations are unlikely to succeed. Several years ago, with very few tools other than a credit card, he opened his own biotechnology company, Biodesic, in the garage of his Seattle home—a biological version of the do-it-yourself movement that gave birth to so many computer companies, including Apple.

The product that he developed enables the identification of proteins using DNA technology. “It’s not complex,” Carlson told me, “but I wanted to see what I could accomplish using mail order and synthesis.” A great deal, it turned out. Carlson designed the molecule on his laptop, then sent the sequence to a company that synthesizes DNA. Most of the instruments could be bought on eBay (or, occasionally, on LabX, a more specialized site for scientific equipment). All you need is an Internet connection.

“Strict regulation doesn’t accomplish its goals,” Carlson said. “It’s not an exact analogy, but look at Prohibition. What happened when government restricted the production and sale of alcohol? Crime rose dramatically. It became organized and powerful. Legitimate manufacturers could not sell alcohol, but it was easy to make in a garage—or a warehouse.”

In 2002, the U.S. government intensified its efforts to curtail the sale and production of methamphetamine. Previously, the drug had been manufactured in many mom-and-pop labs throughout the country. Today, production has been professionalized and centralized, and the Drug Enforcement Administration says that less is known about methamphetamine production than before. “The black market is getting blacker,” Carlson said. “Crystal-meth use is still rising, and all this despite restrictions.” Strict control would not necessarily insure the same fate for synthetic biology, but it might.

Bill Joy, a founder of Sun Microsystems, has frequently called for restrictions on the use of technology. “It is even possible that self-replication may be more fundamental than we thought, and hence harder—or even impossible—to control,” he wrote in an essay for Wired called “Why the Future Doesn’t Need Us.” “The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.”

Still, censoring the pursuit of knowledge has never really worked, in part because there are no parameters for society to decide who should have information and who should not. The opposite approach might give us better results: accelerate the development of technology and open it to more people and educate them to its purpose. Otherwise, if Carlson’s methamphetamine analogy proves accurate, power would flow directly into the hands of the people least likely to use it wisely.

For synthetic biology to accomplish any of its goals, we will also need an education system that encourages skepticism and the study of science. In 2007, students in Singapore, Japan, China, and Hong Kong (which was counted independently) all performed better on an international science exam than American students. The U.S. scores have remained essentially stagnant since 1995, the first year the exam was administered. Adults are even less scientifically literate. Early in 2009, the results of a California Academy of Sciences poll (conducted throughout the nation) revealed that only fifty-three per cent of American adults know how long it takes for the Earth to revolve around the sun, and a slightly larger number—fifty-nine per cent—are aware that dinosaurs and humans never lived at the same time.

Synthetic biologists will have to overcome this ignorance. Optimism prevails only when people are engaged and excited. Why should we bother? Not just to make E. coli smell like chewing gum or fish glow in vibrant colors. The planet is in danger, and nature needs help.

The hydrocarbons we burn for fuel are believed to be nothing more than concentrated sunlight that has been collected by leaves and trees. Organic matter rots, bacteria break it down, and it moves underground, where, after millions of years of pressure, it turns into oil and coal. At that point, we dig it up—at huge expense and with disastrous environmental consequences. Across the globe, on land and sea, we sink wells and lay pipe to ferry our energy to giant refineries. That has been the industrial model of development, and it worked for nearly two centuries. It won’t work any longer.

The industrial age is drawing to a close, eventually to be replaced by an era of biological engineering. That won’t happen easily (or quickly), and it will never solve every problem we expect it to solve. But what worked for artemisinin can work for many of the products our species will need to survive. “We are going to start doing the same thing that we do with our pets, with bacteria,” the genomic futurist Juan Enriquez has said, describing our transition from a world that relied on machines to one that relies on biology. “A house pet is a domesticated parasite,” he noted. “It is evolved to have an interaction with human beings. Same thing with corn”—a crop that didn’t exist until we created it. “Same thing is going to start happening with energy,” he went on. “We are going to start domesticating bacteria to process stuff inside enclosed reactors to produce energy in a far more clean and efficient manner. This is just the beginning stage of being able to program life.”

Saturday, August 30, 2014

Keep Thieves at Bay With a Motion-Activated Burglar Alarm

 


 

Top 10 DIY Projects That Cost Less Than $3

 


Your source for Renewable Energy

 

Solar panels and Co.

 


 


Solarpod products

 

Solar panels

 

Solarpod™ Buddy

 


Self-driving car given UK test run at Oxford University

 

By Dave Lee, Technology reporter, BBC News

A car that is able to drive itself on familiar routes has been shown off at an event at Oxford University.

The technology uses lasers and small cameras to memorise regular journeys like the commute or the school run.

The engineers and researchers behind the project are aiming to produce a low-cost system that "takes the strain" off drivers.

Other companies, such as Google, have also been testing driverless vehicle technology.

The search giant has pushed for law changes in California to allow its car to be tried out in real-life situations.

The Oxford RobotCar UK project is seeking to do the same in the UK, said Prof Paul Newman from Oxford University's department of engineering science.

"We're working with the Department of Transport to get some miles on the road in the UK," said Prof Newman, who is working alongside machine learning specialist Dr Ingmar Posner.

Gaining 'experiences'

Until the car can hit the streets, the team is testing it out in a specially-made environment at Begbroke Science Park in Oxfordshire.


Analysis

Richard Westcott, BBC transport correspondent, at Begbroke Science Park in Oxfordshire


Frankly, it is a bit disconcerting being driven around by a robotic chauffeur, but then I remember thinking the same thing when I first used cruise control on a motorway.

It's amazing how quickly you adjust to things. Within five minutes I'd got used to the wheel turning on its own, and I wasn't remotely concerned when someone walked out in front of us (it was a tightly controlled safety experiment before anyone emails in, and the car did stop in plenty of time).

Fully autonomous cars won't appear in showrooms overnight. But it seems inevitable we will be handing over more of the driving to computers as the years roll by, and this Oxford University system could well be the next step.

There are barriers of course. Makers will have to prove they are safe. Then they'll have to convince the public. And there's the sticky question of who's liable if there's a crash.

Still, most car crashes are down to the human at the wheel. Plenty of people believe robotic cars could save thousands of lives in the future.

"It's not like a racetrack - it's a light industrial site with roads and road markings," Prof Newman told the BBC.

"Crucial for us, it can show our navigation and control system working.

"It's not depending on GPS, digging up the roads or anything like that - it's just the vehicles knowing where they are because they recognise their surroundings."

The technology allows the car to "take over" when driving on routes it has already travelled.

"The key word for us is that the car gains 'experiences'," Prof Newman explained.

"The car is driven by a human, and it builds a 3D model of its environment."

When it goes on the same journey again, an iPad built into the dashboard gives a prompt to the driver - offering to let the computer "take the wheel".

"Touching the screen then switches to 'auto drive' where the robotic system takes over, Prof Newman added.

"At any time, a tap on the brake pedal will return control to the human driver."

Spinning lasers

At the moment, the complete system costs around £5,000 - but Prof Newman hopes that future models will bring the price of the technology down to as low as £100.

Autonomous technology is being tested by several car manufacturers and technology companies.

Simple self-driving tasks, such as cars that can park themselves, are already in use across the industry.

The Holy Grail is a fully autonomous vehicle that is location-aware, safe and affordable.


The iPad display tells the driver when the car is able to take over

Google has been testing its car for several years, with the company boasting of 300,000 computer-driven miles without an accident.

While at an earlier stage of development, Oxford University's car has key differences from Google's offering, Prof Newman said.

"Well if you look at it, we don't need a 3D laser spinning on the roof that's really expensive - so that's one thing straight away. I think our car has a lower profile."

He added: "Our approach is made possible because of advances in 3D laser mapping that enable an affordable car-based robotic system to rapidly build up a detailed picture of its surroundings.

"Because our cities don't change very quickly, robotic vehicles will know and look out for familiar structures as they pass by so that they can ask a human driver 'I know this route, do you want me to drive?'"

Prof Newman applauded Google's efforts in innovating in the space - but was buoyant about the role British expertise could have in the industry.

"This is all UK intellectual property, getting into the [driverless car] race.

"I would be astounded if we don't see this kind of technology in cars within 15 years. That is going to be huge."

 


The project on emerging nanotechnologies

 

Nanotechnology - Consumer products inventory

 


20 predictions for the next 25 years

 

From the web to wildlife, the economy to nanotechnology, politics to sport, the Observer's team of experts prophesy how the world will change – for good or bad – in the next quarter of a century


Customers crowd into a department store in Hangzhou, Zhejiang province. China will continue to rise in the coming decades. Photograph: Chinafotopress/Getty Images

1 Geopolitics: 'Rivals will take greater risks against the US'

No balance of power lasts forever. Just a century ago, London was the centre of the world. Britain bestrode the world like a colossus and only those with strong nerves (or weak judgment) dared challenge the Pax Britannica.

That, of course, is all history, but the Pax Americana that has taken shape since 1989 is just as vulnerable to historical change. In the 1910s, the rising power and wealth of Germany and America splintered the Pax Britannica; in the 2010s, east Asia will do the same to the Pax Americana.

The 21st century will see technological change on an astonishing scale. It may even transform what it means to be human. But in the short term – the next 20 years – the world will still be dominated by the doings of nation-states and the central issue will be the rise of the east.

By 2030, the world will be more complicated, divided between a broad American sphere of influence in Europe, the Middle East and south Asia, and a Chinese sphere in east Asia and Africa. Even within its own sphere, the US will face new challenges from former peripheries. The large, educated populations of Poland, Turkey, Brazil and their neighbours will come into their own and Russia will continue its revival.

Nevertheless, America will probably remain the world's major power. The critics who wrote off the US during the depression of the 1930s and the stagflation of the 1970s lived to see it bounce back to defeat the Nazis in the 1940s and the Soviets in the 1980s. America's financial problems will surely deepen through the 2010s, but the 2020s could bring another Roosevelt or Reagan.

A hundred years ago, as Britain's dominance eroded, rivals, particularly Germany, were emboldened to take ever-greater risks. The same will happen as American power erodes in the 2010s-20s. In 1999, for instance, Russia would never have dared attack a neighbour such as Georgia, but in 2008 it took just such a chance.

The danger of such an adventure sparking a great power war in the 2010s is probably low; in the 2020s, it will be much greater.

The most serious threats will arise in the vortex of instability that stretches from Africa to central Asia. Most of the world's poorest people live here; climate change is wreaking its worst damage here; nuclear weapons are proliferating fastest here; and even in 2030, the great powers will still seek much of their energy here.

Here, the risk of Sino-American conflict will be greatest and here the balance of power will be decided.

Ian Morris, professor of history at Stanford University and the author of Why the West Rules – For Now (Profile Books)

2 The UK economy: 'The popular revolt against bankers will become impossible to resist'


A view across the City at dusk. Photograph: James Brittain

It will be a second financial crisis in the 2010s – probably sooner than later – that will prove to be the remaking of Britain. Confronted by a second trillion-pound bank bailout in less than 10 years, it will be impossible for the City and wider banking system to resist reform. The popular revolt against bankers, their current business model in which neglect of the real economy is embedded and the scale of their bonuses – all to be underwritten by bailouts from taxpayers – will become irresistible. The consequent rebalancing of the British economy, already underway, will intensify. Britain, in thrall to finance since 1945, will break free – spearheading a second Industrial Revolution.

In 2035, there is thus a good prospect that Britain will be the most populous (our birth rate will be one of the highest in Europe), dynamic and richest European country, the key state in a reconfigured EU. Our leading universities will become powerhouses of innovation, world centres in exploiting the approaching avalanche of scientific and technological breakthroughs. A reformed financial system will allow British entrepreneurs to get the committed financial backing they need, becoming the capitalist leaders in Europe. And, after a century of trying, Britain will at last build itself a system for developing apprentices and technicians that is no longer the Cinderella of the education system.

It will not be plain sailing. Massive political turbulence in China and its conflict with the US will define part of the next 25 years – and there will be a period when the world trading and financial system retreats from openness.

How far beggar-my-neighbour competitive devaluations and protection will develop is hard to predict, but protectionist trends are there for all to see. Commodity prices will go much higher and there will be shortages of key minerals, energy, water and some basic foodstuffs.

The paradox is that this will be good news for Britain. It will force the state to re-engage with the economy and to build a matrix of institutions that will support innovation and investment, rather as it did between 1931 and 1950. New Labour began this process tremulously in its last year in office; the coalition government is following through. These will be lean years for the traditional Conservative right, but whether a liberal One Nation Tory party, ongoing coalition governments or the Labour party will be the political beneficiary is not yet clear.

The key point is that those 20 years in the middle of the 20th century witnessed great industrial creativity and an unsung economic renaissance until the country fell progressively under the stultifying grip of the City of London. My guess is that the same, against a similarly turbulent global background, is about to happen again. My caveat is if the City remains strong, in which case economic decline and social division will escalate.

Will Hutton, executive vice-chair of the Work Foundation and an Observer columnist

 

3 Global development: 'A vaccine will rid the world of Aids'

Within 25 years, the world will achieve many major successes in tackling the diseases of the poor.

Certainly, we will be polio-free and probably will have been for more than a decade. The fight to eradicate polio represents one of the greatest achievements in global health to date. It has mobilised millions of volunteers, staged mass immunisation campaigns and helped to strengthen the health systems of low-income countries. Today, we have eliminated 99% of the polio in the world and eradication is well within reach.

Vaccines that prevent diseases such as measles and rotavirus, currently available in rich countries, will also become affordable and readily available in developing countries. Since it was founded 10 years ago, the Gavi Alliance, a global partnership that funds expanded immunisation in poor countries, has helped prevent more than 5 million deaths. It is easy to imagine that in 25 years this work will have been expanded to save millions more lives by making life-saving vaccines available all over the world.

I also expect to see major strides in new areas. A rapid point-of-care diagnostic test – coupled with a faster-acting treatment regimen – will so fundamentally change the way we treat tuberculosis that we can begin planning an elimination campaign.

We will eradicate malaria, I believe, to the point where there are no human cases reported globally in 2035. We will also have effective means for preventing Aids infection, including a vaccine. With the encouraging results of the RV144 Aids vaccine trial in Thailand, we now know that an Aids vaccine is possible. We must build on these, and on promising results from other means of preventing HIV infection, to help rid the world of the threat of Aids.

Tachi Yamada, president of the global health programme at the Bill & Melinda Gates Foundation

 

4 Energy: 'Returning to a world that relies on muscle power is not an option'

Providing sufficient food, water and energy to allow everyone to lead decent lives is an enormous challenge. Energy is a means, not an end, but a necessary means. With 6.7 billion people on the planet, more than 50% living in large conurbations, and these numbers expected to rise to more than 9 billion and 80% later in the century, returning to a world that relies on human and animal muscle power is not an option.

The challenge is to provide sufficient energy while reducing reliance on fossil fuels, which today supply 80% of our energy (in decreasing order of importance, the rest comes from burning biomass and waste, hydro, nuclear and, finally, other renewables, which together contribute less than 1%). Reducing use of fossil fuels is necessary both to avoid serious climate change and in anticipation of a time when scarcity makes them prohibitively expensive.

It will be extremely difficult. An International Energy Agency scenario that assumes the implementation of all agreed national policies and announced commitments to save energy and reduce the use of fossil fuels projects a 35% increase in energy consumption in the next 25 years, with fossil fuels up 24%. This is almost entirely due to consumption in developing countries where living standards are, happily, rising and the population is increasing rapidly.

This scenario, which assumes major increases in nuclear, hydro and wind power, evidently does not go far enough and will break down if, as many expect, oil production (which is assumed to increase 15%) peaks in much less than 25 years. We need to go much further in reducing demand – through better design, changes in lifestyle and greater efficiency – and in improving and deploying all viable alternative energy sources. It won't be cheap. And in the post-fossil-fuel era it won't be sufficient without major contributions from solar energy (which will require cost reductions and improved energy storage and transmission), from nuclear fission (meaning fast-breeder and/or thorium reactors once uranium becomes scarce), or from fusion (enormously attractive in principle, but unlikely to be a reliable source of energy until at least the middle of the century).

Disappointingly, with the present rate of investment in developing and deploying new energy sources, the world will still be powered mainly by fossil fuels in 25 years and will not be prepared to do without them.

Chris Llewellyn Smith, former director general of Cern and chair of Iter, the world fusion project; he works on energy issues at Oxford University

5 Advertising: 'All sorts of things will just be sold in plain packages'

Advertising in Tokyo. Photograph: Mike Long/Alamy

If I'd been writing this five years ago, it would have been all about technology: the internet, the fragmentation of media, mobile phones, social tools allowing consumers to regain power at the expense of corporations, all that sort of stuff. And all these things are important and will change how advertising works.

But it's becoming clear that what'll really change advertising will be how we relate to it and what we're prepared to let it do. After all, when you look at advertising from the past, the basic techniques haven't changed; what seems startlingly alien are the attitudes it was acceptable to portray and the products you were allowed to advertise.

In 25 years, I bet there'll be many products we'll be allowed to buy but not see advertised – the things the government will decide we shouldn't be consuming because of their impact on healthcare costs or the environment, but that it can't muster the political will to ban outright. So, we'll end up with all sorts of products in plain packaging with the product name in a generic typeface – as the government is currently discussing for cigarettes.

But it won't stop there. We'll also be nudged into renegotiating the relationship between society and advertising, because over the next few years we're going to be interrupted by advertising like never before. Video screens are getting so cheap and disposable that they'll be plastered everywhere we go. And they'll have enough intelligence and connectivity that they'll see our faces, do a quick search on Facebook to find out who we are and direct a message at us based on our purchasing history.
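To make that scenario concrete, here is a minimal, self-contained sketch of the face-to-targeted-ad loop just described. Every name in it (the toy face database, detect_faces, pick_ad) is hypothetical, invented for illustration – no real Facebook or computer-vision API is being used.

```python
# Toy sketch of a screen that recognises a face, looks up a profile,
# and serves an ad based on purchasing history. All data is fake.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    purchases: list[str]

# Stand-in for a social-network profile store keyed by face identifier.
FACE_DB = {"face-001": Person("Alice", ["running shoes", "energy bars"])}

def detect_faces(frame: bytes) -> list[str]:
    """Pretend detector: returns identifiers of faces found in a camera frame."""
    return ["face-001"]

def pick_ad(person: Person) -> str:
    """Choose a message from purchasing history (here, simply the last item)."""
    return f"{person.name}, new deals on {person.purchases[-1]}!"

def serve_ads(frame: bytes) -> list[str]:
    ads = []
    for face_id in detect_faces(frame):
        person = FACE_DB.get(face_id)              # match face to a profile
        ads.append(pick_ad(person) if person else "Generic ad")
    return ads

print(serve_ads(b"\x00"))  # -> ['Alice, new deals on energy bars!']
```

Even as a toy, it makes the point below concrete: every step in the loop – detection, lookup, targeting – is an intrusion that depends on people's consent.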

At least, that'll be the idea. It probably won't work very well and when it does work it'll probably drive us mad. Marketing geniuses are working on this stuff right now, but not all of them recognise that being allowed to do this kind of thing depends on societal consent – push the intrusion too far and people will push back.

Society once did a deal accepting advertising because it seemed occasionally useful and interesting and because it paid for lots of journalism and entertainment. It's not necessarily going to pay for those things for much longer so we might start questioning whether we want to live in a Blade Runner world brought to us by Cillit Bang.

Russell Davies, head of planning at the advertising agency Ogilvy and Mather and a columnist for the magazines Campaign and Wired

 

6 Neuroscience: 'We'll be able to plug information streams directly into the cortex'

By 2030, we are likely to have developed no-frills brain-machine interfaces, allowing the paralysed to dance in their thought-controlled exoskeleton suits. I sincerely hope we will not still be interfacing with computers via keyboards, one forlorn letter at a time.

I'd like to imagine we'll have robots to do our bidding. But I predicted that 20 years ago, when I was a sanguine boy leaving Star Wars, and the smartest robot we have now is the Roomba vacuum cleaner. So I won't be surprised if I'm wrong in another 25 years. Artificial intelligence has proved itself an unexpectedly difficult problem.

Maybe we will understand what's happening when we immerse our heads into the colourful night blender of dreams. We will have cracked the secret of human memory by realising that it was never about storing things, but about the relationships between things.

Will we have reached the singularity – the point at which computers surpass human intelligence and perhaps give us our comeuppance? We'll probably be able to plug information streams directly into the cortex for those who want it badly enough to risk the surgery. There will be smart drugs to enhance learning and memory and a flourishing black market among ambitious students to obtain them.

Having laid to rest the nature-nurture dichotomy by then, we will have a molecular understanding of the way in which cultural narratives work their way into brain tissue and of individual susceptibility to those stories.

Then there's the mystery of consciousness. Will we finally have a framework that allows us to translate the mechanical pieces and parts into private, subjective experience? As it stands now, we don't even know what such a framework could look like ("carry the two here and that equals the experience of tasting cinnamon").

That line of research will lead us to confront the question of whether we can reproduce consciousness by replicating the exact structure of the brain – say, with zeros and ones, or beer cans and tennis balls. If this theory of materialism turns out to be correct, then we will be well on our way to downloading our brains into computers, allowing us to live forever in The Matrix.

But if materialism is incorrect, that would be equally interesting: perhaps brains are more like radios that receive an as-yet-undiscovered force. The one thing we can be sure of is this: no matter how wacky the predictions we make today, they will look tame in the strange light of the future.

David Eagleman, neuroscientist and writer

 

7 Physics: 'Within a decade, we'll know what dark matter is'

The next 25 years will see fundamental advances in our understanding of the underlying structure of matter and of the universe. At the moment, we have successful descriptions of both, but we have open questions. For example, why do particles of matter have mass and what is the dark matter that provides most of the matter in the universe?

I am optimistic that the answer to the mass question will be found within a few years, whether or not it is the mythical Higgs boson, and believe that the answer to the dark matter question will be found within a decade.

Key roles in answering these questions will be played by experiments at Cern's Large Hadron Collider, which started operations in earnest last year and is expected to run for most of the next 20 years; others will be played by astrophysical searches for dark matter and cosmological observations such as those from the European Space Agency's Planck satellite.

Many theoretical proposals for answering these questions invoke new principles in physics, such as the existence of additional dimensions of space or a "supersymmetry" between the constituents of matter and the forces between them, and we will discover whether these ideas are useful for physics. Both these ideas play roles in string theory, the best guess we have for a complete theory of all the fundamental forces including gravity.

Will string theory be pinned down within 20 years? My crystal ball is cloudy on this point, but I am sure that we physicists will have an exciting time trying to find out.

John Ellis, theoretical physicist at Cern and King's College London

8 Food: 'Russia will become a global food superpower'

A woman works on the production line of a poultry processing factory in Stary Oskol, central Russia. Photograph: Sasha Mordovets/Getty Images

When experts talk about the coming food security crisis, the date they fixate upon is 2030. By then, our numbers will be nudging 9 billion and we will need to be producing 50% more food than we are now.

By the middle of that decade, therefore, we will either all be starving, and fighting wars over resources, or our global food supply will have changed radically. The bitter reality is that it will probably be a mixture of both.

Developed countries such as the UK are likely, for the most part, to have attempted to pull up the drawbridge, increasing national production and reducing our reliance on imports.

In response to increasing prices, some of us may well have reduced our consumption of meat, the raising of which is a notoriously inefficient use of grain. This will probably create a food underclass, surviving on a carb- and fat-heavy diet, while those with money scarf the protein.

The developing world, meanwhile, will work to bridge the food gap by embracing the promise of biotechnology, which the middle classes of the developed world will have assumed they had the luxury to reject.

In truth, any of the imported grain that we do consume will come from genetically modified crops. As climate change lays waste to the productive fields of southern Europe and north Africa, more water-efficient strains of corn, wheat and barley will be pressed into service; likewise, to the north, Russia will become a global food superpower as the same climate change opens up the vast, once-frozen Siberian prairie to food production.

The consensus now is that the planet does have the wherewithal to feed that huge number of people. It's just that some people in the west may find the methods used to do so unappetising.

Jay Rayner, TV presenter and the Observer's food critic

9 Nanotechnology: 'Privacy will be a quaint obsession'

Twenty years ago, Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product for virtually nothing; to have computers so powerful that they will surpass human intelligence; and to lead to a new kind of medicine on a sub-cellular level that will allow us to abolish ageing and death.

I don't think that Kurzweil's "technological singularity" – a dream of scientific transcendence that echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and "the rapture of the nerds". But nanotechnology will lead to some genuinely transformative applications.

New ways of making solar cells very cheaply and on a very large scale offer the best hope we have of providing low-carbon energy at a scale big enough to satisfy the needs of a growing world population aspiring to the prosperity we're used to in the developed world.

We'll learn more about intervening in our biology at the sub-cellular level and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, such as Alzheimer's, that will increasingly afflict our population as it ages.

The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another 25 years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines and unashamedly focused on applications rather than fundamental understanding.

Given the huge resources being directed towards nanotechnology in China and its neighbours, this may also be the first major technology of the modern era that is predominantly developed outside the US and Europe.

Richard Jones, pro-vice-chancellor for research and innovation at the University of Sheffield

10 Gaming: 'We'll play games to solve problems'

In the last decade, in the US and Europe but particularly in south-east Asia, we have witnessed a flight into virtual worlds, with people playing games such as Second Life. But over the course of the next 25 years, that flight will be successfully reversed, not because we're going to spend less time playing games, but because games and virtual worlds are going to become more closely connected to reality.

There will be games where the action is influenced by what happens in reality; and there will be games that use sensors so that we can play them out in the real world. Imagine a game in which your avatar is your dog, which wears a game collar that measures how fast it's running and whether or not it's wagging its tail; you play with your real dog to advance the narrative, rather than with a virtual character. I can imagine more physical-activity games, too, and these might be used to harness energy – peripherals such as a dance pad that captures energy from your dancing on top of it.
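As a thought experiment only, here is a toy sketch of the kind of sensor-driven game loop described above. The collar readings, thresholds and story events are all invented for illustration; no real hardware or game API is assumed.

```python
import random

def read_collar() -> dict:
    """Stand-in for a wireless game collar: speed (m/s) and tail-wagging."""
    return {"speed": random.uniform(0.0, 8.0), "wagging": random.random() > 0.5}

def advance_story(state: dict, reading: dict) -> str:
    """Map the dog's physical activity onto narrative progress for its avatar."""
    if reading["wagging"] and reading["speed"] > 5.0:
        state["chapter"] += 1
        return f"Your avatar sprints ahead - chapter {state['chapter']} unlocked!"
    if reading["wagging"]:
        return "Your avatar trots along happily."
    return "Your avatar sniffs around, waiting for some action."

state = {"chapter": 1}
for _ in range(3):  # three sensor readings' worth of play
    print(advance_story(state, read_collar()))
```

The design point is simply that real-world sensor data, not keyboard input, drives the game state – which is what distinguishes these games from a flight into a purely virtual world.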

Then there will be problem-solving games: there are already a lot of games in which scientists try to teach gamers real science – how to build proteins to cure cancer, for example. One surprising trend is that gamers today prefer co-operative games to competitive games by, on average, three to one. Now, this is really interesting; if you think about the history of games, there really weren't co-operative games until this latest generation of video games. In every game you can think of – card games, chess, sport – everybody plays to win. But now we'll see increasing collaboration, people playing games together to solve problems while they're enjoying themselves.

There are also studies on how games work on our minds and our cognitive capabilities, and a lot of science suggests you can use games to treat depression, anxiety and attention-deficit disorder. Making games that are both fun and serve a social purpose isn't easy – a lot of innovation will be required – but gaming will become increasingly integrated into society.

Jane McGonigal, director of games research & development at the Institute for the Future in California and author of Reality Is Broken: Why Games Make Us Happy and How They Can Help Us Change the World (Penguin)

11 Web/internet: 'Quantum computing is the future'

The open web created by idealist geeks, hippies and academics, who believed in the free and generative flow of knowledge, is being overrun by a web that is safer, more controlled and commercial, created by problem-solving pragmatists.

Henry Ford worked out how to make money by making products people wanted to own and buy for themselves. Mark Zuckerberg and Steve Jobs are working out how to make money from allowing people to share, on their terms.

Facebook and Apple are spawning cloud capitalism, in which consumers allow companies to manage information, media, ideas, money, software, tools and preferences on their behalf, holding everything in vast, floating clouds of shared data. We will be invited to trade invasions into our privacy – companies knowing ever more about our lives – for a more personalised service. We will be able to share, but on their terms.

The movement that has been ignited by Julian Assange and WikiLeaks is the most radical version of the alternative: a free, egalitarian, open and public web. The fate of this movement will be a sign of things to come. If it can command broad support, then the open web has a chance to remain a mainstream force. If, however, it becomes little more than a guerrilla campaign, then the open web could be pushed to the margins, along with national public radio.

By 2035, the web, as a single space largely made up of webpages accessed on computers, will be long gone.

As the web goes mobile, those who pay more will get faster access. We will be sharing videos, simulations, experiences and environments, on a multiplicity of devices to which we'll pay as much attention as a light switch.

Yet, many of the big changes of the next 25 years will come from unknowns working in their bedrooms and garages. And by 2035 we will be talking about the coming of quantum computing, which will take us beyond the world of binary, digital computing, on and off, black and white, 0s and 1s.
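A minimal sketch of what "beyond 0s and 1s" means: a single qubit can be placed in a superposition of both classical states at once, which is the starting point of quantum computing. This toy example, assuming nothing beyond numpy, applies a Hadamard gate to the |0> state and reads off the resulting measurement probabilities.

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)               # the classical-style state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gate that creates superposition

qubit = hadamard @ zero              # now an equal mix of |0> and |1>
probabilities = np.abs(qubit) ** 2   # Born rule: chances of measuring 0 or 1

print(qubit)          # [0.70710678+0.j 0.70710678+0.j]
print(probabilities)  # [0.5 0.5] - a 50/50 chance of reading 0 or 1
```

Unlike a classical bit, the qubit carries both amplitudes until it is measured; that parallelism is what "beyond binary" gestures at.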

The small town of Waterloo, Ontario, which is home to the Perimeter Institute, funded by the founder of BlackBerry, currently houses the largest collection of theoretical physicists in the world.

The bedrooms of Waterloo are where the next web may well be made.

Charles Leadbeater, author and social entrepreneur

12 Fashion: 'Technology creates smarter clothes'

A model on the catwalk during the Gareth Pugh show at London Fashion Week in 2008. Photograph: Leon Neal/AFP/Getty Images

Fashion is such an important part of the way in which we communicate our identity to others, and for a very long time it's meant dress: the textile garments on our body. But in the coming decades, I think there'll be much more emphasis on other manifestations of fashion and different ways of communicating with each other, different ways of creating a sense of belonging and of making us feel great about ourselves.

We're already designing our identities online – manipulating imagery to tell a story about ourselves. Instead of meeting in the street or in a bar and having a conversation and looking at what each other is wearing, we're communicating in some depth through these new channels. With clothing, I think it's possible that we'll see a polarisation between items that are very practical and those that are very much about display – and maybe these are not things that you own but that you borrow or share.

Technology is already being used to create clothing that fits better and is smarter; it is able to transmit a degree of information back to you. This is partly driven by customer demand and the desire to know where clothing comes from – so we'll see tags on garments that tell you where every part of it was made, and some of this, I suspect, will be legislation-driven, too, for similar reasons, particularly as resources become scarcer and it becomes increasingly important to recognise water and carbon footprints.

However, it's not simply an issue of functionality. Fashion has gone through a big cycle in the last 25 years – from being something that was treasured and cherished to something that felt disposable, because of a drop in prices. In fact, we've completely changed our relationship with clothes, and there's a real feeling among the designers I work with that they're trying to work an element of emotional content back into their designs.

I think there's definitely a place for technology in creating a dialogue with you through your clothes.

Dilys Williams, designer and the director for sustainable fashion at the London College of Fashion

13 Nature: 'We'll redefine the wild'

We all want to live in a world where species such as tigers, the great whales, orchids and coral reefs can persist and thrive, and I am sure that the commitment that people have to maintaining the spectacle and diversity of life will continue. Over the past 50 years or so, there has been growing support for nature conservation. When we understand the causes of species losses, good conservation actions can and do reverse the trends.

But it is going to become much harder. The human population has roughly doubled since the 1960s and will increase by another third by 2030. Demands for food, water and energy will increase, inevitably in competition with other species. People already use up to 40% of the world's primary production (energy) and this must increase, with important consequences for nature.

In the UK, some familiar species will become scarcer as our rare habitats (mires, bogs and moorlands) are lost. We will see the effects of gradual warming, which will allow more continental species to live here, and in our towns and cities we'll probably have more species that have adapted to living alongside people.

We can conserve species when we really try, so I'm confident that the charismatic megafauna and flora will mostly still persist in 2035, but they will be increasingly restricted to highly managed and protected areas. The survivors will be those that cope well with people and those we care about enough to save. Increasingly, we won't be living as a part of nature but alongside it, and we'll have redefined what we mean by the wild and wilderness.

Crucially, we are still rapidly losing overall biodiversity, including soil micro-organisms, plankton in the oceans, pollinators and the remaining tropical and temperate forests. These underpin productive soils, clean water, climate regulation and disease-resistance. We take these vital services from biodiversity and ecosystems for granted, treat them recklessly and don't include them in any kind of national accounting.

Georgina Mace, professor of conservation science and director of the Natural Environment Research Council's Centre for Population Biology, Imperial College London

14 Architecture: What constitutes a 'city' will change

In 2035, most of humanity will live in favelas. This will not be entirely wonderful, as many people will live in very poor housing, but it will have its good side. It will mean that cities will consist of series of small units organised, at best, by the people who know what is best for themselves and, at worst, by local crime bosses.

Cities will be too big and complex for any single power to understand and manage them. They already are, in fact. The word "city" will lose some of its meaning: it will make less and less sense to describe agglomerations of tens of millions of people as if they were one place, with one identity. If current dreams of urban agriculture come true, the distinction between town and country will blur. Attempts at control won't be abandoned, however, meaning that strange bubbles of luxury will appear, like shopping malls and office parks. To be optimistic, the human genius for inventing social structures will mean that new forms of settlement we can't quite imagine will begin to emerge.

All this assumes that environmental catastrophe doesn't drive us into caves. Nor does it describe what will happen in Britain, with a roughly stable population and a planning policy dedicated to preserving the status quo as much as possible. Britain in 25 years' time may look much as it does now, which is not hugely different from 25 years ago.

Rowan Moore, Observer architecture correspondent

15 Sport: 'Broadcasts will use holograms'

Globalisation in sport will continue: it's a trend already visible in the choice of Rio for the 2016 Olympics and Qatar for the 2022 World Cup. This will mean changes to traditional sporting calendars in recognition of the demands of climate and time zones across the planet.

Sport will have to respond to new technologies, the speed at which we process information and apparent reductions in attention span. Shorter formats, such as Twenty20 cricket and rugby sevens, could aid the development of traditional sports in new territories.

The demands of TV will grow, as will technology's role in umpiring and consuming sport. Electronics companies are already planning broadcasts using live holograms. I don't think we'll see an acceptance of performance-enhancing drugs: the trend has been towards zero tolerance and long may it remain so.

Mike Lee, chairman of Vero Communications and ex-director of communications for London's 2012 Olympic bid

16 Transport: 'There will be more automated cars'

It's not difficult to predict how our transport infrastructure will look in 25 years' time – it can take decades to construct a high-speed rail line or a motorway, so we know now what's in store. But there will be radical changes in how we think about transport. The technology of information and communication networks is changing rapidly, and internet and mobile developments are helping make our journeys more seamless. Queues at St Pancras station or Heathrow airport, when the infrastructure can't cope for whatever reason, should become a thing of the past. These challenges might appear trivial, but they are significant, because organising large-scale information systems is not easy.

The instinct to travel is innate within us, but we will have to do it in a more carbon-efficient way. It's hard to be precise, but I think we'll be cycling and walking more; in crowded urban areas we may see travelators – which we see in airports already – and more scooters. There will be more automated cars, like the ones Google has recently been testing. These driverless cars will be safer, but when accidents do happen, they may be on the scale of airline disasters. Personal jetpacks will, I think, remain a niche choice.

Frank Kelly, professor of the mathematics of systems at Cambridge University, and former chief scientific adviser to the DfT

17 Health: 'We'll feel less healthy'

An overweight woman in Maryland, USA. Photograph: Tim Sloan/AFP/Getty Images

Health systems are generally quite conservative. That's why the more radical forecasts of the recent past haven't quite materialised. Contrary to past predictions, we don't carry smart cards packed with health data; most treatments aren't genetically tailored; and health tourism to Bangalore remains low. But for all that, health is set to undergo a slow but steady revolution. Life expectancy is rising about three months each year, but we'll feel less healthy, partly because we'll be more aware of the many things that are, or could be, going wrong, and partly because more of us will be living with a long-term condition.

We'll spend more on health but also want stronger action to influence health. The US Congressional Budget Office forecasts that US health spending will rise from 17% of the economy today to 25% in 2025 and 49% in 2082. Its forecasts may be designed to shock, but they contain an important grain of truth: spending on health, and jobs in health, is bound to grow.

Some of that spending will go on the problems of prosperity – obesity, alcohol consumption and injuries from extreme sports. Currently fashionable ideas of "nudge" will have turned out to be far too weak to change behaviours. Instead, we'll be more in the realms of "shove" and "push", with cities trying to reshape whole environments to encourage people to walk and cycle.

By 2030, mental health may at last be treated on a par with physical health. Medicine may have found smart drugs for some conditions but the biggest impact may be achieved from lower-tech actions, such as meditation in schools or brain gyms for pensioners.

Healthcare will look more like education. Your GP will prescribe you a short course on managing your diabetes or heart condition, and when you get home there'll be an e-tutor to help you and a vast array of information about your condition.

Almost every serious observer of health systems believes that the great general hospitals are already anachronistic, but because hospitals are where so much of the power lies, and so much of the public attachment, it would be a brave forecaster who suggested their imminent demise.

Geoff Mulgan, chief executive of the Young Foundation

18 Religion: 'Secularists will flatter to deceive'

Over the next two and a half decades, it is quite possible that those Brits who follow a religion will continue both to fall in number and to become more orthodox or fundamentalist. Similarly, organised religions will increasingly work together to counter what they see as greater threats to their interests – creeping agnosticism and secularity.

Another 10 years of failure by the Anglican church to face down the African-led traditionalists over women bishops and gay clerics could open the question of disestablishment of the Church of England. The country's politicians, including an increasingly gay-friendly Tory party, may find it difficult to see how state institutions can continue to be associated with an image of sexism and homophobia.

I predict an increase in debate around the tension between a secular agenda which says it is merely seeking to remove religious privilege, end discrimination and separate church and state, and organised orthodox religion which counterclaims that this would amount to driving religious voices from the public square.

Despite two of the three party leaders being professed atheists, the secular tendency in this country still flatters to deceive. There is, at present, no organised, non-religious, rationalist movement. In contrast, the forces of organised religion are better resourced, more organised and more politically influential than ever before.

Dr Evan Harris, author of a secularist manifesto

19 Theatre: 'Cuts could force a new political fringe'

The theatre will weather the recent cuts. Some companies will close and the repertoire of others will be safe and cautious; the art form will emerge robust in a decade or so. The cuts may force more young people outside the existing structures back to an unsubsidised fringe and this may breed different types of work that will challenge the subsidised sector.

Student marches will become more frequent and this mobilisation may breed a more politicised generation of theatre artists. We will see old forms from the 1960s re-emerge (like agitprop) and new forms will be generated to communicate ideology and politics.

More women will emerge as directors, writers and producers. This change is already visible at the flagship subsidised house, the National Theatre, where the repertoire for bigger theatres like the Lyttelton already includes directors like Marianne Elliott and Josie Rourke, and soon the Cottesloe will start to embrace the younger generation – Polly Findlay and Lyndsey Turner.

Katie Mitchell, theatre director

20 Storytelling: 'Eventually there'll be a Twitter classic'

Are you reading fewer books? I am and reading books is sort of my job. It's just that with the multifarious delights of the internet, spending 20 hours in the company of one writer and one story needs motivation. It's worth doing, of course; like exercise, its benefits are many and its pleasures great. And yet everyone I know is doing it less. And I can't see that that trend will reverse.

That's the bad news. Twenty-five years from now, we'll be reading fewer books for pleasure. But authors shouldn't fret too much; e-readers will make it easier to impulse-buy books at 4am even if we never read past the first 100 pages.

And stories aren't becoming less popular – they're everywhere, from adverts to webcomics to fictional tweets. We're only beginning to explore the exciting possibilities of web-native literature: stories that really exploit the fractal, hypertextual way we use the internet.

My guess is that, in 2035, stories will be ubiquitous. There'll be a tube-based soap opera to tune your iPod to during your commute, a tale (incorporating on-sale brands) to enjoy via augmented reality in the supermarket. Your employer will bribe you with stories to focus on your job.

Most won't be great, but then most of everything isn't great – and eventually there'll be a Twitter-based classic.

Naomi Alderman, novelist and games writer
