Friday, May 2, 2014

Stimulated mutual annihilation: How to make a gamma-ray laser with positronium

 

May 1, 2014

Joint Quantum Institute

Theorists expect that positronium, a sort of 'atom' consisting of an electron and an anti-electron, can be used to make a powerful gamma-ray laser. Scientists now report detailed calculations of the dynamics of a positronium BEC. This work is the first to account for effects of collisions between different positronium species. These collisions put important constraints on gamma-ray laser operation.

 

Schematic of the stimulated annihilation process in the positronium gamma-ray laser. The time sequence of frames, running top to bottom, suggests how some "seed photons" from spontaneous annihilation of a few Ps atoms will stimulate subsequent Ps annihilations, resulting in a pulse of 511 keV gamma rays.

Twenty years ago, Philip Platzman and Allen Mills, Jr. at Bell Laboratories proposed that a gamma-ray laser could be made from a Bose-Einstein condensate (BEC) of positronium, the simplest atom made of both matter and antimatter (1). That was a year before a BEC of any kind of atom was available in any laboratory. Today, BECs have been made of 13 different elements, four of which are available in laboratories of the Joint Quantum Institute (JQI) (2), and JQI theorists have turned their attention to prospects for a positronium gamma-ray laser.

In a study published this week in Physical Review A (3), they report detailed calculations of the dynamics of a positronium BEC. This work is the first to account for effects of collisions between different positronium species. These collisions put important constraints on gamma-ray laser operation.

The World's Favorite Antimatter

Discovered in 1933, antimatter is a deep, pervasive feature of the world of elementary particles, and it has a growing number of applications. For example, about 2 million positron emission tomography (PET) medical imaging scans are performed in the USA each year. PET employs the positron, the antiparticle of the electron, which is emitted by radioactive isotopes that can be attached to biologically active molecules targeting specific sites in the body. When a positron is emitted, it quickly binds to an electron in the surrounding medium, forming a positronium atom (denoted Ps).

Within a microsecond, the Ps atom will spontaneously self-annihilate at a random time, turning all of its mass into pure energy as described by Einstein's famous equation, E = mc². This energy usually comes in the form of two gamma rays, each with an energy of 511 kiloelectronvolts (keV), a highly penetrating form of radiation to which the human body is largely transparent. Platzman and Mills' gamma-ray laser proposal involves generating coherent emission of these 511 keV photons by persuading a large number of Ps atoms to commit suicide at the same time, thus generating an intense gamma-ray pulse.
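
As a quick check of that 511 keV figure (using standard physical constants, not numbers from the article), each photon in a two-photon annihilation carries the rest energy of one electron:

```latex
\[
E_\gamma = m_e c^2
         = (9.109\times10^{-31}\,\mathrm{kg})\,(2.998\times10^{8}\,\mathrm{m/s})^2
         \approx 8.19\times10^{-14}\,\mathrm{J}
         \approx 511\,\mathrm{keV}
\]
```

The two photons together carry the full rest energy of the pair, 2·m_e·c² ≈ 1.022 MeV, when the Ps atom annihilates at rest.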

The Simplest Matter-Antimatter Atom

Ps lives less than a microsecond after it is formed, but that is long enough for it to demonstrate the distinctive properties of an atom. It is bound by the electric attraction between positron and electron, just as the hydrogen atom is bound by the attraction between the proton and the electron. Scientists have worked out the spatial distribution of electron and positron density in Ps, as given by the solution of the Schroedinger equation. That distribution has a sharp central cusp where the electron and positron meet and annihilate; the electron density there is about four times the average density of the conduction electrons in copper wire.

Stimulated Annihilation

The electron and positron each has an intrinsic spin of ½ (in units of the reduced Planck constant). Thus, according to quantum mechanics, Ps can have a spin of 0 or 1. This turns out to be a critical element of the gamma-ray laser scheme. The 511 keV photons are only emitted by the spin-0 states of Ps, and this takes place within about 0.1 nanosecond after a spin-0 state is formed. The spin-1 states, on the other hand, last for about 0.1 microsecond, and only decay by emission of three gamma rays (for reasons of symmetry).
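
In the standard angular-momentum bookkeeping (textbook values, not results specific to the new paper), the two spin-½ particles combine as

```latex
\[
\tfrac{1}{2} \otimes \tfrac{1}{2} \;=\; 0 \;\oplus\; 1
\]
```

The spin-0 singlet is known as para-positronium (two-photon decay, lifetime about 0.125 nanosecond) and the spin-1 triplet as ortho-positronium (three-photon decay, lifetime about 142 nanoseconds), consistent with the 0.1-nanosecond and 0.1-microsecond figures quoted above.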

The result is a process, stimulated annihilation, analogous to the stimulated emission at the heart of ordinary laser operation. When a pulsed beam of positrons is directed into a material, a random assortment of Ps states is created; the spin-0 states annihilate almost immediately, while the spin-1 states survive for roughly a hundred nanoseconds. During that interval, the spin-1 states serve as an energy storage medium for the gamma-ray laser: if they are switched into spin-0 states, they become the active gain medium that generates the fast pulse of 511 keV gamma rays. Part of the JQI theorists' work involves modelling the most likely switch for this process, a pulse of far-infrared radiation. They find several switching sequences that approach the optimal condition: converting all spin-1 states to spin-0 states in a time short compared with the annihilation lifetime.
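
A toy kinetic model helps make the "storage medium plus switch" picture concrete. The sketch below is not the JQI spinor-condensate calculation; it is a minimal rate-equation caricature with an assumed, hypothetical switching rate, just to show how converting spin-1 atoms into short-lived spin-0 atoms produces a burst of 511 keV photons.

```python
# Toy rate-equation model of "switch, then annihilate" (illustrative only;
# the switching rate is an assumption, not a value from the paper).
tau0 = 0.125e-9          # spin-0 (para-Ps) lifetime in seconds, textbook value (~0.1 ns)
switch_rate = 5e10       # assumed far-infrared-driven spin-1 -> spin-0 rate, per second
dt, t_end = 1e-12, 2e-9  # time step and total simulated time, seconds

n1, n0 = 1.0, 0.0        # fractional populations: spin-1 reservoir, spin-0 gain medium
peak_rate, t = 0.0, 0.0
while t < t_end:
    switched = switch_rate * n1 * dt      # spin-1 atoms converted in this step
    decayed = (n0 / tau0) * dt            # spin-0 atoms that annihilate in this step
    n1 -= switched
    n0 += switched - decayed
    peak_rate = max(peak_rate, 2 * decayed / dt)   # two 511 keV photons per decay
    t += dt

print(f"peak photon emission rate ~ {peak_rate:.2e} photons/s per Ps atom")
```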

Positronium Sweet Spot

Platzman and Mills pointed out that the Bose-Einstein condensate is a form of "enabling technology" for the Ps gamma-ray laser. This is because its low temperature and high phase-space density make coherent stimulated emission possible: in an ordinary thermal gas of Ps, the Doppler shifts of the atoms would suppress lasing action. This introduces another degree of complexity, which is explored in detail for the first time by the JQI team. A Ps BEC will only form when a threshold density of Ps is attained. That density depends upon the temperature of the Ps, but it is likely to be in the range of 10¹⁸ Ps atoms per cubic centimeter, which is about 3% of the density of ordinary air. At that density, collisions between Ps atoms occur frequently, and state-changing collisions are of particular concern. On the one hand, two spin-1 Ps atoms can collide to form two spin-0 atoms; this process limits the density of the energy storage medium. On the other hand, two spin-0 Ps atoms can collide to form two spin-1 atoms; this process limits the density of the active gain medium. Using first-principles quantum theory, the JQI team has explored the time evolution of a Ps BEC containing various mixtures of spin-0 and spin-1 Ps atoms, and has found that there is a critical density of Ps, above which collision processes quickly destroy the internal coherence of the gas.
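
For a feel of the temperatures involved at such densities, the ideal-gas Bose-Einstein condensation formula can be evaluated with the positronium mass (twice the electron mass). This is only a rough, non-interacting estimate, not the paper's calculation:

```python
# Ideal Bose gas estimate of the BEC transition temperature for positronium.
# Rough sketch only: interactions, finite lifetime, and confinement all shift
# the real threshold, and the JQI treatment is far more detailed.
import math

hbar = 1.054571817e-34    # J*s
kB   = 1.380649e-23       # J/K
m_ps = 2 * 9.1093837e-31  # positronium mass = electron + positron, kg
zeta_3_2 = 2.6123753      # Riemann zeta(3/2)

def bec_critical_temperature(n_per_cm3):
    """T_c = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))^(2/3) for an ideal Bose gas."""
    n = n_per_cm3 * 1e6   # convert cm^-3 to m^-3
    return (2 * math.pi * hbar**2 / (m_ps * kB)) * (n / zeta_3_2) ** (2.0 / 3.0)

print(f"T_c at 1e18 Ps/cm^3: about {bec_critical_temperature(1e18):.0f} K")  # roughly 15 K
```

In other words, at 10¹⁸ atoms per cubic centimeter the gas would have to be cooled to roughly the ten-kelvin scale before condensation sets in.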

The main conclusion of the JQI work is that the critical density is greater than the threshold density, so that there is a "sweet spot" for further development of a Ps gamma-ray laser. Dr. David B. Cassidy, a positronium experimentalist at University College London, who was not one of the authors of the new paper, puts it this way:

"The idea to try and make a Ps BEC, and from this an annihilation laser, has been around for a long time, but nobody has really thought about the details of how a dense Ps BEC would actually behave, until now. This work neatly shows that the simple expectation that increasing the Ps density in a BEC would increase the amount of stimulated annihilation is wrong! Although we are some years away from trying to do this experimentally, when we do eventually get there the calculations in this paper will certainly help us to design a better experiment."

Notes:

(1) "Possibilities for Bose condensation of positronium," P. M. Platzman and A.P. Mills, Jr., Physical Review B vol. 49, p. 454 (1994)

(2) The Joint Quantum Institute is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.

Nanoelectronics: Edgy look at 2-D molybdenum disulfide

 

A new SHG imaging technique allows rapid and all-optical determination of the crystal orientations of 2D semiconductor membranes at a large scale, providing the knowledge needed to use these materials in nanoelectronic devices.

The drive to develop ultrasmall and ultrafast electronic devices using a single atomic layer of semiconductors, such as transition metal dichalcogenides, has received a significant boost. Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) have recorded the first observations of a strong nonlinear optical resonance along the edges of a single layer of molybdenum disulfide. The existence of these edge states is key to the use of molybdenum disulfide in nanoelectronics, as well as to its use as a catalyst for the hydrogen evolution reaction in fuel cells, in desulfurization, and in other chemical reactions.

"We observed strong nonlinear optical resonances at the edges of a two-dimensional crystal of molybdenum disulfide" says Xiang Zhang, a faculty scientist with Berkeley Lab's Materials Sciences Division who led this study. "These one-dimensional edge states are the result of electronic structure changes and may enable novel nanoelectronics and photonic devices. These edges have also long been suspected to be the active sites for the electrocatalytic hydrogen evolution reaction in energy applications. We also discovered extraordinary second harmonic light generation properties that may be used for the in situ monitoring of electronic changes and chemical reactions that occur at the one-dimensional atomic edges."

Zhang, who also holds the Ernest S. Kuh Endowed Chair at the University of California (UC) Berkeley and directs the National Science Foundation's Nano-scale Science and Engineering Center, is the corresponding author of a paper in Science describing this research. The paper is titled "Edge Nonlinear Optics on a MoS2 Atomic Monolayer." Co-authors are Xiaobo Yin, Ziliang Ye, Daniel Chenet, Yu Ye, Kevin O'Brien and James Hone.

Emerging two-dimensional semiconductors are prized in the electronics industry for their superior energy efficiency and capacity to carry much higher current densities than silicon. Only a single molecule thick, they are well-suited for integrated optoelectronic devices. Until recently, graphene has been the unchallenged superstar of 2D materials, but today there is considerable attention focused on 2D semiconducting crystals that consist of a single layer of transition metal atoms, such as molybdenum, tungsten or niobium, sandwiched between two layers of chalcogen atoms, such as sulfur or selenium. Featuring the same flat hexagonal "honeycombed" structure as graphene and many of the same electrical advantages, these transition metal dichalcogenides, unlike graphene, have direct energy bandgaps. This facilitates their application in transistors and other electronic devices, particularly light-emitting diodes.

Full realization of the vast potential of transition metal dichalcogenides will only come with a better understanding of the domain orientations of their crystal structures that give rise to their exceptional properties. Until now, however, experimental imaging of these three-atom-thick structures and their edges has been limited to scanning tunneling microscopy and transmission electron microscopy, technologies that are often difficult to use. Nonlinear optics at the crystal edges and boundaries enabled Zhang and his collaborators to develop a new imaging technique based on second-harmonic generation (SHG) light emissions that can easily capture the crystal structures and grain orientations with an optical microscope.

"Our nonlinear optical imaging technique is a non-invasive, fast, easy metrologic approach to the study of 2D atomic materials," says Xiaobo Yin, the lead author of the Science paper and a former member of Zhang's research group who is now on the faculty at the University of Colorado, Boulder. "We don't need to prepare the sample on any special substrate or vacuum environment, and the measurement won't perturb the sample during the imaging process. This advantage allows for in-situ measurements under many practical conditions. Furthermore, our imaging technique is an ultrafast measurement that can provide critical dynamic information, and its instrumentation is far less complicated and less expensive compared with scanning tunneling microscopy and transmission electron microscopy."

For the SHG imaging of molybdenum disulfide, Zhang and his collaborators illuminated sample membranes that are only three atoms thick with ultrafast pulses of infrared light. The nonlinear optical properties of the samples yielded a strong SHG response in the form of visible light that is both tunable and coherent. The resulting SHG-generated images enabled the researchers to detect "structural discontinuities" or edges along the 2D crystals only a few atoms wide where the translational symmetry of the crystal was broken.

"By analyzing the polarized components of the SHG signals, we were able to map the crystal orientation of the molybdenum disulfide atomic membrane," says Ziliang Ye, the co-lead author of the paper and current member of Zhang's research group. "This allowed us to capture a complete map of the crystal grain structures, color-coded according to crystal orientation. We now have a real-time, non-invasive tool that allows us explore the structural, optical, and electronic properties of 2D atomic layers of transition metal dichalcogenides over a large area."

This research was supported by the DOE Office of Science through the Energy Frontier Research Center program, and by the U.S. Air Force Office of Scientific Research Multidisciplinary University Research Initiative.

Smart Wind and Solar Power

 


Big data and artificial intelligence are producing ultra-accurate forecasts that will make it feasible to integrate much more renewable energy into the grid.

Breakthrough

Ultra-accurate forecasting of wind and solar power.

Why It Matters

Dealing with the intermittency of renewable energy will be crucial for its expansion.

Key Players
  • Xcel Energy
  • GE Power
  • National Center for Atmospheric Research

Wind power is booming on the open plains of eastern Colorado. Travel seven miles north of the town of Limon on Highway 71 and then head east on County Road 3p, a swath of dusty gravel running alongside new power lines: within minutes you’ll be surrounded by towering wind turbines in rows stretching for miles. Three large wind farms have been built in the area since 2011. A new one is going up this year.

Every few seconds, almost every one of the hundreds of turbines records the wind speed and its own power output. Every five minutes they dispatch data to high-performance computers 100 miles away at the National Center for Atmospheric Research (NCAR) in Boulder. There artificial-intelligence-based software crunches the numbers, along with data from weather satellites, weather stations, and other wind farms in the state. The result: wind power forecasts of unprecedented accuracy that are making it possible for Colorado to use far more renewable energy, at lower cost, than utilities ever thought possible.

The amount of wind power has more than doubled since 2009.

The forecasts are helping power companies deal with one of the biggest challenges of wind power: its intermittency. Using small amounts of wind power is no problem for utilities. They are accustomed to dealing with variability—after all, demand for electricity changes from season to season, even from minute to minute. However, a utility that wants to use a lot of wind power needs backup power to protect against a sudden loss of wind. These backup plants, which typically burn fossil fuels, are expensive and dirty. But with more accurate forecasts, utilities can cut the amount of power that needs to be held in reserve, minimizing their role.

Before the forecasts were developed, Xcel Energy, which supplies much of Colorado’s power, ran ads opposing a proposal that it use renewable sources for a modest 10 percent of its power. It mailed flyers to its customers claiming that such a mandate would increase electricity costs by as much as $1.5 billion over 20 years.

But thanks in large part to the improved forecasts, Xcel, one of the country’s largest utilities, has made an about-face.

It has installed more wind power than any other U.S. utility and supports a mandate for utilities to get 30 percent of their energy from renewable sources, saying it can easily handle much more than that.

Solar power generation lags wind power production by about a decade.

An early version of NCAR’s forecasting system was released in 2009, but last year was a breakthrough year—accuracy improved significantly, and the forecasts saved Xcel nearly as much money as they had in the three previous years combined. This year NCAR is testing a similar forecasting system for solar power.

Mining these detailed forecasts to develop a more flexible and efficient electricity system could make it much cheaper to hit ambitious international goals for reducing carbon emissions, says Bryan Hannegan, director of a $135 million facility at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, that uses supercomputer simulations to develop ways to scale up renewable power. “We’ve got a line of sight to where we want to go in the long term with our energy and environment goals,” he says. “That’s not something we’ve been able to say before.”

Chasing the Wind

No one is more aware of the challenges of integrating wind power into the grid than Dayton Jones, a power plant dispatcher for Xcel Energy. From his perch on the 10th floor of the Xcel building in downtown Denver, he’s responsible for keeping the lights on in Colorado. Doing so requires matching power production to electricity demand by turning power plants on and off and controlling their output. Generating too much or too little power can damage electrical appliances or even plunge the grid into a blackout. Wind power, with its sharp fluctuations, makes his job harder.

Running backup fossil-fuel plants means “throwing carbon up into the sky”: “It costs money, and it’s bad for the environment.”

A few years ago, dispatchers like Jones couldn’t trust forecasts of how much wind power would be available to the grid at a given time. Those forecasts were typically off by 20 percent, and sometimes wind power completely failed to materialize when predicted. The solution was to have fossil-fuel plants idling, ready to replace all of that wind power in a few minutes. This approach is expensive, and the more the system is intended to rely on wind power, the more expensive it gets. What’s more, running the backup fossil-fuel plants means you’re “throwing carbon up into the sky,” says William Mahoney, deputy director of the Research Applications Laboratory at NCAR. “It costs money, and it’s bad for the environment.”

Actual power output (green line) is overlaid on a three-day wind power forecast (red line). The larger the yellow shaded area, the more uncertain the forecast.

NCAR’s forecasts give Jones enough confidence in wind power to shut down many of the idling backup plants. The number varies depending on the certainty of the forecast. If the weather is cold and wet and there’s a chance ice could form on wind turbines and slow them down or stop them from spinning, he might need enough fossil-fuel backup to completely replace his wind power.

But on nice days with steady, abundant wind, he might shut down all his fast-response backup plants, even those normally reserved for responding to changes in demand. Under such circumstances, Jones can use the wind farms themselves to ensure that power supply matches demand: the output of a wind turbine can be changed almost instantly by angling the blades so they capture more or less wind. Computers at Xcel’s building in Denver tell wind farms how much power to produce, and automated controls coördinate hundreds of turbines, changing output minute by minute if needed.

Xcel’s original forecasts used data from just one or two weather stations per wind farm. Now NCAR collects information from nearly every wind turbine. The data feeds into a high-resolution weather model and is combined with the output from five additional wind forecasts. Using historical data, NCAR’s software learns which forecasts are best for each wind farm and assigns different weights to each accordingly. The resulting über-forecast is more accurate than any of the original ones. Then, using data about how much power each turbine in the field will generate in response to different wind speeds, NCAR tells Xcel how much power to expect, in 15-minute increments, for up to seven days.
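
The blending step can be pictured with a simple skill-weighted average: give each contributing forecast a weight inversely proportional to its historical error at that wind farm. The sketch below illustrates that general idea only; NCAR's actual system is far more sophisticated.

```python
# Skill-weighted blending of several wind power forecasts (illustration only).
import numpy as np

def blend_forecasts(past_forecasts, past_actuals, new_forecasts):
    """past_forecasts: (n_models, n_hours) array; new_forecasts: (n_models,)."""
    mse = np.mean((past_forecasts - past_actuals) ** 2, axis=1) + 1e-9
    weights = (1.0 / mse) / np.sum(1.0 / mse)     # better past skill -> bigger weight
    return float(np.dot(weights, new_forecasts))

# Hypothetical numbers: three models' past hourly forecasts (MW) vs. what happened.
past = np.array([[95.0, 110.0, 80.0, 130.0],
                 [100.0, 118.0, 85.0, 122.0],
                 [70.0, 140.0, 60.0, 150.0]])
actual = np.array([98.0, 115.0, 83.0, 125.0])
print(blend_forecasts(past, actual, np.array([105.0, 102.0, 90.0])))  # weighted blend, MW
```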

Forecasting solar power is next for NCAR and Xcel, but that can be even trickier than wind. For one thing, Xcel doesn’t get information about how much power private rooftop solar panels are generating, so it doesn’t know how much of that power it could lose when clouds roll in. NCAR’s new solar forecasts will use data from satellites, sky imagers, pollution monitors, and publicly owned solar panels to infer how much solar power is being generated and then predict how that amount will change.

Virtual Energy

How might extremely accurate wind and solar forecasts help us use enough renewable energy to reach climate goals of significantly reducing carbon dioxide emissions? Researchers at NREL’s new Energy Systems Integration Facility start by looking at how well wind and solar power can offset each other. To what extent, for example, can wind blowing at night make up for the lack of sunshine? But they are also looking at how to couple forecasts with smart dishwashers, water heaters, solar-panel inverters, water treatment plants, and electric-car chargers, not only to accommodate shifts in the wind but to ride out inevitable windless periods and weeks of cloudy weather without resorting to fossil fuels.

The red line—the result of subtracting wind power supply (blue) from demand (black)—shows the amount of power Xcel needs to generate with its fossil-fuel plants. The lighter lines are forecasts.

Take the example of electric cars. A car stores enough electricity to power a house for anywhere from half a day to several days, depending on the size of the battery pack. And it has sophisticated power electronics that can control the timing and vary the rate of charging, which could offer a way to match fluctuating wind power to electricity demand. With small modifications, the cars’ batteries can deliver stored power to a home and to the power grid. There aren’t many electric cars now, but that could easily change in the decades it will take before renewable energy makes up more than 30 or 40 percent of the electricity supply (wind supplies 4 percent now, and solar less than 1 percent).
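
The "half a day to several days" claim is easy to sanity-check with assumed round numbers for 2014-era battery packs and an average household draw of about 1.2 kilowatts:

```python
# Rough check of how long an EV battery could run a house (assumed numbers).
avg_house_load_kw = 1.2   # assumed average continuous household demand, kW
for name, kwh in [("small pack (24 kWh)", 24), ("large pack (85 kWh)", 85)]:
    hours = kwh / avg_house_load_kw
    print(f"{name}: about {hours:.0f} hours, or {hours / 24:.1f} days")
# small pack: ~20 hours; large pack: ~71 hours -- roughly half a day to three days
```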

At NREL, researchers can plug 30 electric cars into docks that let them interface with power-grid simulations on a supercomputer, to project what would happen if thousands of cars were connected to the grid. The idea is that electric cars might store power from solar panels and use it to power neighborhoods when electricity demand peaks in the evening, and then recharge their batteries using wind power in the early morning hours.

Forecasts like the ones being developed at NCAR will be “absolutely critical,” says Bri-Mathias Hodge, a senior research engineer at NREL. They will help determine when the cars’ batteries should charge to maximize the electricity they make available to the grid without leaving drivers short of the power they need.

Even before that becomes a reality, though, forecasts from NCAR are already having a big effect. Last year, on a windy weekend when power demand was low, Xcel set a record: during one hour, 60 percent of its electricity for Colorado was coming from the wind. “That kind of wind penetration would have given dispatchers a heart attack a few years ago,” says Drake Bartlett, who heads renewable-energy integration for Xcel. Back then, he notes, they wouldn’t have known whether they might suddenly lose all that power. “Now we’re taking it in stride,” he says. “And that record is going to fall.”

Kevin Bullis


Agile Robots

 


Computer scientists have created machines that have the balance and agility to walk and run across rough and uneven terrain, making them far more useful in navigating human environments.

Breakthrough

Legged machines that stride over uneven or unsteady terrain.

Why It Matters

Much of the world is inaccessible to wheeled machines but not legged ones.

Key Players
  • Boston Dynamics
  • Schaft
  • Honda

Walking is an extraordinary feat of biomechanical engineering. Every step requires balance and the ability to adapt to instability in a split second. It requires quickly adjusting where your foot will land and calculating how much force to apply to change direction suddenly. No wonder, then, that until now robots have not been very good at it.

Meet Atlas, a humanoid robot created by Boston Dynamics, a company that Google acquired in December 2013. It can walk across rough terrain and even run on flat ground. Although previous robots such as Honda’s ASIMO and Sony’s diminutive QRIO are able to walk, they cannot quickly adjust their balance; as a result, they are often awkward, and limited in practical value. Atlas, which has an exceptional sense of balance and can stabilize itself with ease, demonstrates the abilities that robots will need to move around human environments safely and easily.

Robots that walk properly could eventually find far greater use in emergency rescue operations. They could also play a role in routine jobs such as helping elderly or physically disabled people with chores and daily tasks in the home.

Marc Raibert, cofounder of Boston Dynamics, pioneered machines with “dynamic balance”—the use of continual motion to stay upright—in the early 1980s. As a professor at Carnegie Mellon University, he built a one-legged robot that leaped around his lab like a pogo stick possessed, calculating with each jump how to reposition its leg and its body, and how aggressively to push itself off the ground with its next bound. Atlas demonstrates dynamic balance as well, using high-powered hydraulics to move its body in a way that keeps it steady. The robot can walk across an unsteady pile of debris, walk briskly on a treadmill, and stay balanced on one leg when whacked with a 20-pound wrecking ball. Just as you instinctively catch yourself when pushed, shifting your weight and repositioning your legs to keep from falling over, Atlas can sense its own instability and respond quickly enough to right itself. The possibilities opened up by its humanlike mobility surely impressed Google. Though it’s not clear why the company is acquiring robotics businesses, it bought seven others last year, including ones specializing in vision and manipulation.
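
The flavor of that balance calculation can be sketched with the classic foot-placement heuristic Raibert described for his hopping machines: place the foot near the "neutral point" for the current speed, offset by a gain times the speed error. The gains and numbers below are illustrative assumptions, not Atlas's controller.

```python
# Minimal sketch of a Raibert-style foot-placement rule for dynamic balance.
def foot_placement(speed, desired_speed, stance_time, k_speed=0.05):
    """Forward foot position (meters, relative to the hip) for the next step.

    The first term is the neutral point: half the distance the body travels
    during stance. Placing the foot ahead of it decelerates the body;
    placing it behind accelerates the body.
    """
    neutral = speed * stance_time / 2.0
    return neutral + k_speed * (speed - desired_speed)

# Example: running at 1.5 m/s but wanting 1.0 m/s, with a 0.17 s stance phase.
print(foot_placement(1.5, 1.0, 0.17))   # foot lands slightly ahead of neutral, to slow down
```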

Atlas isn’t ready to take on home or office chores: its powerful diesel engine is external and noisy, and its titanium limbs thrash around dangerously. But the robot could perform repair work in environments too dangerous for emergency workers to enter, such as the control room of a nuclear power plant on the brink of a meltdown. “If your goals are to make something that’s the equivalent of a person, we have a ways to go,” Raibert says. But as it gets up and running, Atlas won’t be a bad example to chase after.

Will Knight


Oculus Rift

 


Thirty years after virtual-reality goggles and immersive virtual worlds made their debut, the technology finally seems poised for widespread use.

Breakthrough

High-quality virtual-reality hardware that is cheap enough for the consumer market.

Why It Matters

Visually immersive interfaces will lead to new forms of entertainment and communications.

Key Players
  • Oculus VR
  • Vuzix
  • Nvidia

Palmer Luckey had not been born when The Lawnmower Man was released in 1992, but the movie, with its vision of computer-generated sensory immersion, helped seed his interest in virtual reality as soon as he saw it. He dreamed of playing video games in simulated 3-D worlds—a dream that led him to amass one of the world’s largest collections of head-mounted displays and, eventually, inspired him to attempt to make his own. With no formal engineering training, Luckey designed his first working prototype in his garage at the age of 16.

Today, the 21-year-old is the founder of Oculus VR, a company that is on the verge of releasing the Rift, an affordable virtual-reality headset for playing ultra-immersive video games. Facebook bought the company for $2 billion this spring.

Oculus VR had already attracted more than $91 million in venture funding, a near-fanatical following, and team members like the game programmer John Carmack, who led the development of influential video games such as Doom, Quake, and Rage. But the Facebook deal is a sign of faith that virtual reality is now sharp enough and cheap enough to have huge potential for more than video games. The idea of merging immersive virtual reality with social communications is intriguing. It could also be a compelling tool for teleconferencing, online shopping, or more passive forms of entertainment. Some filmmakers are, in fact, already experimenting with movies designed just for the Rift.

Virtual-reality headsets could be found in some arcades when The Lawnmower Man was in the theaters. But the technology wasn’t good enough to catch on widely. This time around, Luckey realized that cheap smartphone components could be combined to stunning effect, rendering bright, crisp worlds much more compelling than the blocky graphics often seen through earlier virtual-reality headsets.

When you use the Rift, you feel as though you’re actually inside these worlds. The technology follows the movement of your head in real time; lean in to take a better look at a virtual flower or look to the skies to gaze at a virtual cloud, and your mind is drawn into the simulation. You can almost believe you are fully there.
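
Head tracking of this sort generally comes from inertial sensors: a gyroscope gives fast but drifting rotation rates, while an accelerometer senses gravity and anchors the long-term tilt. A generic complementary filter, sketched below, blends the two; this illustrates the principle only and is not Oculus's tracking code.

```python
# Generic complementary filter for head-tilt tracking (illustration only).
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Trust the integrated gyro in the short term, the accelerometer in the long term."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate one second of a head pitching down at 30 deg/s with a biased gyro.
pitch, dt = 0.0, 0.001
for step in range(1000):
    true_pitch = 30.0 * (step + 1) * dt   # tilt the accelerometer would report
    gyro_rate = 30.0 + 0.5                # gyro reads high; alone it would drift
    pitch = complementary_filter(pitch, gyro_rate, true_pitch, dt)
print(pitch)   # close to 30 degrees despite the gyro bias
```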

The vast audience for home video games appears hungry for the device. In August 2012, Oculus VR set out to raise $250,000 on Kickstarter and met the goal in a matter of hours. It surpassed $1 million within two days.

Luckey started shipping a version of the Rift for software developers in March 2013 for just $300, and in the past year, the hardware has improved significantly. The retail version, which is expected to launch later this year or early next, will offer resolution higher than 1,920 by 1,080 pixels per eye. Such stunningly sharp definition has only recently become possible at such a low price.

While video games are where this improved virtual-reality technology is likely to take off first, it could also have applications in telepresence, architecture, computer-aided design, emergency response training, and phobia therapy.

Indeed, in some niches, older VR technology has been in use for years. Some surgeons routinely practice operations using VR simulations, while some industrial designers use the technology to view their designs as if they had already been constructed. But 30 years ago, when Jaron Lanier founded VPL Research, the first company to sell virtual-reality goggles, such products were too expensive for the consumer mainstream (a single head-mounted display cost as much as $100,000).

There were other reasons, too, that earlier versions of virtual reality failed commercially. Players of Nintendo’s Virtual Boy, a low-end VR game system launched in the mid-1990s, complained of nausea after extended play. For other players, the keen sense of wonder and presence they felt inside a virtual world soon dissipated. “Your first time playing a game in a virtual world is incredible,” Lanier says, “but the 20th time is wearying.”

Things may be different now. Though some testers have experienced nausea using the Oculus Rift, the company says the latest version has almost eliminated this problem. And today’s virtual environments offer so much more fidelity that they could remain captivating for much longer. Artists have been able to create a more stimulating range of worlds, from the rigorously realistic to the more abstract and painterly.

Already Oculus has inspired imitators. Acknowledging the Rift as an inspiration, Sony has demonstrated a VR headset that players will be able to use with the PlayStation 4. Sony is also working with NASA to create a virtual-reality simulation of Mars using images pulled from the Mars Rover. A more mundane but potentially useful application that Sony is exploring would let travelers visit virtual hotel rooms before booking the real thing. Assuming they ever want to take the headsets off.

Simon Parkin


Mobile Collaboration

 

The smartphone era is finally getting the productivity software it needs.

Breakthrough

Services that make it fruitful to create and edit documents on mobile devices.

Why It Matters

Much of today’s office work is done outside an office.

Key Players
  • Quickoffice
  • Dropbox
  • Microsoft
  • Google
  • CloudOn

One afternoon last fall, David Levine took the subway from his office in lower Manhattan to a meeting at Rockefeller Center in midtown. The 35-year-old CIO of the startup investment firm Artivest was working on a blog post with colleagues and with freelancers in Boston and Crete. Levine used a new app called Quip to type the post on his iPhone, his wireless connection waxing and waning as the F train clattered through the tunnels. Quip let the team make changes, add comments, and chat via text, all presented in a Facebook-style news feed. Whenever Levine’s connection returned, the app synchronized his contributions with everyone else’s, so they all were working on the same version.

Had they been working with a traditional word-processing program, the process would probably have been a drawn-out round-robin of e-mail messages, proliferating attachments, and manual collation of disparate contributions. Instead, “by the time I got out of the subway, the post was done,” Levine recalls, “and by the time I got out of the meeting, it was on the website.”

It has taken a while for the software that helps people get work done to catch up with the fact that many people are increasingly working on tablets and phones. Now new apps are making it easier to create and edit documents on the go. Meanwhile, cloud-based file storage services, including Box, Dropbox, Google Drive, and Microsoft’s OneDrive—which have plunged in cost and soared in usage—help keep the results in sync even as multiple users work on the same file simultaneously. Some cloud services do this by separating what look to users like unified files into separate entries—paragraphs, words, even individual characters—in easily manipulated databases. That lets them smoothly track and merge changes made by different people at different times.
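
One way to picture that per-entry bookkeeping is a document stored as separate paragraph records, each carrying its own last-modified timestamp, so two copies can be merged entry by entry instead of as one opaque file. The sketch below uses a simple last-writer-wins rule per paragraph; real services use far more sophisticated merging (operational transforms, CRDTs), so treat this as an illustration only.

```python
# Merge two copies of a document stored as per-paragraph records (sketch).
from dataclasses import dataclass

@dataclass
class Paragraph:
    text: str
    modified_at: float   # timestamp of the most recent edit
    author: str

def merge(local: dict, remote: dict) -> dict:
    """Per-paragraph last-writer-wins merge."""
    merged = dict(local)
    for pid, para in remote.items():
        if pid not in merged or para.modified_at > merged[pid].modified_at:
            merged[pid] = para
    return merged

on_phone = {"p1": Paragraph("Draft intro", 100.0, "david"),
            "p2": Paragraph("Body text", 101.0, "david")}
in_cloud = {"p1": Paragraph("Edited intro", 105.0, "freelancer"),
            "p3": Paragraph("New closing", 106.0, "colleague")}
print(merge(on_phone, in_cloud))   # keeps p2, takes the newer p1, adds p3
```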

But the most interesting new mobile collaboration services don’t just replicate the software we’re accustomed to using on desktop computers. They also highlight an aspect of group work that received scant attention in the days when coworkers gathered together in offices: the communication that is part and parcel of collaboration. That back-and-forth can have as much value as the content itself. It can keep the team on track, inform participants who join the process late, and spark new ideas.

In traditional word-processing software, much of that conversation gets lost in “notes,” comments, or e-mail. But new document-editing apps capture the stream of collaborative communication and put it on equal footing with the nominal output of the process. Box’s document-collaboration service Box Notes displays avatar icons along the left-hand margin to show who contributed what; CloudOn, a mobile editor for Microsoft Office documents, gives prime placement to both conversations (comments, messages) and tasks (editing, approvals, permissions); and Quip displays a running text-message thread.

“It’s like you walked over to someone’s desk and said, ‘Read this and let me know if you have any questions,’” says Bret Taylor, Quip’s founder and CEO, who was formerly CTO at Facebook. “It’s a very personal, intimate experience that has been lost since the days of e-mail.”

By incorporating streams of messages about the work being created, these apps reflect the fact that many communications are now brief, informal, and rapid. “Most younger people rely on short-form mobile messaging and use e-mail only for more formal communications,” Taylor points out.

For Levine, who has been known to fire off a blog post before getting out of bed in the morning (much to his wife’s dismay), this mobile way of working is far more consonant with the way he lives—striving to squeeze every last iota of productivity out of each moment. “It allows me to accomplish what I need to do without interrupting my flow,” he says. Even when he’s in a subway tunnel.

Ted Greenwald

 


Microscale 3-D Printing

 

Inks made from different types of materials, precisely applied, are greatly expanding the kinds of things that can be printed.

Breakthrough

3-D printing that uses multiple materials to create objects such as biological tissue with blood vessels.

Why It Matters

Making biological materials with desired functions could lead to artificial organs and novel cyborg parts.

Key Players

To show off its ability to do multimaterial 3-D printing, Lewis’s lab has printed a complex lattice using different inks.

Despite the excitement that 3-D printing has generated, its capabilities remain rather limited. It can be used to make complex shapes, but most commonly only out of plastics. Even manufacturers using an advanced version of the technology known as additive manufacturing typically have expanded the material palette only to a few types of metal alloys. But what if 3-D printers could use a wide assortment of different materials, from living cells to semiconductors, mixing and matching the “inks” with precision?

Jennifer Lewis, a materials scientist at Harvard University, is developing the chemistry and machines to make that possible. She prints intricately shaped objects from “the ground up,” precisely adding materials that are useful for their mechanical properties, electrical conductivity, or optical traits. This means 3-D printing technology could make objects that sense and respond to their environment. “Integrating form and function,” she says, “is the next big thing that needs to happen in 3-D printing.”

Left: For the demonstration, the group formulated four polymer inks, each dyed a different color.
Right: The different inks are placed in standard print heads.
Bottom: By sequentially and precisely depositing the inks in a process guided by the group’s software, the printer quickly produces the colorful lattice.

A group at Princeton University has printed a bionic ear, combining biological tissue and electronics (see “Cyborg Parts”), while a team of researchers at the University of Cambridge has printed retinal cells to form complex eye tissue. But even among these impressive efforts to extend the possibilities of 3-D printing, Lewis’s lab stands out for the range of materials and types of objects it can print.

Last year, Lewis and her students showed they could print the microscopic electrodes and other components needed for tiny lithium-ion batteries (see “Printing Batteries”). Other projects include printed sensors fabricated on plastic patches that athletes could one day wear to detect concussions and measure violent impacts. Most recently, her group printed biological tissue interwoven with a complex network of blood vessels. To do this, the researchers had to make inks out of various types of cells and the materials that form the matrix supporting them. The work addresses one of the lingering challenges in creating artificial organs for drug testing or, someday, for use as replacement parts: how to create a vascular system to keep the cells alive.

Top: Inks made of silver nanoparticles are used to print electrodes as small as a few micrometers. 
Bottom: As in the other 3-D printing processes, the operation is controlled and monitored by computers.

Left: Jennifer Lewis’s goal is to print complex architectures that integrate form and function.
Right: A glove with strain sensors is made by printing electronics into a stretchable elastomer.

In a basement lab a few hundred yards from Lewis’s office, her group has jury-rigged a 3-D printer, equipped with a microscope, that can precisely print structures with features as small as one micrometer (a human red blood cell is around 10 micrometers in diameter). Another, larger 3-D printer, using printing nozzles with multiple outlets to print multiple inks simultaneously, can fabricate a meter-sized sample with a desired microstructure in minutes.

The secret to Lewis’s creations lies in inks with properties that allow them to be printed during the same fabrication process. Each ink is a different material, but they all can be printed at room temperature. The various types of materials present different challenges; cells, for example, are delicate and easily destroyed as they are forced through the printing nozzle. In all cases, though, the inks must be formulated to flow out of the nozzle under pressure but retain their form once in place—think of toothpaste, Lewis says.
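
The "toothpaste" behavior has a standard name in rheology: a yield-stress, shear-thinning fluid. One common constitutive model for such inks is the Herschel-Bulkley relation shown below; the article does not say which model Lewis's group actually uses, so this is context rather than a description of their formulations.

```latex
\[
\tau \;=\; \tau_y \;+\; K\,\dot{\gamma}^{\,n}, \qquad n < 1
\]
```

The ink flows only where the applied stress τ exceeds the yield stress τ_y, as it does inside the pressurized nozzle, and stiffens again once deposited, where the stress falls back below τ_y.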

Left: The ­largest printer in Lewis’s lab makes objects up to a meter by a meter.
Top: For such jobs, the printer uses a 64- or 128-­nozzle array to speed up the process.
Bottom: A test sample with a layered microstructure was printed in minutes using wax ink.

Before coming to Harvard from the University of Illinois at Urbana-Champaign last year, Lewis had spent more than a decade developing 3-D printing techniques using ceramics, metal nanoparticles, polymers, and other nonbiological materials. When she set up her new lab at Harvard and began working with biological cells and tissues for the first time, she hoped to treat them the same way as materials composed of synthetic particles. That idea might have been a bit naïve, she now acknowledges. Printing blood vessels was an encouraging step toward artificial tissues capable of the complex biological functions found in organs. But working with the cells turns out to be “really complex,” she says. “And there’s a lot more that we need to do before we can print a fully functional liver or kidney. But we’ve taken the first step.”

David Rotman

 


The Experiment

 

By Christina Larson

Until recently, Kunming, capital of China’s southwestern Yunnan province, was known mostly for its palm trees, its blue skies, its laid-back vibe, and a steady stream of foreign backpackers bound for nearby mountains and scenic gorges. But Kunming’s reputation as a provincial backwater is rapidly changing. On a plot of land on the outskirts of the city—wilderness 10 years ago, and today home to a genomic research facility—scientists have performed a provocative experiment. They have created a pair of macaque monkeys with precise genetic mutations.

Last November, the female monkey twins, Mingming and Lingling, were born here on the sprawling research campus of Kunming Biomedical International and its affiliated Yunnan Key Laboratory of Primate Biomedical Research. The macaques had been conceived via in vitro fertilization. Then scientists used a new method of DNA engineering known as CRISPR to modify the fertilized eggs by editing three different genes, and the modified embryos were implanted into a surrogate macaque mother. The twins’ healthy birth marked the first time that CRISPR has been used to make targeted genetic modifications in primates—potentially heralding a new era of biomedicine in which complex diseases can be modeled and studied in monkeys.

CRISPR, which was developed by researchers at the University of California, Berkeley, Harvard, MIT, and elsewhere over the last several years, is already transforming how scientists think about genetic engineering, because it allows them to make changes to the genome precisely and relatively easily (see “Genome Surgery,” March/April). The goal of the experiment at Kunming is to confirm that the technology can create primates with multiple mutations, explains Weizhi Ji, one of the architects of the experiment.

Ji began his career at the government-affiliated Kunming Institute of Zoology in 1982, focusing on primate reproduction. China was “a very poor country” back then, he recalls. “We did not have enough funding for research. We just did very simple work, such as studying how to improve primate nutrition.” China’s science ambitions have since changed dramatically. The campus in Kunming boasts extensive housing for monkeys: 75 covered homes, sheltering more than 4,000 primates—many of them energetically swinging on hanging ladders and scampering up and down wire mesh walls. Sixty trained animal keepers in blue scrubs tend to them full time.

The lab where the experiment was performed includes microinjection systems, which are microscopes pointed at a petri dish and two precision needles, controlled by levers and dials. These are used both for injecting sperm into eggs and for the gene editing, which uses “guide” RNAs that direct a DNA-cutting enzyme to genes. When I visited, a young lab technician was intently focused on twisting dials to line up sperm with an egg. Injecting each sperm takes only a few seconds. About nine hours later, when an embryo is still in the one-cell stage, a technician will use the same machine to inject it with the CRISPR molecular components; again, the procedure takes just a few seconds.
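
For readers unfamiliar with the guide-RNA step: in the widely used SpCas9 system, a guide targets a roughly 20-nucleotide stretch of DNA that sits immediately upstream of an "NGG" motif (the PAM). The sketch below scans a sequence for such candidate sites; it is a generic illustration, not the Kunming group's guide design, and the example sequence is made up.

```python
# Find candidate SpCas9 guide sites (20-nt protospacer + NGG PAM) on the forward strand.
import re

def find_guide_sites(dna: str, guide_len: int = 20):
    """Return (position, protospacer, PAM) tuples; overlapping sites are allowed."""
    pattern = r"(?=([ACGT]{%d})([ACGT]GG))" % guide_len
    return [(m.start(), m.group(1), m.group(2))
            for m in re.finditer(pattern, dna.upper())]

example = "TTGACCTGAAGCTTGGATCCATGGTACCGAGCTCGGATCCACTAGTAACGGCCGCCAGTGTGCTGG"
for pos, guide, pam in find_guide_sites(example):
    print(pos, guide, pam)
```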

During my visit in late February, the twin macaques were still only a few months old and lived in incubators, monitored closely by lab staff. Indeed, Ji and his coworkers plan to continue to closely watch the monkeys to detect any consequences of the pioneering genetic modifications.

 

The Impact

By Amanda Schaffer

The new genome-editing tool called CRISPR, which researchers in China used to genetically modify monkeys, is a precise and relatively easy way to alter DNA at specific locations on chromosomes. In early 2013, U.S. scientists showed it could be used to genetically engineer any type of animal cells, including human ones, in a petri dish. But the Chinese researchers were the first to demonstrate that this approach can be used in primates to create offspring with specific genetic alterations.

“The idea that we can modify primates easily with this technology is powerful,” says Jennifer Doudna, a professor of molecular and cell biology at the University of California, Berkeley, and a developer of CRISPR. The creation of primates with intentional gene alterations could lead to powerful new ways to study complex human diseases. It also poses new ethical dilemmas. From a technical perspective, the Chinese primate research suggests that scientists could probably alter fertilized human eggs with CRISPR; if monkeys are any guide, such eggs could grow to be genetically modified babies. But “whether that would be a good idea is a much harder question,” says Doudna.

The prospect of designer babies remains remote and far from the minds of most researchers developing CRISPR. Far more imminent are the potential opportunities to create animals with mutations linked to human disorders. Experimenting with primates is expensive and can raise concerns about animal welfare, says Doudna. But the demonstration that CRISPR works in monkeys has gotten “a lot of people thinking about cases where primate models may be important.”

At the top of that list is the study of brain disorders. Robert Desimone, director of MIT’s McGovern Institute for Brain Research, says that there is “quite a bit of interest” in using CRISPR to generate monkey models of diseases like autism, schizophrenia, Alzheimer’s disease, and bipolar disorder. These disorders are difficult to study in mice and other rodents; not only do the affected behaviors differ substantially between these animals and humans, but the neural circuits involved in the disorders can be different. Many experimental psychiatric drugs that appeared to work well in mice have not proved successful in human trials. As a result of such failures, many pharmaceutical companies have scaled back or abandoned their efforts to develop treatments.

Primate models could be especially helpful to researchers trying to make sense of the growing number of mutations that genetic studies have linked to brain disorders. The significance of a specific genetic variant is often unclear; it could be a cause of a disorder, or it could just be indirectly associated with the disease. CRISPR could help researchers tease out the mutations that actually cause the disorders: they would be able to systematically introduce the suspected genetic variants into monkeys and observe the results. CRISPR is also useful because it allows scientists to create animals with different combinations of mutations, in order to assess which ones, or which combinations of them, matter most in causing disease. This complex level of manipulation is nearly impossible with other methods.

Guoping Feng, a professor of neuroscience at MIT, and Feng Zhang, a colleague at the Broad Institute and the McGovern Institute who showed that CRISPR could be used to modify the genomes of human cells, are working with Chinese researchers to create macaques with a version of autism. They plan to mutate a gene called SHANK3 in fertilized eggs, producing monkeys that can be used to study the basic science of the disorder and test possible drug treatments. (Only a small percentage of people with autism have the SHANK3 mutation, but it is one of the few genetic variants that lead to a high probability of the disorder.)

The Chinese researchers responsible for the birth of the genetically engineered monkeys are still focusing on developing the technology, says Weizhi Ji, who helped lead the effort at the Yunnan Key Laboratory of Primate Biomedical Research in Kunming. However, his group hopes to create monkeys with Parkinson’s, among other brain disorders. The aim would be to look for early signs of the disease and study the mechanisms that allow it to progress.

The most dramatic possibility raised by the primate work, of course, would be using CRISPR to change the genetic makeup of human embryos during in vitro fertilization. But while such manipulation should be technically possible, most scientists do not seem eager to pursue it.

Indeed, the safety concerns would be daunting. When you think about “messing with a single cell that is potentially going to become a living baby,” even small errors or side effects could turn out to have enormous consequences, says Hank Greely, director of the Center for Law and the Biosciences at Stanford. And why even bother? For most diseases with simple genetic causes, it wouldn’t be worthwhile to use CRISPR; it would make more sense for couples to “choose a different embryo that doesn’t have the disease,” he says. This is already possible as part of in vitro fertilization, using a procedure called preimplantation genetic diagnosis.

It’s possible to speculate that parents might wish to alter multiple genes in order to reduce children’s risk, say, of heart disease or diabetes, which have complex genetic components. But for at least the next five to 10 years, that, says Greely, “just strikes me as borderline crazy, borderline implausible.” Many, if not most, of the traits that future parents might hope to alter in their kids may also be too complex or poorly understood to make reasonable targets for intervention. Scientists don’t understand the genetic basis, for instance, of intelligence or other higher-order brain functions—and that is unlikely to change for a long time.

Ji says creating humans with CRISPR-edited genomes is “very possible,” but he concurs that “considering the safety issue, there would still be a long way to go.” In the meantime, his team hopes to use genetically modified monkeys to “establish very efficient animal models for human diseases, to improve human health in the future.”


Brain Mapping

 

A new map, a decade in the works, shows structures of the brain in far greater detail than ever before, providing neuroscientists with a guide to its immense complexity.

Breakthrough

A high-resolution map that shows structures of the human brain as small as 20 micrometers.

Why It Matters

As neuroscientists try to understand how the brain works, they need a detailed map of its anatomy.

Key Players
  • Katrin Amunts, Jülich Research Centre
  • Alan Evans, Montreal Neurological Institute
  • Karl Deisseroth, Stanford University

Neuroscientists have made remarkable progress in recent years toward understanding how the brain works. And in coming years, Europe’s Human Brain Project will attempt to create a computational simulation of the human brain, while the U.S. BRAIN Initiative will try to create a wide-ranging picture of brain activity. These ambitious projects will greatly benefit from a new resource: detailed and comprehensive maps of the brain’s structure and its different regions.

A section of the human brain map created by a team of international researchers shows details as small as 20 micrometers.

As part of the Human Brain Project, an international team of researchers led by German and Canadian scientists has produced a three-dimensional atlas of the brain that has 50 times the resolution of previous such maps. The atlas, which took a decade to complete, required slicing a brain into thousands of thin sections and digitally stitching them back together with the help of supercomputers. Able to show details as small as 20 micrometers, roughly the size of many human cells, it is a major step forward in understanding the brain’s three-dimensional anatomy.

To guide the brain’s digital reconstruction, researchers led by Katrin Amunts at the Jülich Research Centre in Germany initially used an MRI machine to image the postmortem brain of a 65-year-old woman. The brain was then cut into ultrathin slices. The scientists stained the sections and then imaged them one by one on a flatbed scanner. Alan Evans and his coworkers at the Montreal Neurological Institute organized the 7,404 resulting images into a data set about a terabyte in size. Slicing had bent, ripped, and torn the tissue, so Evans had to correct these defects in the images. He also aligned each one to its original position in the brain. The result is mesmerizing: a brain model that you can swim through, zooming in or out to see the arrangement of cells and tissues.

At the start of the 20th century, a German neuroanatomist named Korbinian Brodmann parceled the human cortex into nearly 50 different areas by looking at the structure and organization of sections of brain under a microscope. “That has been pretty much the reference framework that we’ve used for 100 years,” Evans says. Now he and his coworkers are redoing Brodmann’s work as they map the borders between brain regions. The result may show something more like 100 to 200 distinct areas, providing scientists with a far more accurate road map for studying the brain’s different functions.

“We would like to have in the future a reference brain that shows true cellular resolution,” says Amunts—about one or two micrometers, as opposed to 20. That’s a daunting goal, for several reasons. One is computational: Evans says such a map of the brain might contain several petabytes of data, which computers today can’t easily navigate in real time, though he’s optimistic that they will be able to in the future. Another problem is physical: a brain can be sliced only so thin.
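
The jump from "about a terabyte" to "several petabytes" follows from simple scaling: dividing the voxel size by ten multiplies the number of voxels by the cube of that factor. A quick check under that assumption:

```python
# Rough scaling of data volume with voxel size (assumes storage scales with voxel count).
current_tb = 1.0   # about one terabyte at 20-micrometer resolution, per the article
for target_um in (2.0, 1.0):
    factor = (20.0 / target_um) ** 3
    print(f"{target_um} um voxels: ~{current_tb * factor / 1000:.0f} PB ({factor:.0f}x more data)")
# 2 um: ~1 PB (1000x);  1 um: ~8 PB (8000x) -- consistent with "several petabytes"
```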

Advances could come from new techniques that allow scientists to see the arrangement of cells and nerve fibers inside intact brain tissue at very high resolution. Amunts is developing one such technique, which uses polarized light to reconstruct three-dimensional structures of nerve fibers in brain tissue. And a technique called Clarity, developed in the lab of Karl Deisseroth, a neuroscientist and bioengineer at Stanford University, allows scientists to directly see the structures of neurons and circuitry in an intact brain. The brain, like any other tissue, is usually opaque because the fats in its cells block light. Clarity melts the lipids away, replacing them with a gel-like substance that leaves other structures intact and visible. Though Clarity can be used on a whole mouse brain, the human brain is too big to be studied fully intact with the existing version of the technology. But Deisseroth says the technique can already be used on blocks of human brain tissue thousands of times larger than a thin brain section, making 3-D reconstruction easier and less error prone. And Evans says that while Clarity and polarized-light imaging currently give fantastic resolution to pieces of brain, “in the future we hope that this can be expanded to include a whole human brain.”

Courtney Humphries


Agricultural Drones

 

 


Relatively cheap drones with advanced sensors and imaging capabilities are giving farmers new ways to increase yields and reduce crop damage.

Breakthrough

Easy-to-use agricultural drones equipped with cameras, for less than $1,000.

Why It Matters

Close monitoring of crops could improve water use and pest management.

Key Players
  • 3D Robotics
  • Yamaha
  • PrecisionHawk

Ryan Kunde is a winemaker whose family’s picture-perfect vineyard nestles in the Sonoma Valley north of San Francisco. But Kunde is not your average farmer. He’s also a drone operator—and he’s not alone. He’s part of the vanguard of farmers who are using what was once military aviation technology to grow better grapes with the help of pictures taken from the air, part of a broader trend of using sensors and robotics to bring big data to precision agriculture.

Top: A drone from PrecisionHawk is equipped with multiple sensors to image fields.
Bottom: This image depicts vegetation in near-infrared light to show chlorophyll levels.

What “drones” means to Kunde and the growing number of farmers like him is simply a low-cost aerial camera platform: either miniature fixed-wing airplanes or, more commonly, quadcopters and other multibladed small helicopters. These aircraft are equipped with a GPS-guided autopilot and a standard point-and-shoot camera controlled by that autopilot; software on the ground can stitch the aerial shots into a high-resolution mosaic map. Whereas a traditional radio-controlled aircraft needs to be flown by a pilot on the ground, in Kunde’s drone the autopilot (made by my company, 3D Robotics) does all the flying, from auto takeoff to landing. Its software plans the flight path, aiming for maximum coverage of the vineyards, and controls the camera to optimize the images for later analysis.
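The article doesn’t spell out how the flight planner works, but coverage planning of this kind is commonly done with a simple back-and-forth “lawnmower” pattern spaced so that successive camera swaths overlap. The sketch below is only an illustration of that idea in Python; it is not the 3D Robotics autopilot code, and the field size, swath width, and overlap are made-up numbers:

```python
# Minimal "lawnmower" coverage-path sketch. This is NOT the 3D Robotics
# autopilot; it only illustrates how a planner can cover a field with
# overlapping camera swaths. All dimensions below are hypothetical.

def lawnmower_waypoints(width_m, height_m, swath_m, overlap=0.3):
    """Return (x, y) waypoints sweeping a width_m x height_m rectangle
    in parallel passes spaced so that adjacent images overlap."""
    spacing = swath_m * (1.0 - overlap)   # distance between passes
    waypoints, x, going_up = [], 0.0, True
    while x <= width_m:
        start, end = (0.0, height_m) if going_up else (height_m, 0.0)
        waypoints.append((x, start))
        waypoints.append((x, end))
        going_up = not going_up
        x += spacing
    return waypoints

if __name__ == "__main__":
    # A hypothetical 200 m x 150 m vineyard block, 40 m swath, 30% overlap.
    for wx, wy in lawnmower_waypoints(200, 150, 40):
        print(f"waypoint: x={wx:.0f} m, y={wy:.0f} m")
```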

This low-altitude view (from a few meters above the plants to around 120 meters, which is the regulatory ceiling in the United States for unmanned aircraft operating without special clearance from the Federal Aviation Administration) gives a perspective that farmers have rarely had before. Compared with satellite imagery, it’s much cheaper and offers higher resolution. Because it’s taken under the clouds, it’s unobstructed and available anytime. It’s also much cheaper than crop imaging with a manned aircraft, which can run $1,000 an hour. Farmers can buy the drones outright for less than $1,000 each.

The advent of drones this small, cheap, and easy to use is due largely to remarkable advances in technology: tiny MEMS sensors (accelerometers, gyros, magnetometers, and often pressure sensors), small GPS modules, incredibly powerful processors, and a range of digital radios. All those components are now getting better and cheaper at an unprecedented rate, thanks to their use in smartphones and the extraordinary economies of scale of that industry. At the heart of a drone, the autopilot runs specialized software—often open-source programs created by communities such as DIY Drones, which I founded, rather than costly code from the aerospace industry.

Drones can provide farmers with three types of detailed views. First, seeing a crop from the air can reveal patterns that expose everything from irrigation problems to soil variation and even pest and fungal infestations that aren’t apparent at eye level. Second, airborne cameras can take multispectral images, capturing data from the infrared as well as the visual spectrum, which can be combined to create a view of the crop that highlights differences between healthy and distressed plants in a way that can’t be seen with the naked eye. Finally, a drone can survey a crop every week, every day, or even every hour. Combined to create a time-series animation, that imagery can show changes in the crop, revealing trouble spots or opportunities for better crop management.
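The piece doesn’t name a specific index, but the standard way of combining the near-infrared and visible red bands into a plant-health map is the normalized difference vegetation index (NDVI). The snippet below is a generic NumPy illustration of that calculation; the tiny input arrays are synthetic placeholders, not data from any drone mentioned here:

```python
# Generic NDVI calculation from co-registered red and near-infrared bands.
# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation tends toward +1.
# The arrays below are synthetic placeholders, not real imagery.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on pixels where both bands are dark.
    return np.where(denom > 0, (nir - red) / denom, 0.0)

nir_band = np.array([[0.60, 0.55], [0.20, 0.50]])
red_band = np.array([[0.10, 0.15], [0.18, 0.45]])
print(ndvi(nir_band, red_band))  # higher values = more vigorous canopy
```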

It’s part of a trend toward increasingly data-driven agriculture. Farms today are bursting with engineering marvels, the result of years of automation and other innovations designed to grow more food with less labor. Tractors autonomously plant seeds within a few centimeters of their target locations, and GPS-guided harvesters reap the crops with equal accuracy. Extensive wireless networks backhaul data on soil hydration and environmental factors to faraway servers for analysis. But what if we could add to these capabilities a more comprehensive assessment of soil water content, a more rigorous way to spot irrigation and pest problems, and a general sense of the state of the farm, every day or even every hour? The implications cannot be stressed enough. We expect 9.6 billion people to call Earth home by 2050. All of them need to be fed. Farming is an input-output problem. If we can reduce the inputs—water and pesticides—and maintain the same output, we will be overcoming a central challenge.

Agricultural drones are becoming a tool like any other consumer device, and we’re starting to talk about what we can do with them. Ryan Kunde wants to irrigate less, use less pesticide, and ultimately produce better wine. More and better data can reduce water use and lower the chemical load in our environment and our food. Seen this way, what started as a military technology may end up better known as a green-tech tool, and our kids will grow up used to flying robots buzzing over farms like tiny crop dusters.

Chris Anderson, the former editor in chief of Wired, is the cofounder and CEO of 3D Robotics and founder of DIY Drones.

 


Press Release: Computational Method Dramatically Speeds Up Estimates of Gene Expression, CMU, UMD Researchers Report

 

"Sailfish" Method Could Pay Dividends as Genomic Medicine Expands

Contact: Byron Spice  / 412-268-9068  / bspice@cs.cmu.edu

Carl Kingsford

PITTSBURGH—With gene expression analysis growing in importance for both basic researchers and medical practitioners, researchers at Carnegie Mellon University and the University of Maryland have developed a new computational method that dramatically speeds up estimates of gene activity from RNA sequencing (RNA-seq) data.

With the new method, dubbed Sailfish after the famously speedy fish, estimates of gene expression that previously took many hours can be completed in a few minutes, with accuracy that equals or exceeds that of previous methods. The researchers' report on their new method is being published online April 20 by the journal Nature Biotechnology.

Gigantic repositories of RNA-seq data now exist, making it possible to re-analyze experiments in light of new discoveries. "But 15 hours a pop really starts to add up, particularly if you want to look at 100 experiments," said Carl Kingsford, an associate professor in CMU's Lane Center for Computational Biology. "With Sailfish, we can give researchers everything they got from previous methods, but faster."

Though an organism's genetic makeup is static, the activity of individual genes varies greatly over time, making gene expression an important factor in understanding how organisms work and what occurs during disease processes. Gene activity can't be measured directly, but can be inferred by monitoring RNA, the molecules that carry information from the genes for producing proteins and other cellular activities.

RNA-seq is a leading method for producing these snapshots of gene expression; in genomic medicine, it has proven particularly useful in analyzing certain cancers.

The RNA-seq process results in short sequences of RNA, called "reads." In previous methods, the RNA molecules from which they originated could be identified and measured only by painstakingly mapping these reads to their original positions in the larger molecules.

But Kingsford, working with Rob Patro, a post-doctoral researcher in the Lane Center, and Stephen M. Mount, an associate professor in Maryland's Department of Cell Biology and Molecular Genetics and its Center for Bioinformatics and Computational Biology, found that the time-consuming mapping step could be eliminated. Instead, they found they could allocate parts of the reads to different types of RNA molecules, much as if each read acted as several votes for one molecule or another.
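Sailfish itself works on short subsequences called k-mers and uses far more sophisticated statistical machinery, so the toy sketch below should be read only as an illustration of the "votes" intuition: a read compatible with several transcripts splits its vote according to the current abundance estimates, and the estimates are refined iteratively in an expectation-maximization-style loop. The reads and transcript names are invented for the example:

```python
# Toy illustration of the "each read casts fractional votes" idea.
# This is NOT the Sailfish implementation (which counts k-mers); it is a
# simplified EM-style loop over invented reads and transcripts.

def estimate_abundances(read_compat, n_iters=50):
    """read_compat: list of sets of transcript IDs each read is compatible with.
    Returns estimated relative abundances per transcript."""
    transcripts = sorted({t for compat in read_compat for t in compat})
    abund = {t: 1.0 / len(transcripts) for t in transcripts}      # uniform start
    for _ in range(n_iters):
        votes = {t: 0.0 for t in transcripts}
        for compat in read_compat:            # E-step: split each read's vote
            total = sum(abund[t] for t in compat)
            for t in compat:
                votes[t] += abund[t] / total
        abund = {t: votes[t] / len(read_compat) for t in transcripts}  # M-step
    return abund

# Hypothetical example: five reads, some compatible with more than one transcript.
reads = [{"tA"}, {"tA", "tB"}, {"tB"}, {"tB", "tC"}, {"tC"}]
print(estimate_abundances(reads))
```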

Without the mapping step, Sailfish can complete its RNA analysis 20-30 times faster than previous methods.

This numerical approach might not be as intuitive as a map to a biologist, but it makes perfect sense to a computer scientist, Kingsford said. Moreover, the Sailfish method is more robust — better able to tolerate errors in the reads or differences between individuals' genomes. These errors can prevent some reads from being mapped, he explained, but the Sailfish method can make use of all the RNA read "votes," which improves the method's accuracy.

The Sailfish code has been released and is available for download at http://www.cs.cmu.edu/~ckingsf/software/sailfish/.

This work was supported in part by the National Science Foundation and the National Institutes of Health.

Carl Kingsford (pictured above), an associate professor in CMU's Lane Center for Computational Biology, said the new computational method, called Sailfish, can give researchers everything they got from previous methods, but faster. With Sailfish, estimates of gene expression that previously took many hours can be completed in a few minutes, with accuracy that equals or exceeds that of previous methods.


Ethanol Fuels Ozone Pollution

 

Shifts in the use of gasoline and ethanol to fuel vehicles in Sao Paulo created a unique atmospheric chemistry experiment

São Paulo smog

Buildings in the smog, looking west towards Barra Funda and Lapa from the Edificio Banespa in downtown São Paulo. Credit: Thomas Hobbs via Flickr

Running vehicles on ethanol rather than petrol can increase ground-level ozone pollution, according to a study of fuel use in São Paulo, Brazil.

Ozone (O3) is a major urban pollutant that can cause severe respiratory problems. It can form when sunlight triggers chemical reactions involving hydrocarbons and nitrogen oxides (NOx) emitted by vehicles.

Ethanol has been promoted as a ‘green’ fuel because its combustion tends to produce lower emissions of carbon dioxide, hydrocarbons and NOx than petrol. But the impact on air quality of a wholesale transition from petrol to ethanol has been difficult to assess, with different atmospheric chemistry models predicting a variety of consequences.

Alberto Salvo, an economist at the National University of Singapore, and Franz Geiger, a physical chemist at Northwestern University in Evanston, Illinois, have now answered the question with hard data. Their study, published today in Nature Geoscience, unpacks what happened when the motorists of São Paulo — the largest city in the Southern Hemisphere — suddenly changed their fuel habits.

Sugar high
In 2011, about 40% of the city’s 6 million light vehicles — mostly cars — were able to burn pure ethanol or a petrol-ethanol blend, and both fuels were widely available. Consumers in São Paulo thus had more choice over their fuel than almost anywhere else in the world, says Salvo. Between 2009 and 2011, the price of ethanol rose and fell in response to fluctuations in the global prices of sugar, which is used to produce ethanol via fermentation. But the government-controlled gasoline price remained steady. This led to a huge shift in fuel consumption — wholesalers’ figures suggest that gasoline’s share of total transport fuel rose from 42% to 68%. “Our study is the only one where you have a large switch over a relatively short timescale,” says Salvo.

São Paulo also has an extensive network of air-monitoring stations that record the atmospheric consequences of its notorious traffic congestion. Salvo and Geiger collated these air-quality measurements and used other data sets — detailing meteorological and traffic conditions, for example — to weed out other factors that would have affected air quality over that period. Overall, they report, the rise in gasoline consumption caused an average drop of 15 micrograms per cubic meter (15 µg/m³) in ground-level ozone concentration, down from a weekday average of 68 µg/m³.
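The paper’s statistical model is considerably more elaborate, but the basic idea of "weeding out" confounders can be sketched as a multiple regression of ozone on the gasoline share plus meteorological and traffic controls. The snippet below runs ordinary least squares on made-up data purely to show the structure of such an analysis; none of the numbers come from the study:

```python
# Schematic of controlling for confounders with multiple regression.
# NOT the actual Salvo & Geiger model; all data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gas_share   = rng.uniform(0.4, 0.7, n)   # gasoline's share of the fuel mix
temperature = rng.normal(25, 4, n)       # meteorological control (deg C)
traffic     = rng.normal(1.0, 0.2, n)    # traffic-intensity control (index)

# Synthetic "truth": more gasoline -> lower ozone, plus confounding effects.
ozone = 80 - 50 * gas_share + 1.5 * temperature + 10 * traffic + rng.normal(0, 5, n)

# Ordinary least squares: ozone ~ const + gas_share + temperature + traffic
X = np.column_stack([np.ones(n), gas_share, temperature, traffic])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print("estimated effect of gasoline share on ozone:", round(coef[1], 1))
```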

But air-quality campaigners should not start advocating for petrol instead of ethanol quite yet. Increased petrol burning clearly raised levels of NOx, which also poses direct health concerns, and it probably boosted the amount of particulate matter in the air, something the study did not look at. And because every city has its own unique air chemistry, a similar fuel switch might produce very different results in London or Los Angeles. Nevertheless, says Salvo, the findings illustrate that “ethanol is not a panacea”.

Opportunity NOx
So how could burning more petrol, which puts more of the ingredients for ozone formation into the air, actually reduce São Paulo’s ozone levels?

As nitrogen dioxide becomes more abundant in the air, it increasingly combines with hydroxyl radicals to form nitric acid. This removal of hydroxyl radicals shuts down the reaction that forms ozone. “It’s a really strong quenching effect,” says Sasha Madronich, an atmospheric chemist at the US National Center for Atmospheric Research in Boulder, Colorado, who wrote an accompanying News and Views article on the São Paulo study. At high NOx levels, this quenching begins to outweigh ozone synthesis, and ozone levels drop.
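In textbook terms (the article doesn’t write out the chemistry), the competition looks roughly like this: hydroxyl radicals drive the hydrocarbon oxidation that regenerates NO2 and hence ozone, but at high enough NO2 levels those same radicals are increasingly consumed by the nitric-acid channel, which shuts the cycle down:

```latex
% Simplified photochemical-ozone scheme (standard atmospheric chemistry,
% not taken from the article):
\begin{align*}
  \mathrm{OH + VOC\;(+\,O_2)} &\longrightarrow \mathrm{RO_2 + H_2O}
      && \text{hydrocarbon oxidation}\\
  \mathrm{RO_2 + NO} &\longrightarrow \mathrm{RO + NO_2}
      && \text{regenerates nitrogen dioxide}\\
  \mathrm{NO_2 + h\nu} &\longrightarrow \mathrm{NO + O}
      && \text{photolysis in sunlight}\\
  \mathrm{O + O_2 + M} &\longrightarrow \mathrm{O_3 + M}
      && \text{ozone formation}\\
  \mathrm{OH + NO_2 + M} &\longrightarrow \mathrm{HNO_3 + M}
      && \text{termination: removes hydroxyl radicals}
\end{align*}
```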

Theoretical models have predicted that such an ‘NOx-inhibited’ situation could arise in cities with relatively high NOx levels, but “it’s never been observed, because there’s no place to observe it”, says Geiger.

The researchers say that their method of combining disparate data sets to tease out the effects of fuel changes could now be used in other cities. Geiger acknowledges that São Paulo is “the best-case scenario in terms of data availability”, but hopes to apply the same method to Chicago, Illinois, which might enable his team to predict the impact of a major shift to vehicles powered by electricity or natural gas, for example.

And by uncovering the real-world impact of fuel changes in São Paulo, the researchers have provided a useful test bed for air-pollution models, adds Madronich. “If a model cannot reproduce these results,” he says, “that’s a problem for the model.”

This article is reproduced with permission from the magazine Nature. The article was first published on April 28, 2014.
