Wednesday, December 3, 2014

Laser sniffs out toxic gases from afar: System can ID chemicals in atmosphere from a kilometer away

 

The new technology can discriminate one type of gas from another with greater specificity than most remote sensors -- even in complex mixtures of similar chemicals -- and under normal atmospheric pressure, something that wasn't thought possible before.

The researchers say the technique could be used to test for radioactive byproducts from nuclear accidents or arms control treaty violations, for example, or for remote monitoring of smokestacks or factories for signs of air pollution or chemical weapons.

"You could imagine setting this up around the perimeter of an area where soldiers are living, as a kind of trip wire for nerve gas," said lead author Henry Everitt, an Army scientist and adjunct professor of physics at Duke University.

The technique uses a form of invisible light called terahertz radiation, or T-rays.

Already used to detect tumors and screen airport passengers, T-rays fall between microwaves and infrared radiation on the electromagnetic spectrum.

Zapping a gas molecule with a terahertz beam of just the right energy makes the molecule switch between alternate rotational states, producing a characteristic absorption spectrum "fingerprint," like the lines of a bar code.

Terahertz sensors have been used for decades to identify trace gases in the dry, low-pressure conditions of interstellar space or in controlled conditions in the lab, where they are capable of unambiguous identification and ultra-sensitive, part-per-trillion detection.

But until now, efforts to use the same technique to detect trace gases under normal atmospheric conditions have failed because the pressure and water vapor in the air smear and weaken the spectral fingerprint.

In a study published in the journal Physical Review Applied, Everitt, Ohio State University physicist Frank De Lucia and colleagues have developed a way around this problem.

Their approach works by blasting a cloud of gas with two beams at once. One is a steady terahertz beam, tuned to the specific rotational transition energy of the gas molecule they're looking for.

The second beam comes from a laser, operating in the infrared, which emits light in high-speed pulses.

At the U.S. Army Aviation and Missile Research, Development, and Engineering Center near Huntsville, Alabama, the researchers have installed a one-of-a-kind infrared laser.

Manufactured by a company called STI Optronics, it's capable of firing dozens of pulses of infrared light a second, each lasting less than a billionth of a second.

"It's kind of like whacking a molecule with an infrared sledgehammer," Everitt said.

Normal atmospheric pressure still blurs the chemical "bar code" produced by the blast of the terahertz beam, but the ultra-short pulses of light from the more powerful infrared laser knock the molecule out of equilibrium, causing the smeared absorption lines to flicker.

"We just have to tune each beam to the wavelengths that match the type of molecule we're looking for, and if we see a change, we know it has to be that gas and nothing else," Everitt said.

The researchers directed the two beams onto samples of methyl fluoride, methyl chloride and methyl bromide gases in the lab to determine what combination of laser settings would be required to detect trace amounts of these gases under different weather conditions.

"Terahertz waves will only propagate so far before water vapor in the air absorbs them, which means the approach works a lot better on, say, a cold winter day than a hot summer day," Everitt said.

The researchers say they are able to detect trace gases from up to one kilometer away. But even under ideal weather conditions, the technology isn't ready to be deployed in the field just yet.

For one, converting an eight-foot, one-ton laser into something closer in size to a briefcase will take some time.

Having demonstrated that the technique can work, the researchers' next step is to figure out how to tune the beams to detect additional gases.

Initially, they plan to focus on toxic industrial chemicals such as ammonia, carbon disulfide, nitric acid and sulfuric acid.

Eventually, the researchers say their technique could also be useful for law enforcement in detecting toxic gases generated by meth labs, and other situations where detection at the gas's source isn't feasible.

"Point sensing at close range is always better than remote sensing if you can do it, but it's not always possible. These methods let us collect chemical intelligence that tells us what's going on before we get somewhere," Everitt said.

The research was supported by grants from the Defense Threat Reduction Agency (DTRA) and the Defense Advanced Research Projects Agency (DARPA). Additional support was provided by the U.S. Army.

World’s fastest 2-D camera, 100 billion frames per second, may enable new scientific discoveries

 

Photographers have been pursuing the capture of transient scenes at high imaging speeds for centuries. Now, Washington University engineers have developed the world's fastest receive-only 2-D camera, which can capture events at up to 100 billion frames per second. An image from the work is the cover illustration of the Dec. 4, 2014, issue of Nature, in which Wang's research appears.

A team of biomedical engineers at Washington University in St. Louis, led by Lihong Wang, PhD, the Gene K. Beare Distinguished Professor of Biomedical Engineering, has developed the world's fastest receive-only 2-D camera, a device that can capture events up to 100 billion frames per second.

That's orders of magnitude faster than any current receive-only ultrafast imaging techniques, which are limited by on-chip storage and electronic readout speed to operations of about 10 million frames per second.

Using the Washington University technique, called compressed ultrafast photography (CUP), Wang and his colleagues have made movies of the images they took with single laser shots of four physical phenomena: laser pulse reflection, refraction, faster-than-light propagation of what is called non-information, and photon racing in two media. While it's no day at the races, the images are entertaining, awe-inspiring and represent the opening of new vistas of scientific exploration.

The research appears in the Dec. 4, 2014, issue of Nature.

"For the first time, humans can see light pulses on the fly," Wang says. "Because this technique advances the imaging frame rate by orders of magnitude, we now enter a new regime to open up new visions. Each new technique, especially one of a quantum leap forward, is always followed a number of new discoveries. It's our hope that CUP will enable new discoveries in science -- ones that we can't even anticipate yet."

This camera doesn't look like a Kodak or Canon; rather, it is a series of devices envisioned to work with high-powered microscopes and telescopes to capture dynamic natural and physical phenomena. Once the raw data are acquired, the actual images are formed on a personal computer; the technology is known as computational imaging.

The development of the technology was funded by two grants from the National Institutes of Health that support pioneering and potentially transformative approaches to major challenges in biomedical research.

"This is an exciting advance and the type of groundbreaking work that these high-risk NIH awards are designed to support," said Richard Conroy, PhD, program director of optical imaging at the National Institute of Biomedical Imaging and Bioengineering, part of the NIH. "These ultrafast cameras have the potential to greatly enhance our understanding of very fast biological interactions and chemical processes and allow us to build better models of complex, dynamical systems."

An immediate application is in biomedicine. One of the movies shows a green excitation light pulsing toward fluorescent molecules on the right, where the green light converts to red fluorescence. By tracking this, the researchers can get a single-shot assessment of the fluorescence lifetime, which can be used to detect diseases or reflect cellular environmental conditions such as pH or oxygen pressure.

Wang envisions applications in astronomy and forensics, where the advanced imaging frame rate could analyze the temporal activities of a supernova that occurred light-years away, or track and predict the movements of thousands of potentially hazardous pieces of "space junk," the refuse of old satellites and jettisoned spacecraft hurtling about at high speed in outer space. In forensics, CUP might be used to reproduce bullet pathways, which could once again open up the Kennedy assassination conspiracy theories and revive a more accurate analysis of the strange physics of the "magic bullet."

Wang and his collaborators essentially added components and used algorithms to complement an existing technology known as a streak camera, which measures the intensity variation in a pulse of light with time. While a streak camera is fast, it gives only a one-dimensional view, which "is not intuitive -- much analogous to watching a horse race through a distant vertical slit," Wang said. "We expanded the view into 2-D space, more like what we see in the real world."

CUP photographs an object with a specialty camera lens, which takes the photons from the object on a journey through a tube-like structure to a marvelous tiny apparatus called a digital micromirror device (DMD), smaller than a dime yet hosting about 1 million micromirrors, each just seven by seven microns. There, the micromirrors encode the image, then reflect the photons to a beam splitter, which shoots the photons to the widened slit of a streak camera. The photons are converted to electrons, which are then sheared with the use of two electrodes, converting time to space. The electrodes apply a voltage that ramps from high to low, so the electrons arrive at different times and land at different vertical positions. An instrument called a charge-coupled device (CCD) stores all the raw data. All of this occurs within a breathtaking 5 nanoseconds; one nanosecond is a billionth of a second.
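For readers curious how a single exposure can later be unscrambled into a movie, the sketch below is a toy version of the forward model just described, not the authors' code: a dynamic scene is encoded by a pseudorandom micromirror pattern, sheared so each time slice lands one row lower on the detector, and summed into a single frame. The scene size, frame count, and mask are arbitrary assumptions.

```python
import numpy as np

# Toy version of the CUP forward model (not the authors' code): encode each
# frame with a fixed pseudorandom DMD mask, shear frames in time so each lands
# one row lower on the detector, and sum everything into a single exposure.

rng = np.random.default_rng(1)
H, W, T = 32, 32, 8                       # scene height, width, and time frames
scene = rng.random((T, H, W))             # dynamic scene I(x, y, t)
mask = rng.integers(0, 2, size=(H, W))    # 0/1 micromirror encoding pattern

ccd = np.zeros((H + T, W))                # taller detector holds the sheared frames
for t in range(T):
    encoded = scene[t] * mask             # spatial encoding by the micromirror array
    ccd[t:t + H, :] += encoded            # temporal shearing: frame t lands t rows lower

print("One compressed CCD exposure of shape", ccd.shape)
# Recovering `scene` from `ccd` (and the known mask) is an underdetermined
# inverse problem, solved in CUP with compressed-sensing reconstruction.
```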

Wang's work with CUP pushes the dimensional limits of fundamental physics and also pushes the limits of deep imaging of biological tissues, one of Wang's research specialties.

"Fluorescence is an important aspect of biological technologies," he says. "We can use CUP to image the lifetimes of various fluorophores, including fluorescent proteins, at light speed."

In the astronomy world, CUP can be a game-changer, Wang says.

"Combine CUP imaging with the Hubble Telescope, and we will have both the sharpest spatial resolution of the Hubble and the highest temporal solution with CUP," he says. "That combination is bound to discover new science."

This research was funded by the National Institutes of Health grants DP1 EB016986 and R01 CA186567.

CO2 warming effects felt just a decade after emitted

 

Wed, 12/03/2014 - 10:54am

Institute of Physics

It takes just 10 years for a single emission of carbon dioxide to have its maximum warming effects on the Earth.

This is according to researchers at the Carnegie Institution for Science, who have dispelled a common misconception that the main warming effects from a carbon dioxide emission will not be felt for several decades.

The results, which have been published in Environmental Research Letters, also confirm that warming can persist for more than a century and suggest that the benefits from emission reductions will be felt by those who have worked to curb the emissions and not just future generations.

Some of these benefits would be the avoidance of extreme weather events, such as droughts, heat waves and flooding, which are expected to increase concurrently with the change in temperature.

However, some of the bigger climate impacts from warming, such as sea-level rise, melting ice sheets and long-lasting damage to ecosystems, will have a much bigger time lag and may not occur for hundreds or even thousands of years, according to the researchers.

Lead author of the study Dr. Katharine Ricke said: “Amazingly, despite many decades of climate science, there has never been a study focused on how long it takes to feel the warming from a particular emission of carbon dioxide, taking carbon-climate uncertainties into consideration.

“A lot of climate scientists may have an intuition about how long it takes to feel the warming from a particular emission of carbon dioxide, but that intuition might be a little bit out of sync with our best estimates from today's climate and carbon cycle models.”

To calculate this timeframe, Dr. Ricke, alongside Prof. Ken Caldeira, combined results from two climate modeling projects.

The researchers combined information about the Earth’s carbon cycle—specifically how quickly the ocean and biosphere take up a large pulse of carbon dioxide emitted into the atmosphere—with information about the Earth’s climate system taken from a group of climate models used in the latest IPCC assessment.

The results showed that the median time between a single carbon dioxide emission and maximum warming was 10.1 years, and reaffirmed that most of the warming persists for more than a century.

The reason for this time lag is that the upper layers of the oceans take longer to heat up than the atmosphere, so the warming from an emission keeps building for some years. At the same time, the warming effect of that emission gradually diminishes as the carbon dioxide is slowly removed from the atmosphere. It takes around 10 years for these two competing factors to cancel each other out and for warming to reach its maximum.
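A rough back-of-envelope model makes that competition easy to see. The sketch below convolves a toy airborne-fraction curve for a CO2 pulse with a simple ocean-delayed temperature response; the time constants are illustrative assumptions, not the study's values, yet the warming still peaks roughly a decade after the emission (about 11 years with these particular numbers).

```python
import numpy as np

# Toy illustration of the two competing effects (NOT the study's model): the
# airborne fraction of a CO2 pulse decays as the ocean and biosphere take it
# up, while the temperature response lags because the upper ocean heats slowly.
# All time constants below are illustrative assumptions.

years = np.arange(0, 200)

# Fraction of the emitted pulse still in the atmosphere (simplified multi-exponential):
airborne = (0.22
            + 0.28 * np.exp(-years / 300.0)
            + 0.28 * np.exp(-years / 70.0)
            + 0.22 * np.exp(-years / 4.0))

# Delayed temperature response to forcing, with a single ~4-year ocean lag:
tau = 4.0
response = (1.0 / tau) * np.exp(-years / tau)

# Warming from the pulse: forcing history (proportional to the airborne
# fraction) convolved with the delayed response.
warming = np.convolve(airborne, response)[:len(years)]

print("Warming peaks", int(years[np.argmax(warming)]), "years after the emission")
```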

“Our results show that people alive today are very likely to benefit from emissions avoided today and that these will not accrue solely to impact future generations,” Dr. Ricke continued.

"Our findings should dislodge previous misconceptions about this timeframe that have played a key part in the failure to reach policy consensus.”

Source: Institute of Physics

Brain Training Doesn’t Make You Smarter

 

Scientists doubt claims from brain training companies

December 2, 2014 By David Z. Hambrick

If you’ve spent more than about 5 minutes surfing the web, listening to the radio, or watching TV in the past few years, you will know that cognitive training—better known as “brain training”—is one of the hottest new trends in self-improvement. Lumosity, which offers web-based tasks designed to improve cognitive abilities such as memory and attention, boasts 50 million subscribers and advertises on National Public Radio. Cogmed claims to be “a computer-based solution for attention problems caused by poor working memory,” and BrainHQ will help you “make the most of your unique brain.” The promise of all of these products, implied or explicit, is that brain training can make you smarter—and make your life better.

Yet, according to a statement released by the Stanford University Center on Longevity and the Berlin Max Planck Institute for Human Development, there is no solid scientific evidence to back up this promise. Signed by 70 of the world’s leading cognitive psychologists and neuroscientists, the statement minces no words:

"The strong consensus of this group is that the scientific literature does not support claims that the use of software-based “brain games” alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."

The statement also cautions that although some brain training companies “present lists of credentialed scientific consultants and keep registries of scientific studies pertinent to cognitive training…the cited research is [often] only tangentially related to the scientific claims of the company, and to the games they sell.”

This is bad news for the brain training industry, but it isn’t surprising. Little more than a decade ago, the consensus in psychology was that a person’s intelligence, though not fixed like height, isn’t easily increased. This consensus reflected a long history of failure. Psychologists had been trying to come up with ways to increase intelligence for more than a century, with little success. The consistent finding from this research was that when people practice some task, they get better on that task, and maybe on very similar tasks, but not on other tasks. Play a videogame and you’ll get better at that videogame, and maybe at very similar videogames, the research said, but you won’t get better at real-world tasks like doing your job, driving a car, or filling out your tax return.

What’s more, whenever intelligence gains were reported, they were modest, especially given how much training it took to produce them. In a University of North Carolina study known as the Abecedarian Early Intervention Project, low-income children received intensive intervention from infancy to age 5 that included educational games, while children in a control group received social services, health care and nutritional supplements. At the end of the study, all of the children were given an IQ test, and the average was about 6 points higher for the treatment group than for the control group—in statistical terms, a medium-sized effect.

Thinking about the modifiability of intelligence started to change in the 2000s. A major impetus was a 2008 study led by Susanne Jaeggi—then a postdoctoral researcher at the University of Michigan and now a professor at the University of California Irvine—and published in the Proceedings of the National Academy of Sciences. Jaeggi and colleagues had a sample of young adults complete a test of reasoning ability to assess “fluid” intelligence—the ability to solve novel problems. The participants were then assigned to either a control group, or to a treatment group in which they practiced a computerized task called “dual n-back,” which requires a person to monitor two streams of information—one auditory and one visual. (The task is challenging, to put it mildly.) Finally, all of the participants took a different version of the reasoning test to see whether the training had any impact on fluid intelligence.

The results were striking. Not only did the training group show more improvement in the reasoning test than the control group, the gain was large enough to have an impact on people’s lives. As John Jonides, the senior University of Michigan scientist on the research team, explained, there was also a dosage-dependent relationship: “Our discovery is that 4 weeks or so of training will produce a noticeable difference in fluid intelligence…We’ve also shown that the longer you train short-term memory, the more improvement you get in IQ.”

The study made a splash. Once published, most studies are cited in the scientific literature no more than a few times, if they are ever cited at all. The Jaeggi study has now been cited over 800 times—an astonishing number for a study published just six years ago. Discover magazine called the findings one of the top 100 scientific discoveries of 2008, and the psychologist Robert Sternberg—author of more than 1,500 publications on intelligence—declared that the study seemed “to resolve the debate over whether fluid intelligence is, in at least some meaningful measure, trainable.”

Not everyone was convinced. In fact, not long after it was published, the Jaeggi study was No. 1 on a list of the top twenty studies that psychologists would like to see replicated. Above all, what gave the skeptics pause was the magnitude of the reported gain in intelligence—it seemed larger than possible. Studies like the Abecedarian Early Intervention Project had shown that it takes years of intensive intervention to increase IQ by a few points. Jaeggi and colleagues’ findings implied a 6-point increase in just a few hours.

The study had serious flaws, too, making the results difficult to interpret. One problem was that there was no placebo control group—no group that received training in a task that was not expected to increase intelligence (analogous to people in the placebo group of a drug study taking a sugar pill). Instead, the control group was a “no-contact” group, meaning that the people simply took the reasoning test two times, and had no contact with the researchers in between. Therefore, the possibility that the treatment group got better on the reasoning test just because they expected that they would get better could not be ruled out. Further complicating matters, the reasoning test differed across training groups; some of the participants got a 10-minute test, while others got a 20-minute test. Finally, Jaeggi and colleagues only used one test to see whether intelligence improved. Showing that people are better on one reasoning test after training doesn’t mean they’re smarter—it means they’re better at one reasoning test.

With all of this in mind, my colleagues and I set out to replicate Jaeggi and colleagues’ findings. First, we gave people 17 different cognitive ability tests, including 8 tests of fluid intelligence. We then assigned a third of the participants to a treatment group in which they practiced the dual n-back task, a third to a placebo control group in which they practiced another task, and the remaining third to a no-contact control group. Finally, at the end of the study, we gave everyone different versions of the cognitive ability tests. The results were clear: the dual n-back group was no higher in fluid intelligence than the control groups. Not long after we published these results, another group of researchers published a second failure to replicate Jaeggi and colleagues’ findings.

A meta-analysis cast further doubt on the effectiveness of brain training. Synthesizing the results of 23 studies, researchers Monica Melby-Lervåg and Charles Hulme found no evidence that brain training improves fluid intelligence. (A meta-analysis aggregates the results of multiple studies to arrive at more precise estimates of statistical relationships—in this case, the relationship between training and improvement in intelligence.) Jaeggi and colleagues have since published their own meta-analysis, and have come to the slightly more optimistic conclusion that brain training can increase IQ by 3 to 4 points. However, in the best studies in this meta-analysis—those that included a placebo control group—the effect of training was negligible.
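For readers unfamiliar with the mechanics, the toy example below shows how a fixed-effect meta-analysis pools studies by weighting each effect size by the inverse of its sampling variance, so large, precise studies count for more. The effect sizes and variances are made up for illustration and are not the values from Melby-Lervåg and Hulme's analysis.

```python
import numpy as np

# Toy fixed-effect meta-analysis (numbers are made up, NOT the published effect
# sizes): weight each study's effect by the inverse of its sampling variance.

effects = np.array([0.40, 0.05, 0.15, -0.02, 0.10])    # hypothetical standardized gains
variances = np.array([0.09, 0.02, 0.04, 0.01, 0.03])   # hypothetical sampling variances

weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% interval half-width)")
```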

In another highly publicized study, published last year in Nature, a team of researchers led by University of California San Francisco professor and entrepreneur Adam Gazzaley gave a sample of older adults training in a custom videogame called Neuroracer. The theory behind Neuroracer was originally proposed by the cognitive psychologists Lynn Hasher and Rose Zacks. In a series of articles, Hasher and Zacks argued that a major cause of what we now call “senior moments”—forgetfulness, inattentiveness, and other mental lapses—is mental “clutter.” That is, as we get older, we are more easily distracted by things in the outside world, and by irrelevant thoughts. Neuroracer is designed to strengthen the ability to filter out distraction. The player’s goal is to steer a car on a winding road with one hand, while using the other hand to shoot down signs of a particular color and shape, ignoring other signs.

Gazzaley and colleagues gave older adults tests of memory, attention, and other cognitive abilities before and after they practiced either Neuroracer or a control task over a 4-week period to assess transfer of training—in other words, to see whether there were any benefits of playing Neuroracer. Not surprisingly, people got better at Neuroracer. In fact, after practicing, the older adults improved to the level of a 20-year-old in the game. Moreover, the researchers claimed that there was evidence that playing Neuroracer mitigated effects of aging on certain cognitive functions. But there were problems with this study, too. One critique raised no fewer than 19 concerns about the results and methods. Compared to the placebo group, the training group showed more improvement from pre-test to post-test for only 3 of the 11 transfer measures. Also, the sample size was small, meaning that even these hints of effectiveness may not replicate, and nearly a quarter of the people in the study were dropped from the statistical analyses. Finally, there was no demonstration that the Neuroracer training made people better in real-world tasks. These concerns notwithstanding, with investment from pharmaceutical companies Pfizer and Shire, Gazzaley and colleagues have applied for FDA approval of a new game based on Neuroracer. The goal, as Gazzaley explained in a recent presentation, is for the game to “become the world’s first prescribed videogame.”

The bottom line is that there is no solid evidence that commercial brain games improve general cognitive abilities. But isn’t it better to go on brain training with the hope, if not the expectation, that scientists will someday discover that it has far-reaching benefits? The answer is no. Scientists have already identified activities that improve cognitive functioning, and time spent on brain training is time that you could spend on these other things. One is physical exercise. In a long series of studies, University of Illinois psychologist Arthur Kramer has convincingly demonstrated that aerobic exercise improves cognitive functioning. The other activity is simply learning new things. Fluid intelligence is hard to change, but “crystallized” intelligence—a person’s knowledge and skills—is not. Learn how to play the piano or cook a new dish, and you have increased your crystallized intelligence. Of course, brain training isn’t free, either. According to one projection, people will spend $1.3 billion on brain training in 2014.

It is too soon to tell whether there are any benefits of brain training. Perhaps there are certain skills that people can learn through brain training that are useful in real life. For example, University of Alabama at Birmingham psychologist Karlene Ball and her colleagues have shown that a measure called “useful field of view”—the region of space over which a person can attend to information—can be improved through training and correlates with driving performance. What is clear, though, is that brain training is no magic bullet, and that extraordinary claims of quick gains in intelligence are almost certainly wrong. As the statement from the scientific community on the brain training industry concluded, “much more research is needed before firm conclusions [on brain training] can be drawn.” Until then, time and money spent on brain training is, as likely as not, time and money wasted.

Biology of anxious temperament may lie with a problem in an anxiety 'off switch'

 

December 2, 2014

Elsevier



Persistent anxiety is one of the most common and distressing symptoms compromising mental health. Most of the research on the neurobiology of anxiety has focused on the generation of increased anxiety, i.e., the processes that "turn on" anxiety.

But what if the problem lay with the "off switch" instead? In other words, the dysfunction could exist in the ability to diminish anxiety once it has begun.

A new report in the current issue of Biological Psychiatry by researchers at the University of Wisconsin at Madison suggests that deficits in one of the brain's off switches for anxiety, neuropeptide Y receptors, are decreased in association with anxious temperament.

To conduct their work, the researchers studied 24 young rhesus monkeys to examine expression of the neuropeptide Y system in relation to anxious temperament. Neuropeptide Y is a neurotransmitter that helps regulate the body's response to stress. Anxious temperament is a trait that presents early in life and increases the risk of developing anxiety and depressive disorders.

They found that elevated anxious temperament is associated with decreased messenger RNA expression of two neuropeptide Y receptors, Y1R and Y5R, in the central nucleus of the amygdala, a region of the brain that plays an important role in regulating fear and anxiety.

"This finding is very important as it focuses our thinking about treatment on promoting recovery after stress rather than suppressing the normal adaptive reaction to threatening situations. Fear, at times, is the best possible reaction to life events. However, persistent fear can be destructive. This new finding points us in the direction of new treatments that aim to promote resilience rather than blunting one's life experiences," said Dr. John Krystal, Editor of Biological Psychiatry.

The authors agree, with first author Dr. Patrick Roseboom noting that "extreme anxiety in children is a prominent predictor of the later development of anxiety disorders and other illnesses such as depression and substance abuse. Using young rhesus monkeys in our model of anxious temperament is critical as brain structure and function in non-human primates closely resembles that of humans."

"Identifying the molecular underpinnings of why some individuals are at-risk for developing anxiety and depression has the potential to identify new treatment targets," added Roseboom. "The current findings suggest that focusing on a system that provides resilience may be an important strategy at the molecular level."


Story Source:

The above story is based on materials provided by Elsevier. Note: Materials may be edited for content and length.


Journal Reference:

  1. Patrick H. Roseboom, Steven A. Nanda, Andrew S. Fox, Jonathan A. Oler, Alexander J. Shackman, Steven E. Shelton, Richard J. Davidson, Ned H. Kalin. Neuropeptide Y Receptor Gene Expression in the Primate Amygdala Predicts Anxious Temperament and Brain Metabolism. Biological Psychiatry, 2014; 76 (11): 850 DOI: 10.1016/j.biopsych.2013.11.012

 

Why don't children belong to the clean plate club?

 

Most adults are members of the Clean Plate Club: on average, they eat about 90% of the food they serve themselves. Children do not.

New Cornell research aggregated six different studies of 326 elementary school-aged children. It showed that, if their parents are not around, the average child only eats about 60% of what they serve themselves. More than a third goes right in the trash.

Unlike adults, kids are still learning about what foods they like and how much it will take to fill them up. "It's natural for them to make some mistakes and take a food they don't like or to serve too much," says lead researcher Brian Wansink, author of Slim by Design: Mindless Eating Solutions for Everyday Life and Director of the Cornell Food and Brand Lab. "What's less natural is for them to be forced to eat their 'mistakes' by their parents."

"Yet to a loving, but frustrated parent who wants his/her non-cooperating children to be vegetable-eating members of the Clean Plate Club, there is good news in these results. They show that children who only eat half to two-thirds of the food they serve themselves aren't being wasteful, belligerent, or disrespectful," said Wansink, "They are just being normal children." This should provide comfort and reduce anxiety for frustrated Clean Plate Club parents.


Story Source:

The above story is based on materials provided by Cornell Food & Brand Lab. Note: Materials may be edited for content and length.


Journal Reference:

  1. Wansink, Brian and Katherine A. Johnson. Adults Only: Why Don't Children Belong to the Clean Plate Club? International Journal of Obesity, 2014 (in press)

 

Wood filter removes 99% of bacteria from water

 

Every year, 3.4 million people die from diseases related to water, sanitation and hygiene.

There are several technologies that could help end this terrible statistic; however, most require investment and some kind of distribution infrastructure.

A team of MIT researchers believes it may have found a new solution: wood.

The scientists set out to filter contaminated water using little more than a broken tree branch and a plastic tube. The result was surprising: the branch filtered out 99% of the bacteria present in the contaminated water sample.

The branch works as a filter, trapping bacteria harmful to human health within the wood's own cells, as can be seen in the photograph:

Electron microscope image (false color) showing E. coli bacteria (light green) trapped in the plant cells (red and blue) of the branch after filtration.

This simple, low-tech filtration system can produce up to four liters of drinking water per day. The researchers are now studying the potential of other plants that show similar effects.

Despite the excellent news, the wood filter cannot trap viruses, which are much smaller than bacteria.

Source: MIT News

 

 

Losing air: Barrage of small impacts likely erased much of the Earth’s primordial atmosphere

 

 


Today's atmosphere likely bears little trace of its primordial self: Geochemical evidence suggests that Earth's atmosphere may have been completely obliterated at least twice since its formation more than 4 billion years ago. However, it's unclear what interplanetary forces could have driven such a dramatic loss.

Now researchers at MIT, Hebrew University, and Caltech have landed on a likely scenario: A relentless blitz of small space rocks, or planetesimals, may have bombarded Earth around the time the moon was formed, kicking up clouds of gas with enough force to permanently eject small portions of the atmosphere into space.

Tens of thousands of such small impacts, the researchers calculate, could efficiently jettison Earth's entire primordial atmosphere. Such impacts may have also blasted other planets, and even peeled away the atmospheres of Venus and Mars.

In fact, the researchers found that small planetesimals may be much more effective than giant impactors in driving atmospheric loss. Based on their calculations, it would take a giant impact -- almost as massive as Earth slamming into itself -- to disperse most of the atmosphere. But taken together, many small impacts would have the same effect, at a tiny fraction of the mass.

Hilke Schlichting, an assistant professor in MIT's Department of Earth, Atmospheric and Planetary Sciences, says understanding the drivers of Earth's ancient atmosphere may help scientists to identify the early planetary conditions that encouraged life to form.

"[This finding] sets a very different initial condition for what the early Earth's atmosphere was most likely like," Schlichting says. "It gives us a new starting point for trying to understand what was the composition of the atmosphere, and what were the conditions for developing life."

Schlichting and her colleagues have published their results in the journal Icarus.

Efficient ejection

The group examined how much atmosphere was retained and lost following impacts with giant, Mars-sized and larger bodies and with smaller impactors measuring 25 kilometers or less -- space rocks equivalent to those whizzing around the asteroid belt today.

The team performed numerical analyses, calculating the force generated by a given impacting mass at a certain velocity, and the resulting loss of atmospheric gases. A collision with an impactor as massive as Mars, the researchers found, would generate a shockwave through Earth's interior, setting off significant ground motion -- similar to simultaneous giant earthquakes around the planet -- whose force would ripple out into the atmosphere, a process that could potentially eject a significant fraction, if not all, of the planet's atmosphere.

However, if such a giant collision occurred, it should also melt everything within the planet, turning its interior into a homogenous slurry. Given the diversity of noble gases like helium-3 deep inside Earth today, the researchers concluded that it is unlikely that such a giant, core-melting impact occurred.

Instead, the team calculated the effects of much smaller impactors on Earth's atmosphere. Such space rocks, upon impact, would generate an explosion of sorts, releasing a plume of debris and gas. The largest of these impactors would be forceful enough to eject all gas from the atmosphere immediately above the impact's tangent plane -- the plane touching Earth's surface at the point of impact. Only a fraction of this atmosphere would be lost following smaller impacts.
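A back-of-envelope geometric estimate, not taken from the paper, hints at why the required count runs into the tens of thousands: for an atmosphere whose scale height H is tiny compared with the planetary radius R, only about H/(2R) of the total atmospheric mass sits above any one tangent plane.

```python
# Rough geometric estimate (our own back-of-envelope, not the study's model):
# for an exponential atmosphere with scale height H much smaller than the
# planetary radius R, the fraction of total atmospheric mass lying above a
# single impact's tangent plane is roughly H / (2 * R).

H = 8.5e3      # scale height of Earth's atmosphere, in meters (illustrative)
R = 6.371e6    # Earth's radius, in meters

cap_fraction = H / (2 * R)            # mass fraction removed by one "perfect" impact
impacts_needed = 1.0 / cap_fraction   # lower bound if every impact cleared its full cap

print(f"Mass fraction above one tangent plane: {cap_fraction:.1e}")
print(f"Impacts needed even in the best case: {impacts_needed:,.0f}")
# Roughly 1,500 even if every impact cleared its whole cap; because most
# planetesimals eject only part of that gas, tens of thousands of impacts are
# required, consistent with the study's estimate.
```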

To completely eject all of Earth's atmosphere, the team estimated, the planet would need to have been bombarded by tens of thousands of small impactors -- a scenario that likely did occur 4.5 billion years ago, during a time when the moon was formed. This was a period of chaos in the young solar system, as hundreds of thousands of space rocks whirled about, frequently colliding to form the planets, the moon, and other bodies.

"For sure, we did have all these smaller impactors back then," Schlichting says. "One small impact cannot get rid of most of the atmosphere, but collectively, they're much more efficient than giant impacts, and could easily eject all the Earth's atmosphere."

Runaway effect

However, Schlichting realized that the sum effect of small impacts may be too efficient at driving atmospheric loss. Other scientists have measured the atmospheric composition of Earth compared with Venus and Mars. These measurements have revealed that while each planetary atmosphere has similar patterns of noble gas abundance, the budget for Venus is similar to that of chondrites -- stony meteorites that are primordial leftovers of the early solar system. Compared with Venus, Earth's noble gas budget has been depleted 100-fold.

Schlichting realized that if both planets were exposed to the same blitz of small impactors, Venus' atmosphere should have been similarly depleted. She and her colleagues went back over the small-impactor scenario, examining the effects of atmospheric loss in more detail, to try and account for the difference between the two planets' atmospheres.

Based on further calculations, the team identified an interesting effect: Once half a planet's atmosphere has been lost, it becomes much easier for small impactors to eject the rest of the gas. The researchers calculated that Venus' atmosphere would only have to start out slightly more massive than Earth's in order for small impactors to erode the first half of the Earth's atmosphere, while keeping Venus' intact. From that point, Schlichting describes the phenomenon as a "runaway process -- once you manage to get rid of the first half, the second half is even easier."

Time zero

During the course of the group's research, an inevitable question arose: What eventually replaced Earth's atmosphere? Upon further calculation, Schlichting and her team found that the same impactors that ejected gas may also have introduced new gases, or volatiles.

"When an impact happens, it melts the planetesimal, and its volatiles can go into the atmosphere," Schlichting says. "They not only can deplete, but replenish part of the atmosphere."

The group calculated the amount of volatiles that may be released by a rock of a given composition and mass, and found that a significant portion of the atmosphere may have been replenished by the impact of tens of thousands of space rocks.

"Our numbers are realistic, given what we know about the volatile content of the different rocks we have," Schlichting notes.

Jay Melosh, a professor of earth, atmospheric, and planetary sciences at Purdue University, says Schlichting's conclusion is a surprising one, as most scientists have assumed Earth's atmosphere was obliterated by a single, giant impact. Other theories, he says, invoke a strong flux of ultraviolet radiation from the sun, as well as an "unusually active solar wind."

"How the Earth lost its primordial atmosphere has been a longstanding problem, and this paper goes a long way toward solving this enigma," says Melosh, who did not contribute to the research. "Life got started on Earth about this time, and so answering the question about how the atmosphere was lost tells us about what might have kicked off the origin of life."

Going forward, Schlichting hopes to examine more closely the conditions underlying Earth's early formation, including the interplay between the release of volatiles from small impactors and from Earth's ancient magma ocean.

"We want to connect these geophysical processes to determine what was the most likely composition of the atmosphere at time zero, when the Earth just formed, and hopefully identify conditions for the evolution of life," Schlichting says.

Missing ingredient in energy-efficient buildings: Trained people

 

Study building 1.

More than one-third of new commercial building space includes energy-saving features, but without training or an operator's manual many occupants are in the dark about how to use them.

Julia Day recently published a paper in Building and Environment that for the first time shows that occupants who had effective training in using the features of their high-performance buildings were more satisfied with their work environments. Day did the work as a doctoral student at Washington State University; she is now an assistant professor at Kansas State University.

Closed blinds open research path

She was a WSU graduate student in interior design when she walked into an office supposedly designed for energy efficiency and noticed that the blinds were all closed and numerous lights were turned on. The building had been designed to use daylighting strategies to save energy from electric lighting.

After inquiring, Day learned that cabinetry and systems furniture throughout the building blocked nearly half of the occupants from access to the blind controls. Only a few determined folks would climb on or under their desks to operate the blinds.

"People couldn't turn off their lights, and that was the whole point of implementing daylighting in the first place," she said. "The whole experience started me on my path."

Ventilation indicators mistaken for fire-alarm lights

Working with David Gunderson, professor in the WSU School of Design and Construction, Day looked at more than 50 high-performance buildings across the U.S. She gathered data, including their architectural and engineering plans, and did interviews and surveys of building occupants.

She examined how people were being trained in the buildings and whether their training was effective. Sometimes, she learned, the features were simply mentioned in a meeting or a quick email was sent to everyone, and people did not truly understand how their actions could affect the building's overall energy use.

One LEED gold building had lights throughout to indicate the best times of day to open and close windows to take advantage of natural ventilation. A green light indicated it was time to open windows.

"I asked 15 people if they knew what the light meant, and they all thought it was part of the fire alarm system,'' she said. "There's a gap, and people do not really understand these buildings.''

Efficient commercial space expanding

According to CBRE Research, the amount of commercial space that is certified as high-performance in energy efficiency through the U.S. Environmental Protection Agency's Energy Star or U.S. Green Building Council's LEED has grown from 5.6 percent of commercial space in 2005 to 39.3 percent at the end of 2013.

Yet in many cases, the corporate culture of energy use in buildings hasn't caught up. While at home our mothers nagged us to turn off the lights when we left a room or to shut the door because "you don't live in a barn," office culture has often ignored and even discouraged common-sense energy saving.

Educating for an energy-focused culture

Day found that making the best use of a highly efficient building means carefully creating a culture focused on conservation. In buildings with an energy-focused culture, workers were engaged, participated and were satisfied with their building environment.

"If they received good training, they were more satisfied and happier with their work environment,'' she said.

She is working to develop an energy lab and would like to develop occupant training programs to take advantage of high-performance buildings.

"With stricter energy codes, the expectations are that buildings will be more energy efficient and sustainable,'' she said. "But we have to get out of the mindset where we are not actively engaged in our environments. That shift takes a lot of education, and there is a huge gap right now."

A look at how photography has influenced and shaped society through people, art, events and technology.

 

 

Once upon a time

 

As significant snapshots go, it’s hard to trump the substance of the first photograph ever taken. Strange, too, that we only know it was taken in either 1826 or 1827. Frenchman Joseph Nicéphore Niépce’s ‘View From The Window at Le Gras’, as it has come to be known, remains preserved in a state of ambiguity, while today time and date stamps flow matter-of-factly through our devices. Nearly two hundred years later, photography is a medium so pervasive that we are in danger of trivialising its powers. This is an exploration of those powers and their capacity to communicate worlds not necessarily unknown, but certainly perspectives unseen.

This story takes you on a journey from the earliest days of photography along a path that looks at everything from photography as an art form to photos that literally changed the world.

1765 - 1833 | Joseph Nicéphore Niépce


The opportunity to influence how a moment is visually represented must be seen as a watershed in human history. Before that juncture, if you didn’t witness something yourself, all you had to go on were the words or drawings of others in your mind’s eye. But as the technology improved and its use quickly spread, photography became another medium through which news could travel and crucially, be sold.

While news corporations and advertisers rubbed their hands with glee, photography’s power to inspire and delight met head on with its capacity to horrify and outrage. As your memory of seismic world events will remind you, your mind works strongest when appalled.

The advent of digital photography in the 1990s meant that the quality of cameras rose at the same time as the price of entry fell. This coincided with the dawn of the internet, so our exposure to powerful images exploded.

Now, our days are awash with the freeze-framed moment. Millions of pictures are uploaded every minute. This egalitarian reality has created the need for professionals to seek new and untried ways to distinguish their work and elevate their art.

LIGHT + DRAWING = PHOTOGRAPHY

The word photography derives from the Greek photos (“light”) and graphe (“drawing”). The term was coined by a French painter, Hercules Florence, who used it in his diary to describe the process.

What gives photographs their power? Is it that they haul precious particles of time and space right into our present moment? Or is it the visual cues of shapes and colours that spark our memories? The answer, of course, is a combination of both, when we consider what our understanding of the human psyche tells us.

Our consciousness is less concerned with spectacle than with connecting the dots. Newspapers and magazines are skilled in pushing agendas or jokes with well-written captions, but photographs are most powerful when our brains make more intensely personal connections with the image. That’s where our instincts to relate and reveal, to celebrate and protect, come into their own.

Captions are not essential to our appreciation of photography (and are indeed a relatively recent development), but often the circulation and appropriation of iconic images means they inevitably come to play a part in our understanding of public events. To simplify the matter, Robert Hariman, co-author of No Caption Needed, has put it this way: “It might help to ask what one finds in photography that they don’t find elsewhere…”

1.3 million Hurricane Sandy photos were posted to Instagram at a speed of 10 photos per second


Photography as a commodity has become a multi-billion dollar industry, made possible by the worlds of fashion, sport and celebrity. With all artistic respect to the 2000 snaps on your phone, there exists an altogether different breed of photograph, those that have made the jump into a realm where iconography, scarcity and contemporary art meet to generate million-dollar valuations and legendary statuses.

Take, for example, ‘Rhein II’, by Andreas Gursky (1999). It’s not known who bought this 12ft-wide, Plexiglass-mounted picture of the Rhine River in Germany for $4.3 million (US) at auction in 2011, but they obviously saw something in it that stirred them. It’s one of a set of six; the others hang at the Museum of Modern Art in New York and the Tate Modern in London.

And consider the fees that the uber photographers of the world can command. Names such as Morgan Norman, Lynsey Addario, George Steinmetz, Terry Richardson and Annie Leibovitz regularly hit the rich lists - and for good reason. Their attention to detail and ability to dominate their chosen niches leave little room for competitors, only innovators. These aren’t overnight success stories; they’re careers forged through years of toil and hustle.

1923 LEICA CAMERA $2.8 MILLION


The most expensive camera ever sold was a rare 1923 Leica camera, which went for $2.8 million at auction in Vienna.

There are different ways to look at the popularisation of photography through technology. Some welcome change and argue in favour of progress. Others bemoan the belittlement of an art form and of an education in shooting on film and working in darkrooms. While our professional-standard camera phones leave little room for the sentimental, they have great storage capacity (even for the dud photos).

The interesting thing about photography is that despite the march of technology, the essence of a captured image that first entranced Joseph Nicéphore Niépce remains. It’s far easier for anyone to capture a technically proficient image, but the skill of making it aesthetically pleasing and emotionally resonant doesn’t come with a digital camera. It remains an elusive skill.

From the first daguerreotype camera through to the very latest digital SLR, the pace of change has been dramatic. Those who have moved with the times will still work fully cognizant of light and exposure, but the difference today is that errors are easily tweaked back on the computer with photo editing software. Every click of the shutter used to cost money, but now there’s far more room to fire away, experiment and take chances. This, perhaps, is technology’s greatest achievement.

JAMES CLERK MAXWELL 1861


James Clerk Maxwell, a Scottish physicist, created the first colour photograph in 1861. He photographed a tartan ribbon three times, using a red, a green and a blue filter, and combined the three images into the final colour composite.

One only needs to spend some time browsing Instagram’s most popular images to see that some people are better at acquiring followers than others. So despite a proliferation of social media platforms and photo filtering apps, those with an eye for a frame and a sense of timing are still able to distinguish their abilities.

The psychologist and Nobel Prize winner Daniel Kahneman has pointed out that “The ‘Instagram Generation’ now experiences the present as an anticipated memory”, which is to say that we have a brand new demographic that treats the present moment as something that needs to be reflected upon later. Futurist Jason Silva has pointed out that we now, through cropping, filtering and anticipating ‘Likes’, take it upon ourselves to ‘design’ what that later moment is going to feel like.

Is this a good thing or a bad thing? It’s certainly a real thing and it’s here to stay. While it’s easy to dismiss this development as a narcissistic one, it’s also reasonably straightforward to argue that it’s a liberating development. We have been given the power to design our own memories and narratives, the freedom to choose how we want to remember things.

PHILIPPE KAHN 1997 -

The first photo to be shared using a mobile phone

The first photo to be shared using a mobile phone was taken in 1997 by Philippe Kahn. He sent snaps from the maternity ward where his daughter Sophie was born. Kahn, an inventor from France, is credited with developing the world's first camera phone.

The Arcanum - The Magical Academy for the Mastery of the Arts


Now that photography has become something that happens largely through our mobile devices, its collision course with the world of wearables is clear. Nascent technologies such as Google Glass, the Apple Watch and health monitor Fitbit tell us that everything from brainwave readers to exercise trackers will hit the mainstream within a few years. This isn’t a prediction, it’s a trend, with 90 million wearable devices expected to have shipped by the end of 2014.

In the 1980s, professional photographers looked at emerging photographic technology such as autofocus and assumed that it would never be fast enough for shooting sports or wildlife. Little could they guess, however, that within two decades someone would strap a GoPro camera to a falcon to capture extraordinary high-definition video footage of it hunting and catching a crow.

As cloud storage grows more affordable, it does not defy logic to conclude that we will all, perhaps sooner than we realise, be recording every living minute of our days for future generations to play back and understand how we lived. Chris Dancy, the self-described ‘most connected man in the world’, already spends most days hooked up to at least ten data-capturing devices.

Projects like The Arcanum will see technology take centre stage as the Master & Apprentice model becomes available to anyone with an internet connection. It is designed to be a mentor-led, goal-based, social learning experience like no other in existence and is, according to The Arcanum CEO Peter Giordano, “the most personal, effective and fun way to Level Up your skills in photography.”

- 880 BILLION PHOTOS IN 2014 -


Every 2 minutes today we snap as many photos as the whole of humanity took in the 1800s. Yahoo meanwhile estimates that in 2014, about 880 billion photographs will be taken. That's 123 photos for every man, woman and child on Earth.

A brave new world or a dystopian mess? Technological change only comes in two packages: incremental or enormously disruptive. Speculating on the darker, scarier possibilities for our future is a fool’s errand, because the only certainty is change itself. Some, like URME, are dedicated to protecting the public from surveillance and creating a safe space to explore our digital identities with prosthetic masks.

Let’s focus instead on everything that’s real and beautiful about photography.

As our technologies continue to collect, tweak, save and transmit an ever-growing flux of data, we are uploading nothing less than a new dimension of information. These photos are a reminder that there are landscapes that bind us all, whether they be real or of the mind. The gallery above consists of a selection of images with stories that may not be immediately obvious. When provided with context, however, they pack some emotional clout.

Each picture has a story that tugs at the cords that run through us all. Even though the people and animals featured are complete strangers, it is their accompanying narrative that we as human beings can relate to.

- LEFT VS RIGHT -


A study by Kelsey Blackburn and James Schirillo from Wake Forest University found that the left side of people’s faces is perceived and rated as more aesthetically pleasing than the right.

 


 

Consider, as a curtain closer, these words from Ross Andersen, the deputy editor of Aeon Magazine. Writing about the photographs taken by the Hubble Space Telescope, Andersen said: “Through the sheer aesthetic force of its discoveries, the Hubble distilled the complex abstractions of astrophysics into singular expressions of colour and light, vindicating Keats’s famous couplet… Though philosophy has hardly registered it, the Hubble has given us nothing less than an ontological awakening, a forceful reckoning with what is. The telescope compels the mind to contemplate space and time on a scale just shy of the infinite.”