Saturday, January 31, 2015

The pros and cons of a driverless future

 

 


Despite obvious advantages, the era of autonomous vehicles may not be all plain sailing (Photo: Shutterstock)

It may seem as if the only arguments against self-driving cars come from two kinds of people – those fearful of any scenario where they might have to forgo control behind the wheel and those who distrust all technology out of hand. Lolling about while a computer gets you through traffic has its attractions for many of us, and so far there has been little discussion of the potential downsides of a driverless future. But a new study has pointed out some potential flaws in this looming auto utopia.

The upsides seem pretty self-evident. The first advantage of driverless cars is one of multitasking. While being taken from here to there in a driverless car, you can do anything you want. Eat, sleep, work, chat with relatives – commute time is no longer down time.

Then there's safety. Although this has yet to be borne out, the theory goes that people, on the whole, are not the best drivers out there. We might all think we're Mario Andretti behind the wheel, but sadly and all too obviously, we are not. By leaving the driving to a whole slew of computers, sensors, servos and software, getting from home to office should be rendered accident-free.

The other potential plus is efficiency. While a driverless car is using its digital prowess to whisk you to the store, it can, and will, be talking to all the other driverless cars out there. And not only chatting with them, but talking back and forth with smart roadways. This will allow all sorts of efficiencies to be realized. One driverless car will know that the driverless car in front of it will not suddenly slow down, so the gaps between them can be shortened. And the first driverless car in this line, when it realizes that it will have to slow down, or stop, will relay this information to all the other driverless cars in the line, and they can react as a unit. Intelligent roadways will allow the timing of traffic lights to be optimized to the known level of traffic density and speed.
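
For a rough sense of where those efficiencies come from (a back-of-envelope relation from basic traffic-flow theory, not from the article), lane throughput q depends on speed v, effective vehicle length L and the time headway t_h each vehicle keeps to the one ahead:

q = v / (L + v * t_h)

At 15 m/s with L of about 5 m, trimming t_h from a human-typical 1.5 s to 0.5 s roughly doubles the number of vehicles a lane can carry, which is the kind of gain vehicle-to-vehicle coordination is expected to unlock.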

Swoosh, off you go! From home to office to store and back home again with nary a red light to be seen along the way!

Scott Le Vine, Alireza Zolfaghari, and John Polak have a different take. The researchers, from the Department of Geography at SUNY New Paltz, New York and the Centre for Transport Studies, Department of Civil and Environmental Engineering, Imperial College London, have recently published a study that, while not slamming on the brakes for self-driving cars, does point out that it might not all be a smooth road ahead.

"Autonomous cars are expected to increase road network capacity and reduce the disutility of travel time," but " ... under certain circumstances these benefits may be in conflict."

The study examined how two of the three stated benefits – more comfort and less traffic – can live together by running a computer simulation of how driverless cars would work in practice, and comparing that to trains.

Commuter trains are used in many of the same ways as driverless cars; you get in, sit down, and read a book or do some work on your commute. Trains work very well for this because trains are smooth. Once they are up and running, they offer a much more comfortable environment over the stretch of a commute than cars, even driverless ones, could potentially provide.

It is that potential problem that the study points to. If driverless cars do a lot of stop/start work like normal cars, even at a reduced velocity, they will become a less workable environment. But if they speed up and slow down more smoothly for the passenger's sake, essentially mimicking the acceleration and deceleration of trains, then self-driving cars would forfeit much of their capacity to alleviate congestion in the process.

"Acceleration has big impacts on congestion at intersections because it describes how quickly a vehicle begins to move," says lead researcher Scott Le Vine. "Think about being stuck behind an 18-wheeler when the light turns green. It accelerates very slowly, which means that you're delayed much more than if you were behind a car that accelerated quickly."

The team replicated traffic at a run-of-the-mill four-way urban intersection where 25 percent of the vehicles were driverless. In some simulations, the driverless cars traveled the way that light rail trains do, gently on the gas then gently on the brakes. In other words, more comfortable than a normal car, but still herky-jerky at times. In other scenarios, the driverless rides started and stopped like a high-speed rail train: smoothness in all things.

Le Vine and colleagues tinkered with a number of parameters, such as longer yellow lights or different following distances. They modeled 16 scenarios against a control group with all human-driven cars, running each simulation for an hour and repeating it 100 times. They then assessed the influence each scenario had on traffic in terms of delays and road capacity.
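
To make the shape of that experiment concrete, here is a minimal, purely illustrative sketch in Python. It is not the researchers' model; the queue length, headway, cruise speed and acceleration values are invented for illustration. It simply shows why gentler, rail-like starts empty a queue of cars more slowly once a light turns green:

def queue_discharge_time(n_cars, accel, gap_s=1.0, car_length=5.0, cruise=15.0):
    """Time (in seconds) for n_cars queued at a red light to cross the stop line
    once the light turns green. Each car starts gap_s seconds after the one ahead
    and accelerates uniformly at accel m/s^2 up to the cruise speed (m/s)."""
    t_to_cruise = cruise / accel                  # time spent accelerating
    d_to_cruise = 0.5 * accel * t_to_cruise ** 2  # distance covered while accelerating
    clear_time = 0.0
    for i in range(n_cars):
        start_delay = i * gap_s     # staggered, car-by-car starts
        distance = i * car_length   # how far back from the stop line car i waits
        if distance <= d_to_cruise:
            travel = (2 * distance / accel) ** 0.5
        else:
            travel = t_to_cruise + (distance - d_to_cruise) / cruise
        clear_time = max(clear_time, start_delay + travel)
    return clear_time

print(queue_discharge_time(10, accel=2.5))  # brisk, car-like starts: ~15 s
print(queue_discharge_time(10, accel=0.7))  # gentle, light-rail-like starts: ~20 s

In this toy version the gently accelerating queue takes roughly a third longer to clear the stop line; the full simulation quantifies how that per-cycle penalty compounds into delays and lost intersection capacity.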

The bottom line was this: In every single test, self-driving cars tuned to create a comfortable, rail-like ride made congestion worse than it would have been in a baseline scenario with people behind every wheel.

When driverless cars accelerated and decelerated in the style of light rail, delays increased by anywhere from 4 percent to 50 percent, and the number of cars traveling through the intersection fell by between 4 percent and 21 percent. Going for high-speed-rail smoothness, those numbers got even worse: delays increased by 36 percent to nearly 2,000 percent and intersection capacity fell by between 18 percent and 53 percent.

"Our findings suggest a tension in the short run between these two anticipated benefits (more productive use of travel time and increased network capacity), at least in certain circumstances," say the researchers. "It was found that the trade-off between capacity and passenger-comfort is greater if autonomous car occupants program their vehicles to keep within the constraints of HSR (in comparison to LRT)."

It is also worth pointing out that these simulations involved only normal cars and driverless cars. There were no trucks, buses, or that even more chaotic element, pedestrians.

As a single simulation-based study, the research may not add up to a crippling blow for driverless car advocates, but it shows that there's work to be done in order to find a liveable recipe for the coming mix of autonomous and conventional vehicles on our roads.

Source: Autonomous cars: The tension between occupant experience and intersection capacity (Science Direct) via The Atlantic.

 

Friday, January 30, 2015

Using a single molecule to create a new magnetic field sensor

 

 

Fri, 01/30/2015 - 9:16am

Univ. of Liverpool

 

Scanning tunneling microscope (STM) image of iron phthalocyanine.

Researchers at the Univ. of Liverpool and Univ. College London (UCL) have shown a new way to use a single molecule as a magnetic field sensor.

In a study published in Nature Nanotechnology, the team shows how magnetism can manipulate the way electricity flows through a single molecule, a key step that could enable the development of magnetic field sensors for hard drives that are a tiny fraction of their present size.

In hard drives, magnetized areas on spinning disks are used to store information. As the magnetized areas pass a magnetic sensor, they trigger fluctuations in electric current flowing through the sensor, allowing the data to be read. Making these areas smaller increases a hard drive's storage capacity without making it bigger, but also requires a smaller sensor.

Fadi El Hallak, a researcher at UCL who conceived of the study and now works for Seagate Technology, said: "Making smaller sensors isn't trivial. It is difficult to use magnetism to control the current flowing through objects the size of single molecules because the response to changes in the magnetic field is often very weak."

To get around this problem, the researchers developed a method of magnifying the effect of the magnetism on the flow of current in the detector.

Quantum mechanical tunneling
First, they created a junction in which a single magnetic molecule was weakly coupled to two metallic leads. The barriers between the molecule and the nearby metals were high enough that electrical charge in the metals could not flow over the barriers.

However, a small fraction of the electrons can effectively go through the barriers by undergoing quantum mechanical tunneling, which enables a tiny current to flow through the molecule when a voltage is applied across it.
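
For reference, the scale of that effect follows from textbook quantum mechanics rather than anything specific to this study: for a simple rectangular barrier of height V0 and width d, the transmission probability for an electron of energy E and mass m falls off exponentially,

T ≈ exp(-2 κ d), with κ = sqrt(2 m (V0 - E)) / ħ

so even small changes in the effective barrier produce large fractional changes in the already tiny tunneling current.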

The scientists configured the junction so that the molecule was much more strongly connected to one metal lead than the other. This asymmetry greatly amplifies the effect of the magnetic field on the tunneling current.

Dr. Mats Persson, from the Univ. of Liverpool's Dept. of Chemistry, said: "This research demonstrates a new kind of single molecule sensor for magnetic fields, which is promising for creating new computer technologies."

Source: Univ. of Liverpool

DNA nanoswitches reveal how life’s molecules connect

 

Fri, 01/30/2015 - 8:17am

Kat J. McAlpine, Wyss Institute for Biologically Inspired Engineering

Gel electrophoresis, a common laboratory process, sorts DNA and small proteins by size and shape, using electrical currents to move molecules through small pores in a gel. The process can be combined with novel DNA nanoswitches, developed by Wyss Associate Faculty member Wesley Wong, to allow for the simple and inexpensive investigation of life's most powerful molecular interactions. Image: Wyss Institute at Harvard Univ.


A complex interplay of molecular components governs almost all aspects of biological sciences—healthy organism development, disease progression and drug efficacy are all dependent on the way life's molecules interact in the body. Understanding these biomolecular interactions is critical for the discovery of new, more effective therapeutics and diagnostics to treat cancer and other diseases, but currently requires scientists to have access to expensive and elaborate laboratory equipment.

Now, a new approach developed by researchers at the Wyss Institute for Biologically Inspired Engineering, Boston Children's Hospital and Harvard Medical School promises a much faster and more affordable way to examine biomolecular behavior, opening the door for scientists in virtually any laboratory world–wide to join the quest for creating better drugs. The findings are published in Nature Methods.

"Biomolecular interaction analysis, a cornerstone of biomedical research, is traditionally accomplished using equipment that can cost hundreds of thousands of dollars," said Wyss Associate Faculty member Wesley P. Wong, PhD, senior author of study. "Rather than develop a new instrument, we've created a nanoscale tool made from strands of DNA that can detect and report how molecules behave, enabling biological measurements to be made by almost anyone, using only common and inexpensive laboratory reagents."

Wong, who is also assistant professor at Harvard Medical School in the Depts. of Biological Chemistry & Molecular Pharmacology and Pediatrics and Investigator at the Program in Cellular and Molecular Medicine at Boston Children's Hospital, calls the new tools DNA "nanoswitches".

Nanoswitches comprise strands of DNA onto which molecules of interest can be strategically attached at various locations along the strand. Interactions between these molecules, such as the successful binding of a drug compound with its intended target (a protein receptor on a cancer cell, for example), cause the shape of the DNA strand to change from an open, linear form to a closed loop. Wong and his team can easily separate and measure the ratio of open DNA nanoswitches vs. their closed counterparts through gel electrophoresis, a simple lab procedure already in use in most laboratories, which uses electrical currents to push DNA strands through small pores in a gel, sorting them based on their shape.

"Our DNA nanoswitches dramatically lower barriers to making traditionally complex measurements," said co–first author Ken Halvorsen, formerly of the Wyss Institute and currently a scientist at the RNA Institute at Univ. of Albany. "All of these supplies are commonly available and the experiments can be performed for pennies per sample, which is a staggering comparison to the cost of conventional equipment used to test biomolecular interactions."

To encourage adoption of this method, Wong and his team are offering free materials to colleagues who would like to try using their DNA nanoswitches.

"We've not only created starter kits but have outlined a step–by–step protocol to allow others to immediately implement this method for research in their own labs, or classrooms," said co–first author Mounir Koussa, a graduate candidate in neurobiology at Harvard Medical School.

"Wesley and his team are committed to making an impact on the way bio–molecular research is done at a fundamental level, as is evidenced by their efforts to make this technology accessible to labs everywhere," said Wyss Institute Founding Director Donald Ingber, MD, PhD, who is also the Judah Folkman Professor of Vascular Biology at Boston Children's Hospital and Harvard Medical School and a Professor of Bioengineering at Harvard SEAS. "Biomedical researchers all over the world can start using this new method right away to investigate how biological compounds interact with their targets, using commonly available supplies at very low cost."

Source: Wyss Institute for Biologically Inspired Engineering at Harvard Univ.

Everyday objects photographed in super zoom reveal a new universe of detail

 

 

Source: Gizmodo Brasil

OpenBiome will pay for poo

 

 

If you think your better half buys a lot of crap, then you might want to consider OpenBiome before starting on the criticism. The American non-profit is paying donors dollars for their doo-doo in an effort to gather more materials for fecal microbiota transplants (FMTs), a relatively new, but 90 percent effective, treatment for the debilitating Clostridium difficile infection (CDI).

Clostridium difficile (C. difficile) is a bacterium found in soil, air, water, and human and animal feces. Although many people carry the bacterium and suffer no ill effects, others will experience severe diarrhea, abdominal pain and fever. More extreme cases can require hospitalization and perhaps even lead to death. OpenBiome claims that between 14,000 and 30,000 deaths are estimated to be caused by CDI each year in the US alone.

While healthy people are unlikely to develop CDI, those taking antibiotics are at higher risk of picking up C. difficile from infected surfaces. This is because antibiotics kill good as well as bad bacteria in the gastrointestinal tract, providing a site for the bacteria to gain a foothold. According to the Mayo Clinic, the number of cases has increased rapidly over the past two decades, with the elderly who take antibiotics being particularly prone.

Though ceasing the antibiotic used to treat the primary infection will generally be effective in treating mild CDI cases, more targeted antibiotic treatment is the standard procedure for more serious cases. However, the Centers for Disease Control now says that fecal transplants appear to be the most effective method for helping patients with repeat C. difficile infections.

The US Food and Drug Administration has also classified FMT as an "investigational drug" and since the middle of 2013 it has allowed doctors to treat patients with C. difficile without their having to make the usual Investigational New Drug application. The problem, however, is that even as acceptance for fecal transplants grows – the Mayo Clinic began performing them in 2011 – the amount of fecal matter available does not. Typically it is close relations who provide the material.

The OpenBiome team says they founded their non-profit, “after watching a friend and family member suffer through 18 months of C. difficile and 7 rounds of vancomycin before finally receiving a successful, life-changing Fecal Microbiota Transplantation (FMT)." After seeing the difficulty of securing the treatment first hand, they realized that expanded access was key.

We have written about less-traditional uses of FMT before, though in the case of Dr Jeff Leach he was researching gut bacteria and the hunter-gatherer diet, rather than treating a stubborn case of C.difficile.

But why does this work?

The importance lies in gut flora and healthy gut bacteria. Scientists and medical professionals are beginning to understand that the key to many diseases and even allergies may lie in what lives in people’s guts. In fact, "hereditary gut bacteria" may play a role even in obesity, with researchers at Kings College London and Cornell University examining the potential of probiotic treatments for the obese.

FMT involves introducing stool containing microbes that include bacteria, fungi and viruses from a healthy donor into the gut of the person suffering a C. difficile infection. This is performed more or less in the way one would (or would try not to) imagine: via colonoscopic or nasogastric administration – the latter refers to a tube inserted into the nose or mouth and down the throat.

Helpful chart on how you can help others and how many of them you can help (Image: OpenBiome)

OpenBiome is seeking healthy donors, who are financially remunerated at US$40 per sample, with those donating five times each week pocketing an extra $50. However, all donations must be made on-site at their Massachusetts suite, and all donors go through a lengthy screening process and are rescreened every 60 days. Donors can also win prizes for the "biggest single donation of the month" or the most donations. According to the non-profit, less than 20 percent of people screened go on to become donors. For those on the receiving end, the non-profit charges $250 per treatment to cover its costs.

Synthetic stool

For those worried about what Scientific American termed the "ick factor" surrounding FMT, a synthetic stool sample, called "rePOOPulate," was developed last year for the treatment of C. difficile with the results published in the journal Microbiome in 2013.

"Patient concerns" is the way both the rePOOPulate developers and those at OpenBiome describe said “ick” factor inherent in discussions of FMT. OpenBiome is also working to develop a synthetic alternative, partnering with Assembly Biosciences in researching and product supply. “[We] believe that supporting these research efforts will be critical to improving treatment options," says the company.

 

Source: OpenBiome

 

New Portable Vacuum Standard

 

 

January 23, 2015

Contact: Jacob Ricker
(301) 975-4475


The PVS is housed within the white “igloo” enclosure at left. At right is an auxiliary vacuum system used for pressure calibrations.

A novel Portable Vacuum Standard (PVS) has been added to the roster of NIST’s Standard Reference Instruments (SRI). It is now available for purchase as part of NIST’s ongoing commitment to disseminate measurement standards and thereby reduce the need for the expensive and time-consuming process of transporting customer instruments to NIST for calibration.

The PVS is a compact, high-accuracy, low-pressure/vacuum laboratory standard that is calibrated directly against the primary standard at NIST. It enables high-precision calibrations, measurements, or inter-laboratory comparisons (ILC) at a customer’s facility.

It can be used as a direct replacement for a commercial mercury manometer, while offering similar or better uncertainties and lower cost of ownership. The PVS package is also more rugged and does not contain mercury – a significant safety issue for the common high-accuracy manometers that it replaces.

Many customers do not have the technical expertise to build their own standards or only need a standard for a short period of time, such as during an ILC. For about $100,000 per unit (depending on specifications and configuration), the PVS can meet those needs. NIST will provide the relevant calibration reports and method for analyzing the measurements.

The latest version of the standard, which has been under development for more than a decade by scientists in NIST's Thermodynamic Metrology Group, enables pressure measurements from 1 Pa to 130,000 Pa. (Air pressure at sea level is about 101,325 Pa.) It is possible to extend the instrument’s range to 370,000 Pa as needed.

The PVS combines two different kinds of low-pressure sensors enclosed in an insulated container the size of a suitcase that maintains a constant internal temperature to within 5 mK. The first is a resonance silicon gauge (RSG) that monitors two capacitance diaphragm gauges (CDGs). RSGs are microelectromechanical systems (MEMS) that measure the effect of pressure-induced strain on the resonant frequency of a silicon oscillator. CDGs measure pressure-induced changes in the position of an alloy diaphragm that serves as one plate of a capacitor, and are the workhorse sensors for most high-precision vacuum operations.

Combining the two, says project scientist Jay Hendricks, brings unique benefits: "CDGs have extremely fine resolution at low pressure. RSGs have outstanding long-term drift stability—in the range of 0.01%, which is a factor of 10 better than the CDGs. So to get the best of both, we use an RSG to calibrate the CDGs.”
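
As a rough illustration of that calibrate-one-gauge-against-another idea (a hypothetical sketch with made-up numbers and names, not NIST's actual procedure), a slowly drifting CDG can be periodically re-fitted against the more stable RSG at a few shared pressure points:

def fit_linear_correction(cdg, rsg):
    """Least-squares slope and offset mapping raw CDG readings (Pa)
    onto the drift-stable RSG readings (Pa) taken at the same set points."""
    n = len(cdg)
    mean_c = sum(cdg) / n
    mean_r = sum(rsg) / n
    cov = sum((c - mean_c) * (r - mean_r) for c, r in zip(cdg, rsg))
    var = sum((c - mean_c) ** 2 for c in cdg)
    slope = cov / var
    return slope, mean_r - slope * mean_c

# Hypothetical check points read on both gauges (Pa); this CDG has drifted ~0.2% high.
cdg_raw = [100.2, 500.9, 1001.9, 5009.5]
rsg_ref = [100.0, 500.0, 1000.0, 5000.0]
slope, offset = fit_linear_correction(cdg_raw, rsg_ref)
corrected = [slope * p + offset for p in cdg_raw]  # drift-corrected CDG pressures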


Any mention or image of commercial products within NIST web pages is for information only; it does not imply recommendation or endorsement by NIST.

Potential new drug for lung cancer

 

Neutron beams reveal how two potential pieces of Parkinson’s puzzle fit

 

Thu, 01/29/2015 - 11:44am

Chad Boutin, NIST

The proteins GCase (in pink) and α-syn (blue) form a complex in the cell membrane. Neutron reflectometry (suggested by the yellow beam) revealed the structure of the complex. α-Syn shifts GCase slightly away from the membrane, possibly contributing to effects related to Parkinson’s disease. Image: Alan Hoofring/NIH Medical Arts

To understand diseases like Parkinson’s, the tiniest of puzzles may hold big answers. That’s why a team including scientists from NIST has determined how two potentially key pieces of the Parkinson’s puzzle fit together, in an effort to reveal how the still poorly understood illness develops and affects its victims.

This puzzle is a tough one because its pieces are not only microscopic but 3-D, and can even change shape. The pieces are protein molecules whose lengthy names are abbreviated as GCase and α-syn. The two proteins wrap around each other and take on a complicated shape before attaching themselves to the membrane surface inside a neural cell in a victim’s brain. 

While much remains unknown about Parkinson’s, clues abound that the proteins’ behavior is somehow important. Parkinson’s victims have a buildup of α-syn in their cells, a possible factor in the dementia that the disease often brings. They also are far more likely to have a mutation in the gene that instructs cells to create GCase. Low levels of GCase cause another disease, Gaucher, and in some individuals suffering from both Parkinson’s and Gaucher simultaneously, Parkinson’s may appear at a younger age. 

To get a better handle on how these proteins operate in the body, the team—which also included scientists from the National Institutes of Health (NIH) and Carnegie Mellon Univ.—came to the NIST Center for Neutron Research (NCNR) to get a picture of how the two proteins combine into a single unit called a complex that interacts with cell membranes. Using techniques including neutron reflectometry, the team teased out the first-ever structural picture of the GCase/α-syn complex, including their shape change, which NIH’s Jennifer Lee says would not have been detectable by any other methods. 

“It gives us a potential interaction model of the two and structural insights in how α-syn may interfere with activity on a cell membrane,” Lee says. “An equally important contribution here is that this is the first, I believe, to look at complex formation at the membrane interface using neutron science.” 

The study still leaves many mysteries about the complex—notably how it attaches to and interacts with the membrane inside a part of the cell called the lysosome, where α-syn is broken down. The team plans to follow up with additional investigations, especially once an improved reflectometer that could offer greater resolution arrives at the NCNR in about 2017. Meanwhile, the results of the current study—and the tools that provided them—will give the team other options to explore. 

“It lays the groundwork for exploring the complex relationship between proteins that are involved in the causes of disease,” Lee says. “This will also offer a way for us to investigate other substances that would affect the interaction.”

Source: NIST

10 cool Psychology Jobs

 

Psychological disorders

 

Extreme oxygen loss in oceans accompanied past global climate change

 

Thu, 01/29/2015 - 11:58am

Kat Kerlin, UC Davis News Service

Seafloor sediment cores reveal abrupt, extensive loss of oxygen in the ocean when ice sheets melted roughly 10,000 to 17,000 years ago, according to a study from the Univ. of California, Davis. The findings provide insight into similar changes observed in the ocean today.

In the study, published in PLOS ONE, researchers analyzed marine sediment cores from different world regions to document the extent to which low oxygen zones in the ocean have expanded in the past, due to climate change.

From the subarctic Pacific to the Chilean margins, they found evidence of extreme oxygen loss stretching from the upper ocean to about 3,000 m deep. In some oceanic regions, such loss took place over a time period of 100 years or less.

“This is a global story that knits these regions together and shows that when you warm the planet rapidly, whole ocean basins can lose oxygen very abruptly and very extensively,” said lead author Sarah Moffitt, a postdoctoral scholar with the UC Davis Bodega Marine Laboratory and formerly a graduate student with the Graduate Group in Ecology. 

Marine organisms, from salmon and sardines to crab and oysters, depend on oxygen to exist. Adapting to an ocean environment with rapidly dropping oxygen levels would require a major reorganization of living things and their habitats, much as polar species on land today are retreating to higher, cooler latitudes.

The researchers chose the deglaciation period because it was a time of rising global temperatures, atmospheric carbon dioxide and sea levels—many of the global climate change signs the Earth is experiencing now.

“Our modern ocean is moving into a state that has no precedent in human history,” Moffitt said. “The potential for our oceans to look very, very different in 100 to 150 years is real. How do you use the best available science to care for these critical resources in the future? Resource managers and conservationists can use science like this to guide a thoughtful, precautionary approach to environmental management.”

Source: Univ. of California, Davis

Missing link in metal physics explains Earth’s magnetic field

 

Thu, 01/29/2015 - 9:58am

Carnegie Institute

Conception of Earth’s core overlaid by the electronic structure of iron; the width (fuzziness) of the lines results from the electron-electron scattering. Image: Ronald Cohen

Earth’s magnetic field is crucial for our existence, as it shields life on our planet’s surface from deadly cosmic rays. It is generated by turbulent motions of liquid iron in Earth’s core. Iron is a metal, which means it can easily conduct the flow of electrons that makes up an electric current. New findings from a team including Carnegie Institute’s Ronald Cohen and Peng Zhang show that a piece missing from the traditional theory of why metals become less conductive when they are heated was needed to complete the puzzle that explains this field-generating process. Their work is published in Nature.

The center of the Earth is very hot, and the flow of heat from the planet’s center towards the surface is thought to drive most of the dynamics of the Earth, ranging from volcanoes to plate tectonics. It has long been thought that heat flow drives what is called thermal convection—the hottest liquid becomes less dense and rises, as the cooler, more-dense liquid sinks—in Earth’s liquid iron core and generates Earth’s magnetic field. But recent calculations called this theory into question, launching new quests for its explanation.

In their work, Cohen and Zhang, along with Kristjan Haule of Rutgers Univ., used a new computational physics method and found that the original thermal convection theory was right all along. Their conclusion hinges on discovering that the classic theory of metals developed in the 1930s was incomplete.

The electrons in metals, such as the iron in Earth’s core, carry both current and heat. A material’s resistivity impedes this flow. The classic theory of metals explains that resistivity increases with temperature, due to atoms vibrating more as the heat rises. The theory says that at high temperatures resistivity arises when electrons in the current bounce off of vibrating atoms. These bounced electrons scatter and resist the current flow. As temperature increases, the atoms vibrate more, increasing the scattering of bounced electrons. The electrons not only carry charge, but also carry energy, so that thermal conductivity is proportional to the electrical conductivity.
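
That proportionality is the textbook Wiedemann-Franz law (standard metal physics, not something introduced by this study): the ratio of thermal to electrical conductivity in a metal is set by temperature alone,

κ / σ = L T, with the Lorenz number L = (π² / 3)(k_B / e)² ≈ 2.44 × 10⁻⁸ W Ω K⁻²

which is why a calculation that lowers the electrical resistivity of liquid iron automatically raises its calculated thermal conductivity, the step at the heart of the challenge described in the next paragraph.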

The work that had purportedly thrown the decades-old prevailing theory on the generation of Earth’s magnetic field out the window claimed that thermal convection could not drive magnetic-field generation. The calculations in those studies said that the resistivity of the molten metal in Earth’s core, which is generated by this electron scattering process, would be too low, and thus the thermal conductivity too high, to allow thermal convection to generate the magnetic field.

Cohen, Zhang and Haule’s new work shows that the cause of about half of the resistivity generated was long neglected: it arises from electrons scattering off of each other, rather than off of atomic vibrations.

“We uncovered an effect that had been hiding in plain sight for 80 years,” Cohen said. “And now the original dynamo theory works after all.”

Source: Carnegie Institute

NASA, Boeing and SpaceX outline future of commercial manned spaceflight

 

 


NASA's Stephanie Schierholz introduces the panel of NASA and commercial representatives (Photo: NASA TV)


For several years, NASA and its private enterprise partners have been working on the space agency's Commercial Crew Program (CCP) to provide an astronaut ferry service from US soil to the International Space Station (ISS). Now a panel from NASA, Boeing, and SpaceX has outlined the latest timetable leading up to the first commercial flights.

The two companies were selected last year by NASA to develop privately owned and operated US spacecraft to ferry crews to the ISS. When certified, the Boeing CST-100 and SpaceX Dragon V2 (aka Crew Dragon) will be able to travel to and from the ISS carrying up to seven passengers or a mixture of passengers and cargo. Capable of remaining on station for 210 days, they will also double as lifeboats, allowing ISS crews to expand to seven people. According to NASA, the extra crew will allow the time available for scientific experiments to grow from 40 to 80 hours per week.

When fully deployed, Boeing and SpaceX will provide NASA with two independent systems for sending astronauts back and forth from the ISS. However, unlike during the Space Race, when companies developed spacecraft in conjunction with NASA and NASA then operated them, the two companies retain ownership of the craft and are responsible for the development, launching, and operation of the vehicles, boosters, and recovery systems.

Artist's impression of the CST-100 approaching the ISS (Image: Boeing)

On Monday, NASA and its partners outlined the next phases of the CCP. Boeing will continue development of its manned capsule with a pad abort test scheduled for February 2017, an unmanned flight test in April 2017, and then a flight with a Boeing test pilot and a NASA astronaut in July 2017. Meanwhile, SpaceX has announced a pad abort test for next month, with an in-flight abort test later this year, an unmanned flight test in late 2016, and a manned flight test in early 2017.

During this process, NASA says that it is providing oversight to make certain that the two systems meet stringent safety standards, as well as providing the use of launch facilities at the Kennedy Space Center and the Cape Canaveral Air Force Station.

Source: NASA

 

Sprayable Sleep looks to spray away insomnia

 

 

Just a couple of sprays before bed allows the melatonin formula to soak into the skin


A little more than a year ago, Gizmag featured an Indiegogo campaign that was marketing Sprayable Energy, which delivered a caffeine hit through the skin. Now the same company, Sprayable, has launched another campaign for a spray claimed to do the opposite – it puts you to sleep. Gizmag got a sample of Sprayable Sleep to put these claims to the test.

Sprayable Sleep utilizes melatonin, a chemical that humans produce naturally to control our night-day cycle. According to the company, current oral melatonin pills, a common way of taking the chemical as a sleep aid, often contain 10 to 100 times more of it than is necessary. Sprayable uses a fraction of the sleep-inducing juice, which the company claims will enter the body "gently and smoothly" over time as it gradually permeates the skin. While overdosing on melatonin is incredibly unlikely, even in oral use, the company still has a disclaimer saying, "do not exceed 8 sprays in 24 hours."

"It definitely does run the risk of being a little too effective for some people," says Ben Yu, one of the co-founders of Sprayable. "Since it absorbs steadily through the skin, some people have reported that despite sleeping really well, they end up needing more sleep than they planned for, and consequently when they force themselves awake before their bodies are ready to wake up with an alarm clock, they do still feel a little groggy.”

The difference, as the company sees it, is that Sprayable Sleep emulates the body’s natural melatonin production, bypassing the digestive system and making the application more natural than taking the chemical in pill form.

Application is recommended on each side of the neck or each wrist an hour before bed

The initial application was a bit off-putting for me, as I'm not generally accustomed to sprays or colognes, but after the initial wince of uncertainty, the application process was smooth. I was pleasantly surprised that there was no stickiness to the formula, and the company claims it is odorless as well. While this was mostly true, there was a light, new-car-esque smell, though this may have been from manufacturing rather than the formula itself.

Application is recommended either on each side of the neck or on each wrist an hour before bed. While this writer only experienced mildly heavy eyes after administration, my wife was out within twenty minutes. There was little odor and the mist was easy to apply. This method of application certainly feels more natural than pills, but as can be expected, results will vary.

One bottle costs US$15, and contains enough of the insomnia killer to last one month. Sprayable Sleep has already raised over US$125,000 in its Indiegogo campaign, with 31 days still to go. If it keeps up this pace, Sprayable Sleep will leave its predecessor in its wake.

 

Source: Sprayable

 

Thursday, January 29, 2015

Stunning forests

 

Researchers design tailored tissue adhesives

 

 

Thu, 01/29/2015 - 8:17am

Anne Trafton, MIT News Office

 

MIT researchers have created a tissue adhesive (pictured here in red) that could be used to repair surgical incisions following colon surgery. Image: Jose-Luis Olivares/MIT

After undergoing surgery to remove diseased sections of the colon, up to 30% of patients experience leakage from their sutures, which can cause life-threatening complications.

Many efforts are under way to create new tissue glues that can help seal surgical incisions and prevent such complications; now, a new study from Massachusetts Institute of Technology (MIT) reveals that the effectiveness of such glues hinges on the state of the tissue in which they are being used.

The researchers found that a sealant they had previously developed worked much differently in cancerous colon tissue than in colon tissue inflamed with colitis. The finding suggests that for this sealant or any other kind of biomaterial designed to work inside the human body, scientists must take into account the environment in which the material will be used, instead of using a “one-size-fits-all” approach, according to the researchers.

“This paper shows why that mentality is risky,” says Natalie Artzi, a research scientist at MIT’s Institute for Medical Science and Engineering (IMES) and senior author of a paper describing the findings in Science Translational Medicine. “We present a new paradigm by which to design and examine materials. Detailed study of tissue and biomaterial interactions can open a new chapter in precision medicine, where biomaterials are chosen and rationally designed to match specific tissue types and disease states.”

After characterizing the adhesive material’s performance in different diseased tissues, the researchers created a model that allows them to predict how it will work in different environments, opening the door to a more personalized approach to treating individual patients.

Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology and a member of IMES, is also a senior author of the paper. The paper’s lead authors are graduate student Nuria Oliva and former graduate student Maria Carcole.

Exploring material properties
Artzi and Edelman originally developed this tissue glue several years ago by combining two polymers—dextran (a polysaccharide) and a highly branched chain called dendrimer. In a 2009 paper, the researchers demonstrated that such adhesives work better when tailored to specific organs. In their new paper, they explored what happens when an adhesive is used in the same organ but under different disease conditions.

They show that the adhesive actually performed better in cancerous colon tissue than in healthy tissue. However, it performed worse in tissue inflamed with colitis than in healthy tissue.

Further studies of the molecular interactions between the adhesive and tissue explained those differences in behavior. The tissue glue works through a system where molecules in the adhesive serve as “keys” that interact with “locks”—chemical structures called amines found in abundance in structural tissue known as collagen.

When enough of these locks and keys bind each other, the adhesive forms a tight seal. This system is disrupted in colitic tissue because the inflammation breaks down collagen. The more severe the inflammation, the less adhesion occurs. However, cancerous tissue tends to have excess collagen, so the adhesive ends up working better than in healthy tissue.

“Now we show that adhesive-material performance is not organ-dependent, but rather, disease type and state-dependent,” says Artzi, who is also an assistant professor at Harvard Medical School.

Predicting adhesion
Using this data, the researchers created a model to help them alter the composition of the material depending on the circumstances. By changing the materials’ molecular weight, the number of keys attached to each polymer, and the ratio of the two polymers, the researchers can tune it to perform best in different types and states of tissue.

An inherent property of the adhesive is that any unused keys are absorbed back into the polymer, preventing them from causing any undesired side effects. This would allow the researchers to create two or three different versions that could cover a wide range of tissues.

“We can take a biopsy from a patient for a quick readout of disease state that would serve as an input for our model, and the output is the precise material composition that should be used to attain adequate adhesion,” Artzi says. “This exercise can be done in a clinical setting.”

Doctors have begun using this kind of personalized approach when choosing drugs that match individual patients’ genetic profiles, but it has not yet spread to the selection of biomaterials such as tissue glue. The MIT team now hopes to move the sealant into clinical trials and has founded a company to help that process along.

“It’s something that we want to do as rapidly as possible,” Edelman says. “We’re excited. It’s not often that you have a technology that is this close to clinical introduction.”

Source: Massachusetts Institute of Technology

Colorized historical photos

 

Realistically colorized historical photos


Albert Einstein in Long Island, 1939 (Photo credit: Paul Edwards)

Wednesday, January 28, 2015

The 50 best fonts of 2014

 

 



Top five astronomical targets for your new telescope

 

 


Gizmag's top five astronomical targets for small telescopes (Photo: Shutterstock)


If you received a telescope for Christmas, or bought one for your kids, your adventures in amateur astronomy are just beginning. Astronomy is the art and science of actually looking at the heavens and even a small telescope will let you find a host of celestial wonders. So where do you begin? Here are our suggestions for five of the most rewarding and spectacular objects with which to start your adventure in amateur astronomy ... plus some important tips on using a telescope.

What is a "small" telescope? Roughly speaking, small "beginner" telescopes have an objective (main optic) diameter between about 2.4 and 4.5 inches (60 and 115 mm). These are the scopes for which the sights of our list are intended. If you have a larger scope, your views will be that much better.

Before getting to our list of objects, here are a few tips on how to use your new telescope.

  • The single most important tip is to go easy on magnification. Some small "department store" scopes come with eyepiece combinations that will give 500X magnification or more. Using such magnification on a small scope will result in a dim, blurry, and very narrow view. This is too much magnification for the scope, for your scope's mount (vibrations), and for the atmosphere through which you are viewing (turbulence). A rule of thumb is to use magnification between about four and twenty-five times the diameter of your scope's objective lens in inches (between about 0.16 and 1 times the diameter in mm); there is a quick worked example after this list. On rare occasions you can use twice this magnification IF your mount is steady enough.
  • If you have a GOTO mount, don't expect too much of it. These mounts automatically point your scope at a selected object, but while computer pointing is a useful tool, GOTO accuracy on inexpensive scopes is not always the best. Pointing accuracy depends on the setup (usually aiming the scope at three bright stars) and the quality of the gears, which are usually plastic in inexpensive GOTO mounts. As a result, if using the GOTO capabilities of your scope, always use an eyepiece that gives your scope its lowest magnification. This gives you a large field of view, and greatly increases the chance of finding your target.
  • When observing faint objects, give your eyes a chance to adjust to the dark. After 20-30 minutes in the dark, your eyes will be thousands of times more sensitive than when you first go outside. Also, when observing very faint objects, you will see more if you look a bit to the side (averted vision). In this way the light of the object hits the more sensitive rods of the retina.
  • Another factor to keep in mind is light pollution. In crowded cities it is often difficult to see even the brightest stars owing to streetlights and car headlights. There are two main ways to combat this. The best is to take your telescope to a location that has darker skies, typically to rural areas outside of town, although suburban parks may be dark enough to observe many objects. The other approach is to at least set up your telescope in a dark area, say, in the shadow of trees or a building. The lack of direct light will allow your eyes to become dark adapted, and more able to examine the objects you want to see. If neither of these is possible, remember that the Moon and planets can be seen in almost any situation.
  • If you have trouble finding any of the objects listed below, examine the area of the sky using a pair of binoculars. All of them show up clearly in binoculars, and this gives you a quick way to get your bearings.
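
Applied to the magnification rule of thumb in the first tip, here is a quick worked example (a minimal sketch; the three apertures are simply typical beginner sizes):

def useful_magnification(aperture_mm):
    """Rule-of-thumb useful range from the tip above: roughly 0.16x to 1x
    the objective diameter in millimetres (about 4x to 25x per inch)."""
    return 0.16 * aperture_mm, 1.0 * aperture_mm

for aperture in (60, 90, 115):  # common beginner apertures in mm
    low, high = useful_magnification(aperture)
    print(f"{aperture} mm scope: about {low:.0f}x to {high:.0f}x")
# 60 mm: ~10x to 60x;  90 mm: ~14x to 90x;  115 mm: ~18x to 115x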

Now let's get down to observing! First, a safety note – NEVER point your telescope at the Sun! While watching the Sun can be done safely with a telescope, you need dedicated equipment to do so.

1. The Moon

The best object for examination with a small scope, bar none, is Earth's Moon. The Moon will fill your field of view at about 60 power, and even the smallest telescope will reveal the craters, rills, shadows, ejecta plumes, and other details on the Moon's surface.


Lunar map showing the major features of the Moon's surface (Photo: NASA)

An excellent beginner's guide for learning the Moon's surface features is the Lunar 100, a list of features in order of increasing difficulty. When you work through this list with a high-quality Lunar map, you will never again look at the Moon in the same way. My favorite Lunar feature is the dual crater Messier/Messier A, #25 on the list, the result of a glancing impact by a pair of small asteroids.

2. Jupiter

Right after darkness falls, the brightest object high in the eastern sky is Jupiter, the largest planet in the solar system.


Jupiter and its four largest moons roughly as they will appear in a small telescope (Photo: Don Stewart)

Even though Jupiter is currently about 630 million km (390 million miles) away from Earth, 40X magnification will make it appear the size of the Moon in the night sky. You should easily be able to see the bands of clouds which circle Jupiter, and Jupiter's four largest moons are easy targets that change position from night to night. The Great Red Spot has been a bit dim in recent years, so the smaller scopes will probably not show it. As a bonus, Jupiter is currently surrounded by the Hyades star cluster, a loose assortment of dozens of stars you will be able to see best using your lowest magnification.
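
The arithmetic behind that 40X figure is easy to check (taking Jupiter's diameter as roughly 143,000 km and the distance quoted above):

Jupiter's apparent size ≈ 143,000 km / 630,000,000 km ≈ 2.3 × 10⁻⁴ radians ≈ 47 arc-seconds
the full Moon spans about 0.5 degrees ≈ 1,800 arc-seconds
1,800 / 47 ≈ 40

so at around 40X Jupiter fills roughly the same angle in the eyepiece as the naked-eye Moon does in the sky.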


The Pleiades photographed using a 90 mm (3.5 in) telescope (Photo: Rochus Hess)

Also, about 10 degrees north and slightly west of Jupiter lies the Pleiades star cluster, visible as a tiny "Big Dipper" without your telescope. (Your fist at the end of your outstretched arm is about 10 degrees wide.) Viewed through a telescope, dozens of stars appear in a smaller area than that covered by the Hyades. These are two of the closest star clusters to Earth, at about 150 and 300 light years, respectively.

3. Orion Nebula

Next is the sword of Orion. The Orion Nebula (also known as M42) hangs like a sword below the belt of Orion, which is high in the southern sky after dark.


Orion's belt and sword. The bright fuzzy spot on the sword is the Orion Nebula, a diffuse nebula that appears about twice the size of the full moon

Easily visible to the unaided eye, the Orion Nebula is a stellar nursery where stars are being born at a rapid pace, the nearest such region to Earth at a distance of about 1340 light years. The light of the Orion Nebula comes from an assortment of very hot stars within it. These stars excite some of the gas in the nebula to emit their characteristic spectral lines, while the dust in the nebula reflects their light. Even in a small scope, M42 is a delightful sight.

There is a quadruple star called the Trapezium within M42 that can be seen at a magnification of 40 or 50X. These four hot young stars were born within the nebula and are unusually close together, with a spread about the same apparent size as Jupiter. It has been suggested that an intermediate mass black hole may reside in the general vicinity of the Trapezium.

4. Andromeda and Triangulum galaxies

All of the objects above can be observed from either the northern or southern hemispheres. Now we're going to split our attention to look at a pair of galaxies for each hemisphere.

Northern Hemisphere: A little over 10 degrees southwest of the "W" shaped Cassiopeia lies an easily visible white patch in the sky. This is M31, the Andromeda galaxy.


The Andromeda galaxy, M31, also showing the satellite galaxies M32 and M110 (Photo: Adam Evans)

M31 is very nearly the same size as is our galaxy, and is about 2.5 million light years distant, making it the closest large galaxy to ours. In fact it will collide with our galaxy in a mere four billion years or so. Use your lowest magnification and averted vision to trace out the fringes of the galaxy – in a larger scope and a dark sky it can be traced out to nearly three degrees across. On a good night you may see M31's satellite galaxies M32 and M110 to either side of the flat disk.


M33, the Triangulum galaxy, clearly showing the face-on spiral structure (Photo: Hewholooks)

The other Northern Hemisphere galaxy is found about 10 degrees southeast of M31, near the tiny arrow-like asterism of Triangulum. This is M33, the third largest member of the Local Group of galaxies that includes M31 and our galaxy. At a distance of about three million light years, M33 can be glimpsed in a very dark sky without optical aid, but from most observing sites binoculars allow it to be easily located. M33 is about double the size of the full moon, and again calls for low magnification and averted vision to make the most of the object. It is a spiral galaxy viewed nearly from atop the spiral, leading to interesting patterns of dark lanes in larger telescopes.

5. Large and Small Magellanic Clouds

Southern Hemisphere: At this time of year, southern viewers are fortunate to have the Milky Way's companion galaxies, the Large and Small Magellanic Clouds (LMC and SMC, respectively), in prime position for an early night's observation.


The Large and Small Magellanic Clouds. Note the enormous NGC104 globular cluster to the left of the SMC (Photo: ESO/S. Brunier)

The LMC is visible as a faint "cloud" in the night sky of the southern hemisphere straddling the border between the constellations of Dorado and Mensa. In total it is nearly as bright as is Alpha Centauri, but is spread over a region of the sky the size of your fist, so appears far fainter. Once again this is a subject for your smallest magnification, but even so the entire LMC will not fit in your field of view. Fortunately there is a great deal of smaller structure to be seen, including the Tarantula Nebula.

The SMC is about 20 degrees to the west of the LMC. It is dimmer than the LMC, but also is smaller in extent. As a result, the surface brightness (total brightness/area) is about the same for the SMC and LMC. The SMC has several open clusters and areas of nebulosity, the brightest of which might be glimpsed using a small telescope. Examining the SMC brings an additional reward in the form of the second brightest globular cluster, NGC104. NGC104 contains roughly a million stars within a 120 light year sphere. In a small telescope it appears about half the size of the Moon, and with larger magnifications (perhaps 20-25X) some of the stars will be resolved around the edges of the cluster.

Wherever on Earth you live, your adventures in amateur astronomy are just beginning. Even your small telescope will let you find a host of celestial wonders.

Source: Astronomical League

 

Scientists give graphene one more quality – magnetism

 

 


A diagram of the magnetized graphene (Image: Shi Lab, UC Riverside)

Graphene is extremely strong for its weight, it's electrically and thermally conductive, and it's chemically stable ... but it isn't magnetic. Now, however, a team from the University of California, Riverside has succeeded in making it so. The resulting magnetized graphene could have a wide range of applications, including use in "spintronic" computer chips.

While other groups have previously magnetized graphene, they've done so by doping it with foreign substances, and the presence of these impurities has negatively affected its electronic properties. In this case, though, the graphene was able to remain pure.

Led by professor of physics and astronomy Jing Shi, the UC Riverside team laid a sheet of regular graphene down on an atomically smooth layer of magnetic yttrium iron garnet. That material then simply magnetized the graphene as it lay against it. Yttrium iron garnet was used due to the fact that certain other magnetic materials could disrupt the graphene’s electrical transport properties.

When the sheet of graphene was removed and subsequently exposed to a magnetic field, it was shown to indeed possess magnetic qualities of its own.

"This is the first time that graphene has been made magnetic this way," said Shi. "The magnetic graphene acquires new electronic properties so that new quantum phenomena can arise. These properties can lead to new electronic devices that are more robust and multi-functional."

Those devices could include improved spintronic chips, which use the spin of electrons – a property that can be magnetically manipulated – to store data.

A paper on the research was recently published in the journal Physical Review Letters.

Source: University of California, Riverside

 

Infant failure to thrive linked to lysosome dysfunction

 

 

Stained sections of neonatal intestines from mice lacking two genes that express lysosomal proteins (A and E) appeared abnormal compared to those from wild-type mice and mice missing just one of the two genes (B-D and F-H).

Neonatal intestinal disorders that prevent infants from getting the nutrients they need may be caused by defects in the lysosomal system that occur before weaning, according to a new Northwestern Medicine study.

Lysosomes are cellular recycling centers responsible for breaking down all kinds of biological material. The study, published in PLOS Genetics, links lysosomal dysfunction with intestinal disorders for the first time, pointing to a previously unknown target for research and future therapies to help infants unable to absorb milk nutrients and gain weight, a diagnosis called failure to thrive.

"This finding highlights the critical role the lysosomal system plays in neonatal digestion," said principal author of the study Jaime García-Añoveros, associate professor in Anesthesiology, Neurology and Physiology at Northwestern University Feinberg School of Medicine. "We suspect that many neonatal intestinal pathologies -- leading causes of infant mortality worldwide -- are not appropriately understood and thus treated. The study suggests looking for lysosomal defects in these pathologies."

The investigators found that mouse models lacking certain lysosomal proteins developed an intestinal disorder that caused diarrhea and delayed growth between birth and weaning. For all mammals, including humans, intestinal digestion during this nursing period is very different from adult digestion.

"Normally food proteins break down in the stomach, and the resulting amino acids are absorbed in the intestine," García-Añoveros said. "But the stomach is not acidic yet in a nursing baby."

Instead, proteins reach the intestine undigested in infants. There, cells lining the small intestine called enterocytes degrade the proteins and absorb the amino acids using special lysosomes with digestive enzymes.

In the study, mice that lacked two lysosomal proteins normally expressed by enterocytes experienced failure to thrive symptoms throughout the nursing period. The finding suggests that infants with lysosome disorders may be affected with intestinal disorders.

Should further research identify this sort of lysosomal dysfunction in humans, infant formulas with proteins already broken down into amino acids could be a simple therapy, said García-Añoveros.

"Right now I don't think physicians are thinking of the lysosomal system as particularly important in the intestines," he said. "But we have shown that they are very prominent in digestion during this period of an infant's life.


Story Source:

The above story is based on materials provided by Northwestern University. The original article was written by Nora Dunne. Note: Materials may be edited for content and length.


Journal Reference:

  1. Natalie N. Remis, Teerawat Wiwatpanit, Andrew J. Castiglioni, Emma N. Flores, Jorge A. Cantú, Jaime García-Añoveros. Mucolipin Co-deficiency Causes Accelerated Endolysosomal Vacuolation of Enterocytes and Failure-to-Thrive from Birth to Weaning. PLoS Genetics, 2014; 10 (12): e1004833 DOI: 10.1371/journal.pgen.1004833