Wednesday, June 4, 2014

Tips To Boost Energy And Stop Feeling Tired

 

Every day millions of people complain about being tired, and it is really no wonder given how much we all need to get done in a day. There are so many things to do and so little time to do them in, and we all wish that there were 25 hours in a day. Most people want to find a way to get more energy and stop feeling so tired, but are not sure what they can do about it.

There are actually many simple things that you can do to boost your energy and stamina, helping you not only get going in the morning but also feel more energized throughout the day.

 

Change Your Diet

One of the first and most important things that you can do if you want to increase your energy is to change your diet a little. You may think that you are eating healthily, but when you take a step back and learn more about nutrition you may be pretty surprised at what you find. White bread, for instance, is bad for you and will leave you feeling bloated and sluggish; go for the wholemeal version instead, as it is full of fiber. You also want to avoid fatty and sugary foods, since the sugar rush will make you feel more energetic to begin with but then your energy level will drop dramatically.

Perhaps more important than anything is that you make time to have breakfast every morning, as the saying that breakfast is the most important meal of the day is definitely true. By eating breakfast you will basically be giving your body a jumpstart and helping it to feel energized throughout the day. Even if you do not feel hungry, eat something for breakfast, even if that just means an egg or a piece of fruit.

 

Eat More Often

Rather than stuffing three large meals down every day, you should try to eat smaller meals more frequently. Snack healthily throughout the day so that you keep your metabolism running as fast as possible and keep your body supplied with sustenance. Eating every two to three hours is the best idea, and this will keep your blood sugar and energy levels stable all day long. Supersized meals demand more energy to digest, and in turn this may leave you feeling lethargic.

 

Fill Up On Fiber

Few people get as much fiber in their diet as they should, and fiber is actually one of the most important things to include in any diet. Fiber helps in many ways, namely by flushing toxins and wastes out of your body. These toxins and wastes, if left in your body, would cause you to feel incredibly sluggish and even possibly cause health problems and conditions to develop, such as colon cancer.

 

Use Caffeine to Your Advantage

Although too much caffeine can definitely be unhealthy, if you drink caffeinated beverages in moderation then you can really benefit from it. A cup of coffee in the morning can help make you feel more energized and is always delicious with a good hearty breakfast. Just make sure that you watch your caffeine intake during the later part of the day and into the night, because this can actually result in making you feel more tired. Especially if you are up drinking coffee most of the night you will most likely not be able to sleep and in turn will feel incredibly tired the next day.

 

Don’t Stress Out

Of course these days it seems as though there are a million things to worry about, especially if you have children, but it is important that you do not get stressed out and worry too much. Stressing out causes your brain to overload and as a result you will end up feeling tired and exhausted. Not only can stress make you tired but it can compromise your health and relationships, and so you should definitely try to stay as calm and positive as possible, even during the tougher times.

There are a few things that you can do day to day to keep yourself from getting overly stressed and worked up, including learning how to stop talking yourself into being upset and depressed, and how to enjoy the small things in life more. You should also learn how to stop expecting your partner to make you happy and start taking responsibility for your own happiness, as this is one of the biggest downfalls in relationships. Even chores around the house can be turned into something fun rather than something to dread, so just keep your mood light and your spirits up.

Feeling tired is never good especially if you have serious fatigue throughout the day and it is having a negative effect on your life. Just remember that there are many things you can do to get yourself feeling more energized, namely maintaining a proper and healthy diet and exercise regime.

Are your pets disturbing your sleep? You’re not alone


Rest assured, there may be a good reason you're dog-tired.

While countless pet owners peacefully sleep with a warm pet nearby, a new Mayo Clinic study, presented this week at the 29th Annual Meeting of the Associated Professional Sleep Societies, finds an increase in the number of people experiencing sleep disturbances because of their pets.

A previous Mayo Clinic study published in 2002 reported that of patients who visited the clinic's sleep center and owned pets, only one percent reported any inconvenience from their pets at night. The new study shows a larger number of patients -- 10 percent in 2013 -- reported annoyance that their pets sometimes disturbed their sleep.

"The study determined that while the majority of patients did not view their pets intolerably disturbing their sleep, a higher percentage of patients experienced irritation -- this may be related to the larger number of households with multiple pets," says Lois Krahn, M.D., Mayo Clinic psychiatrist and author of the study. "When people have these kinds of sleep problems, sleep specialists should ask about companion animals and help patients think about ways to optimize their sleep."

Between August and December 2013, 110 consecutive patients at the Mayo Clinic Center for Sleep Medicine in Arizona provided information about pets at night as part of a comprehensive sleep questionnaire. Questions covered the type and number of pets, where the animals slept, any notable behaviors and whether the patient was disturbed. The survey showed that 46 percent of the patients had pets and 42 percent of those had more than one pet. The most popular pets were dogs, cats and birds.

The disturbances by pets that patients reported included snoring, whimpering, wandering, the need to "go outside" and medical needs.

"One patient owned a parrot who consistently squawked at 6 a.m.," Dr. Krahn says. "He must have thought he was a rooster."

Courts face challenges when linking genetics to criminal behavior

 

June 4, 2014

Cell Press

Studies suggest that some people may be at increased risk of criminal behavior due to their genes. Such research holds potential for helping judges and juries with some of the difficult decisions they must make, but it also brings a substantial risk of misinterpretation and misuse within the legal system. Experts suggest that addressing these issues will be of critical importance for upholding principles of justice and fairness.


Studies suggest that some people may be at increased risk of criminal behavior due to their genes. Such research holds potential for helping judges and juries with some of the difficult decisions they must make, but it also brings a substantial risk of misinterpretation and misuse within the legal system. Addressing these issues will be of critical importance for upholding principles of justice and fairness, according to an essay being published in the June 4 issue of the Cell Press journal Neuron.

"Genetic evidence, properly used, could assist with judgments regarding appropriate criminal punishments, causes of injury or disability, and other questions before the courts," says author Dr. Paul Appelbaum, who directs Columbia University's Center for Research on Ethical, Legal & Social Implications of Psychiatric, Neurologic & Behavioral Genetics.

Genetic evidence is being offered in criminal trials to suggest that defendants have diminished understanding of or control over their behavior, most often in arguments for mitigating sentences -- especially for defendants facing the death penalty. Genetic evidence may also play an increasing role in civil trials regarding issues such as causation of injury. For example, employers contesting work-related mental disability claims might want claimants to undergo genetic testing to prove that an underlying disorder was not responsible for their impairment.

"The complexity of genetic information and our incomplete understanding of the roots of behavior raise the possibility that genetic evidence will be misused or misunderstood. Hence, care is needed in evaluating the extent to which genetic evidence may have something to add to legal proceedings in a given case," says Dr. Appelbaum.

Moving forward, a number of questions must be addressed. For example, to what extent do specific genetic variants make it more difficult to understand or control one's behavior and what are the biological mechanisms involved? Also, how can we respond to individuals with genetic predispositions to criminal behavior to diminish the risk of recidivism?

Dr. Appelbaum notes that it will be an ongoing challenge for both legal and genetic experts to monitor the use of genetic data in the courts to ensure that the conclusions that are drawn validly reflect the science. Without such efforts, judges and juries may overestimate or underestimate the conclusions that can be drawn from genetic evidence, thus unfairly distorting the legal process.

Moving toward quality patient-centered care at cancer hospital

 

June 4, 2014

George Washington University

Patient navigation and survivorship programs, and the challenges many of these programs face, were the focus of a new survey. Results indicate that nearly half of respondents had both a navigation and survivorship program at their institution. Full-time navigators had an average patient load of 100-400 patients, in various phases of the cancer continuum of screening, diagnosis, treatment, and posttreatment.


In order to meet new cancer program accreditation standards, institutions have placed new focus on patient navigation, psychosocial distress screening, and survivorship care plans. Recently published research by the George Washington University (GW) Cancer Institute found these new programs are experiencing "growing pains." The results of a nationwide survey conducted by the GW Cancer Institute, reviewed in the Journal of Oncology Navigation and Survivorship, show that health care professionals could most benefit from greater evaluation of their program's impact.

"This national study from GW Cancer Institute's Center for the Advancement of Cancer Survivorship, Navigation and Policy provides insights into current approaches and barriers for patient-centered care in practice," said Mandi Pratt-Chapman, M.A., director of the GW Cancer Institute. "These findings will help us focus our collective efforts to address identified gaps such as navigation role delineation and how to measure value of these programs."

The survey, completed by 100 healthcare providers -- many of them patient navigators and nurses -- found that nearly half of respondents had both a navigation and survivorship program at their institution. Full-time navigators had an average patient load of 100-400 patients, in various phases of the cancer continuum of screening, diagnosis, treatment, and posttreatment. Respondents identified a wide range of measurements and tracking tools, with a heavy reliance on low-tech options. The greatest challenge area identified was a lack of funding. While demonstrating value was seen as a way to sustain navigation programs, nearly 60 percent of respondents were not tracking value at all. Additionally, role clarity was cited as an important challenge.

The research found that patient navigators assisting patients during screening may be able to navigate a significantly greater number of patients annually than navigators assisting patients after diagnosis; and navigators supporting patients in treatment may need a smaller caseload than navigators supporting patients in other points of the continuum. The results also indicated a need to identify financially sustainable models for patient navigation and clinical survivorship programs, as well as a consensus on core measures.

"Our results provide critical insight into implementation practices as it relates to navigation and survivorship programs from institutions around the country," said Anne Willis, M.A., director of the division of cancer survivorship at the GW Cancer Institute. "These findings will help health care professionals who are working to create and improve programs, where little guidance is available."

Discovery of compound may open new road to diabetes treatment


Dr. Markus Seeliger and MD/PhD student Zach Foda determined the 3-D structure of an inhibitor compound and how it is bound to the Insulin Degrading Enzyme (IDE). The inhibitor is depicted in orange and white spheres. The IDE is depicted as the blue and green surface, and the gray ribbons.

The discovery of an inhibitor of the Insulin Degrading Enzyme (IDE), a protein implicated in susceptibility to diabetes because it destroys insulin in the body, may lead to new treatment approaches for diabetes. In collaboration with the discoverers of the inhibitor, David Liu and Alan Saghatelian, Stony Brook Medicine scientist Markus Seeliger, PhD, and colleagues nationally demonstrated the efficacy of the compound in a research paper in the early online edition of Nature.

More than 20 million people live with type II diabetes in the United States, a disease in which the body cannot make sufficient amounts of the hormone insulin. IDE removes insulin from the blood. To date, diabetes treatment strategies are based on patients either injecting insulin, taking medicine to make their body more sensitive to insulin, or taking other drugs to stimulate insulin secretion. In the paper, "Anti-diabetic action of insulin-degrading enzyme inhibitors mediated by multiple hormones," the authors reveal results that point to a potential new approach -- regulating the degradation of insulin in the blood.

"A strategy to protect the remaining amounts of insulin produced by diabetics in response to blood sugar levels is an attractive treatment alternative, particularly in the early stages of type II diabetes," said Dr. Seeliger, Assistant Professor in the Department of Pharmacological Sciences at Stony Brook University School of Medicine. "The research results gives proof of concept that targeting this protein is extremely promising. The inhibitor we discovered successfully relieved the symptoms of type II diabetes in obese mice and not only elevated their insulin levels but promoted healthy insulin signaling within the blood."

Using a robotic structural biology facility at Stony Brook and resources at Brookhaven National Laboratory, Dr. Seeliger and Zach Foda, an MD/PhD candidate and student in Dr. Seeliger's laboratory, determined the three-dimensional structure of the inhibitor compound and how it is bound to the IDE. The research team used this 3-D structure to help further evaluate the compound's properties and characteristics.

Dr. Seeliger said that the research findings are important initial steps to developing a drug that diabetics can use to regulate the depletion of insulin in their blood. A realistic initial treatment approach, he added, may be to use the compound to help the body retain insulin levels and thus delay the use of insulin for type II diabetes.

Quantum criticality observed in new class of materials

 

Quantum criticality occurs in only a few composite crystalline materials and happens at absolute zero -- the lowest possible temperature in the universe. The paucity of experimental observations of quantum criticality has left theorists wanting in their quest for evidence of possible causes.

The new finding of "quantum critical points" is in a class of iron superconductors known as "oxypnictides" (pronounced OXEE-nick-tydes). The research by physicists at Rice University, Princeton University, China's Zhejiang University and Hangzhou Normal University, France's École Polytechnique and Sweden's Linköping University appears in the journal Nature Materials.

"One of the challenges of studying quantum criticality is trying to completely classify the quantum critical points that have been observed so far," said Rice physicist Qimiao Si, a co-author of the new study. "There are indications that there's more than one type, but do we stop at two? As theorists, we are not yet at the point where we can enumerate all of the possibilities.

"Another challenge is that there are still very few materials where we can say, with certainty, that a quantum critical point exists," Si said. "There's a very strong need, on these general grounds, for extending the materials basis of quantum criticality."

In 2001, Si and colleagues advanced a theory to explain how quantum critical points could give seemingly conventional metals unconventional properties. High-temperature superconductors are one such material, and another is "heavy fermion" metals, so-called because the electrons inside them can appear to be thousands of times more massive than normal.

Heavy fermion metals are prototype systems for quantum criticality. When these metals reach their quantum critical point, the electrons within them act in unison, and even one electron moving through the system has widespread effects throughout it. This is very different from the electron interactions in a common wiring material like copper. It is these collective effects that have increasingly convinced physicists of a possible link between superconductivity and quantum criticality.

"The quantum critical point is the point at which a material undergoes a transition from one phase to another at absolute zero," said Si, Rice's Harry C. and Olga K. Wiess Professor of Physics and Astronomy. "Unlike the classical phase transition of ice melting into water, which occurs when heat is provided to the system, the quantum phase transition results from quantum-mechanical forces. The effects are so powerful that they can be detected throughout the space inside the system and over a long time."

To observe quantum critical points in the lab, physicists cool their samples -- be they heavy fermion metals or high-temperature superconductors -- to extremely cold temperatures. Though it is impossible to chill anything to absolute zero, physicists can drive the phase transition temperatures to attainable low temperatures by applying pressure, magnetic fields or by "doping" the samples to slightly alter the spacing between atoms.

Si and colleagues have been at the forefront of studying quantum critical points for more than a decade. In 2003, they developed the first thermodynamic method for systematically measuring and classifying quantum critical points. In 2004 and again in 2007, they used tests on heavy fermion metals to show how the quantum critical phenomena violated the standard theory of metals -- Landau's Fermi-liquid theory.

In 2008, following the groundbreaking discovery of iron-based pnictide superconductors in Japan and China, Si and colleagues advanced the first theory that explained how superconductivity develops out of a bad-metal normal state in terms of magnetic quantum fluctuations. Also that year, Si co-founded the International Collaborative Center on Quantum Matter (ICC-QM), a joint effort by Rice, Zhejiang University, the London Centre for Nanotechnology and the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany.

In 2009, Si and co-authors offered a theoretical framework to predict how the pnictides would behave at or near a quantum critical point. Several of these predictions were borne out in a series of studies the following year.

In the current Nature Materials study, Si and ICC-QM colleagues Zhu'an Xu, an experimentalist at Zhejiang, and Jianhui Dai, a theorist at Hangzhou, worked with Antoine Georges of École Polytechnique, Nai Phuan Ong of Princeton and others to look for evidence of quantum critical points in an iron-based heavy fermion metallic compound made of cerium, nickel, arsenic and oxygen. The material is related to the family of iron-based pnictide superconductors.

"Heavy fermions are the canonical system for the in-depth study of quantum criticality," Si said. "We have considered heavy fermion physics in the iron pnictides before, but in those compounds the electrons of the iron elements are ordered in such a way that it makes it more difficult to precisely study quantum criticality.

"The compound that we studied here is the first one among the pnictide family that turned out to feature clear-cut heavy fermion physics. That was a pleasant surprise for me," Si said.

Through measurements of electrical transport properties in the presence of a magnetic field, the study provided evidence that the quantum critical point belongs to an unconventional type proposed in the 2001 work of Si and colleagues.

"Our work in this new heavy fermion pnictide suggests that the type of quantum critical point that has been theoretically advanced is robust," Si said. "This bodes well with the notion that quantum criticality can eventually be classified."

He said it is important to note that other homologues -- similar iron-based materials -- may now be studied to look for quantum critical points.

"Our results imply that the enormous materials basis for the oxypnictides, which has been so crucial to the search for high-temperature superconductivity, will also play a vital role in the effort to establish the universality classes of quantum criticality," Si said.

Additional co-authors include Yongkang Lou, Yuke Li, Chunmu Feng and Guanghan Cao, all of Zhejiang University; Leonid Pourovskii of both École Polytechnique and Linköping University; and S.E. Rowley of Princeton University.

The research was supported by the National Basic Research Program of China, the National Science Foundation of China, the NSF of Zhejiang Province, the Fundamental Research Funds for the Central Universities of China, the National Science Foundation, the Nano Electronics Research Corporation, the Robert A. Welch Foundation, the China Scholarship Council and the Swedish National Infrastructure for Computing.

Emotion drives customers to use smartphones with bigger screens


Bigger smartphone screen size may be better for more than just practical reasons, according to researchers. Participants in a study on smartphones indicated that emotional reasons might influence their decision to buy smartphones with bigger screens even more than practical ones, said S. Shyam Sundar, Distinguished Professor of Communications and co-director of the Media Effects Research Laboratory, Penn State.

"There are basically two different reasons that 'bigger is better' for screen size: utilitarian reasons and affective, or emotional, reasons," said Sundar. "There are so many things on smartphones that we can use, but an even more powerful factor of the larger screen is its hedonic aspect -- how attractive and pleasing it is to users."

People may find bigger screens more emotionally satisfying because they are using smartphones for entertainment, as well as for communication purposes.

"The screen size has increased the bandwidth of user interactions on smartphones, making it more than a talking-texting device," said Sundar. "With high definition screens, people now can watch television and movies, as well as multi-task, something that wasn't possible in earlier smartphone versions."

The desire for larger screens that drives purchases of other entertainment devices may be part of the impulse to buy smartphones with larger screens, said Ki Joon Kim, adjunct professor in the department of interaction science, Sungkyunkwan University, Korea, who worked with Sundar on the study.

"Large-screen TVs and monitors are known to have positive effects on user experience," said Kim. "Our study found that the same applies to the mobile context as well."

Smartphone engineers cannot increase the size of the screen continually or the mobile device will become inconvenient to carry.

"We have not reached the point where the screen is too big yet, and I believe there may be some room for expansion of the screen size," said Sundar. "Finding the idea size is something that I'm sure industry engineers and designers are working to find." Kim said that this appeal to both practical and emotional needs of users may indicate that smartphones can handle the merging of various forms of media.

"Smartphones serve both utilitarian and hedonic purposes," Kim said. "They are convergent media." The researchers randomly assigned smartphones with two different screen sizes to 130 university students. One of the phones had a 3.7-inch screen and the other had a 5.3-inch screen size. They then asked the participants to visit a mobile website and find the departure time for a shuttle bus.

Participants were then asked to fill out a questionnaire on their experience using the smartphone. The questions included ones on how they used the device, such as whether or not the smartphone helped them locate the bus schedule, and on how they felt about using the phone, such as whether or not they felt excited about using it.

NASA should maintain long-term focus on Mars as 'horizon goal' for human spaceflight


Arguing for a continuation of the nation's human space exploration program, a new congressionally mandated report from the National Research Council concludes that the expense of human spaceflight and the dangers to the astronauts involved can be justified only by the goal of putting humans on other worlds. The report recommends that the nation pursue a disciplined "pathway" approach that encompasses executing a specific sequence of intermediate accomplishments and destinations leading to the "horizon goal" of putting humans on Mars. The success of this approach would require a steadfast commitment to a consensus goal, international collaboration, and a budget that increases by more than the rate of inflation.

"The United States has been a leader in human space exploration for more than five decades, and our efforts in low Earth orbit with our partners are approaching maturity with the completion of the International Space Station.We as a nation must decide now how to embark on human space exploration beyond low Earth orbit in a sustainable fashion," said Jonathan Lunine, director of the Center for Radiophysics and Space Research at Cornell University and co-chair of the committee that wrote the report.

"The technical analysis completed for this study shows that for the foreseeable future, the only feasible destinations for human exploration are the moon, asteroids, Mars, and the moons of Mars," Lunine added."Among this small set of plausible goals, the most distant and difficult is putting human boots on the surface of Mars, thus that is the horizon goal for human space exploration.All long-range space programs by our potential partners converge on this goal."

Public opinion of the space program since its inception has been generally positive, but the report found that most of the public does not pay much attention to or feel well-informed about the topic, and spending on space exploration is not a high priority for most of the public. Support for increased funding is highest among those who are interested in and well-informed about human spaceflight. The committee conducted its own survey of stakeholders (defined as those who may reasonably be expected to have an interest in NASA programs and be able to exert some influence over its direction) and scientists in non-space-related fields. In both the public and stakeholder opinion data, the committee found there was no majority agreement on a single rationale for human spaceflight.

Historically, rationales used to justify a human spaceflight program have included economic benefits, national security, national stature and international relations, inspiration for science and engineering education, contributions to science and knowledge, a shared human destiny and urge to explore, and the eventual survival of the human species -- the report defines the latter two as "aspirational."

The committee concluded that although no single rationale, either practical or aspirational, seems to justify the value of pursuing human spaceflight, the aspirational rationales, when supplemented by practical benefits associated with the pragmatic rationales, argue for the continuation of a U.S. human spaceflight program, provided that the program adopts a stable and sustainable pathways approach. The aspirational rationales are also most in line with enduring questions the report identifies as motivating human spaceflight: How far from Earth can humans go? and What can humans discover and achieve when we get there?

"Human space exploration remains vital to the national interest for inspirational and aspirational reasons that appeal to a broad range of U.S. citizens," said Purdue University president, former Governor of Indiana, and committee co-chair Mitchell E. Daniels, Jr."But given the expense of any human spaceflight program and the significant risk to the crews involved, in our view the only pathways that fit these criteria are those that ultimately place humans on other worlds."

The report evaluates three different pathways to illustrate the trade-offs among affordability, schedule, developmental risk, and the frequency of missions for different sequences of intermediate destinations. All the pathways culminate in landing on the surface of Mars -- which is the most challenging yet technically feasible destination -- and have anywhere between three and six steps that include some combination of missions to asteroids, the moon, and Martian moons.

The report proposes a set of principles and decision rules by which national leadership might decide on a given pathway, measure its progress, navigate moving off one pathway to another, or cease the endeavor altogether. While the committee was not asked to recommend a particular pathway to pursue, it found that a return to extended surface operations on the moon would make significant contributions to a strategy ultimately aimed at landing people on Mars, and that it would also likely provide a broad array of opportunities for international and commercial cooperation.

Completing any of the described pathways requires the development of a number of mission elements and technological capabilities. The report identifies 10 high-priority capabilities that should be addressed by current research and development activities, with a particular emphasis on Mars entry, descent, and landing, radiation safety, and in-space propulsion and power. These three capabilities will be the most difficult to develop in terms of costs, schedule, technical challenges, and gaps between current and needed abilities, the report says.

Progress in human space exploration beyond low Earth orbit will be measured in decades and hundreds of billions of dollars. Although the report does not make any particular budget recommendations, it notes that there are no viable pathways to Mars under the current flat or even an inflation-adjusted budget. The analysis does show that increasing NASA's human spaceflight budget by 5 percent per year, for example, would enable pathways with viable mission frequency and greatly reduce technical, cost, and schedule risks.

"Our committee concluded that any human exploration program will only succeed if it is appropriately funded and receives a sustained commitment on the part of those who govern our nation.That commitment cannot change direction election after election.Our elected leaders are the critical enablers of the nation's investment in human spaceflight, and only they can assure that the leadership, personnel, governance, and resources are in place in our human exploration program," Daniels said.

Happiness

 

Happiness according to D Carnegie

Security: Computer scientists develop tool to make the Internet of Things safer

 

June 2, 2014

University of California - San Diego

Computer scientists have developed a tool that allows hardware designers and system builders to test security -- a first for the field. There is a big push to create the so-called Internet of Things, where all devices are connected and communicate with one another. As a result, embedded systems -- small computer systems built around microcontrollers -- are becoming more common. But they remain vulnerable to security breaches. Some examples of devices that may be hackable: medical devices, cars, cell phones and smart grid technology.


From left: Ph.D. student Jason Oberg, computer science professor Ryan Kastner and postdoctoral researcher Jonathan Valamehr.

Computer scientists at the University of California, San Diego, have developed a tool that allows hardware designers and system builders to test security -- a first for the field. One of the tool's potential uses is described in the May-June issue of IEEE Micro magazine.

"The stakes in hardware security are high," said Ryan Kastner, a professor of computer science at the Jacobs School of Engineering at UC San Diego.

There is a big push to create the so-called Internet of Things, where all devices are connected and communicate with one another. As a result, embedded systems -- small computer systems built around microcontrollers -- are becoming more common. But they remain vulnerable to security breaches. Some examples of devices that may be hackable: medical devices, cars, cell phones and smart grid technology.

"Engineers traditionally design devices to be fast and use as little power as possible," said Jonathan Valamehr, a postdoctoral researcher in the Department of Computer Science and Engineering at UC San Diego. "Oftentimes, they don't design them with security in mind."

The tool, based on the team's research on Gate-level Information Flow Tracking, or GLIFT, tags critical pieces in a hardware security system and tracks them. The tool leverages this technology to detect security-specific properties within a hardware system. For example, the tool can make sure that a cryptographic key does not leak outside a chip's cryptographic core.

There are two main threats in hardware security. The first is confidentiality. In some types of hardware, one can determine a device's cryptographic key based on the amount of time it takes to encrypt information. The tool can detect these so-called timing channels that can compromise a device's security. The second threat is integrity, where a critical subsystem within a device can be affected by non-critical ones. For example, a car's brakes can be affected by its CD player. The tool can detect these integrity violations as well.
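To make the idea of tracking tagged bits concrete, below is a minimal Python sketch of gate-level taint propagation in the spirit of GLIFT. It is an illustrative toy, not the UC San Diego/Tortuga Logic tool: the gate set, data structure, and example signals are assumptions chosen for this example; the taint rules simply encode whether a tagged input can actually influence a gate's output.

# Minimal sketch of GLIFT-style taint propagation (illustrative only).
# Each signal carries a logic value and a taint bit marking data that is
# influenced by a secret (e.g. a cryptographic key) or untrusted source.

from dataclasses import dataclass

@dataclass
class Sig:
    val: int    # logic value, 0 or 1
    taint: int  # 1 if influenced by tagged (secret/untrusted) data

def AND(a, b):
    val = a.val & b.val
    # The output is tainted only when a tainted input can change it:
    # a tainted 'a' matters when b == 1, and vice versa.
    taint = (a.taint & b.val) | (b.taint & a.val) | (a.taint & b.taint)
    return Sig(val, taint)

def OR(a, b):
    val = a.val | b.val
    # A tainted input can change an OR output only when the other input is 0.
    taint = (a.taint & (1 - b.val)) | (b.taint & (1 - a.val)) | (a.taint & b.taint)
    return Sig(val, taint)

def NOT(a):
    return Sig(1 - a.val, a.taint)  # inversion never removes the influence

if __name__ == "__main__":
    key_bit = Sig(val=1, taint=1)   # one bit of a secret key (tagged)
    public  = Sig(val=0, taint=0)   # public, untagged control signal
    print(AND(key_bit, public))     # Sig(val=0, taint=0): the 0 masks the key
    print(OR(key_bit, public))      # Sig(val=1, taint=1): the key bit is observable

A checker built on this kind of propagation can flag any path along which a tagged key bit reaches an output it should never influence, which is the flavor of security property the UC San Diego tool verifies at the hardware level.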

Valamehr, Kastner, and Ph.D. candidate Jason Oberg started a company named Tortuga Logic to commercialize this technology. The company is currently working with two of the top semiconductor companies in the world. Their next step is to focus on medical devices, computers in cars, and military applications.

The team was recently awarded a $150,000 grant from the National Science Foundation to grow their business and further their research.


Story Source:

The above story is based on materials provided by University of California - San Diego. Note: Materials may be edited for content and length.

How an Amateur Built the World's Biggest Dome

 

https://www.youtube.com/watch?v=_IOPlGPQPuM

Modern castle


Proteins 'ring like bells': Quantum mechanics and biochemical reactions

 

June 3, 2014

University of Glasgow

As far back as 1948, Erwin Schrödinger -- the inventor of modern quantum mechanics -- published the book 'What is life?' In it, he suggested that quantum mechanics and coherent ringing might be at the basis of all biochemical reactions. At the time, this idea never found wide acceptance because it was generally assumed that vibrations in protein molecules would be too rapidly damped. Now, scientists have shown that he may have been on the right track after all.


Dr David Turton, the ultrafast laser expert who carried out the laser experiments.

As far back as 1948, Erwin Schrödinger -- the inventor of modern quantum mechanics -- published the book "What is life?"

In it, he suggested that quantum mechanics and coherent ringing might be at the basis of all biochemical reactions. At the time, this idea never found wide acceptance because it was generally assumed that vibrations in protein molecules would be too rapidly damped.

Now, scientists at the University of Glasgow have demonstrated he was on the right track after all.

Using modern laser spectroscopy, the scientists have been able to measure the vibrational spectrum of the enzyme lysozyme, a protein that fights off bacteria. They discovered that this enzyme rings like a bell with a frequency of a few terahertz or a million-million hertz. Most remarkably, the ringing involves the entire protein, meaning the ringing motion could be responsible for the transfer of energy across proteins.

The experiments show that the ringing motion lasts for only a picosecond or one millionth of a millionth of a second. Biochemical reactions take place on a picosecond timescale and the scientists believe that evolution has optimised enzymes to ring for just the right amount of time. Any shorter, and biochemical reactions would become inefficient as energy is drained from the system too quickly. Any longer and the enzyme would simply oscillate forever: react, unreact, react, unreact, etc. The picosecond ringing time is just perfect for the most efficient reaction.
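As a quick sanity check on the numbers quoted above (a rough sketch; the article gives only "a few terahertz", so 1 THz is assumed here), a one-terahertz vibration has a period of one picosecond, meaning the reported ringing time allows only on the order of a few oscillations before the motion is damped:

# Relating the quoted terahertz frequency to the picosecond ringing time.
freq_hz = 1.0e12                # ~1 THz ("a few terahertz" in the article)
period_s = 1.0 / freq_hz        # period of a single oscillation
ring_time_s = 1.0e-12           # reported damping (ringing) time, ~1 ps
print(period_s)                 # 1e-12 s, i.e. one picosecond per cycle
print(ring_time_s / period_s)   # ~1: only a few cycles survive the damping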

These tiny motions enable proteins to morph quickly so they can readily bind with other molecules, a process that is necessary for life to perform critical biological functions like absorbing oxygen and repairing cells.

The findings have been published in Nature Communications.

Klaas Wynne, Chair in Chemical Physics at the University of Glasgow said: "This research shows us that proteins have mechanical properties that are highly unexpected and geared towards maximising efficiency. Future work will show whether these mechanical properties can be used to understand the function of complex living systems."


Story Source:

The above story is based on materials provided by University of Glasgow. Note: Materials may be edited for content and length.


Journal Reference:

  1. David A. Turton, Hans Martin Senn, Thomas Harwood, Adrian J. Lapthorn, Elizabeth M. Ellis, Klaas Wynne. Terahertz underdamped vibrational motion governs protein-ligand binding in solution. Nature Communications, 2014; 5 DOI: 10.1038/ncomms4999

Vanishing da Vinci: Nondestructive way to determine state of degradation of ancient works of art

 


This is Leonardo da Vinci's self-portrait as acquired during diagnostic studies carried out at the Central Institute for the Restoration of Archival and Library Heritage in Rome, Italy.

One of Leonardo da Vinci's masterpieces, drawn in red chalk on paper during the early 1500s and widely believed to be a self-portrait, is in extremely poor condition. Centuries of exposure to humid storage conditions or a closed environment have led to widespread and localized yellowing and browning of the paper, which is reducing the contrast between the colors of chalk and paper and substantially diminishing the visibility of the drawing.

A group of researchers from Italy and Poland with expertise in paper degradation mechanisms was tasked with determining whether the degradation process has now slowed with appropriate conservation conditions -- or if the aging process is continuing at an unacceptable rate.

To do this, as they describe in Applied Physics Letters, from AIP Publishing, the team developed an approach to nondestructively identify and quantify the concentration of light-absorbing molecules known as chromophores in ancient paper, the culprit behind the "yellowing" of the cellulose within ancient documents and works of art.

"During the centuries, the combined actions of light, heat, moisture, metallic and acidic impurities, and pollutant gases modify the white color of ancient paper's main component: cellulose," explained Joanna Łojewska, a professor in the Department of Chemistry at Jagiellonian University in Krakow, Poland. "This phenomenon is known as 'yellowing,' which causes severe damage and negatively affects the aesthetic enjoyment of ancient art works on paper."

Chromophores are the key to understanding the visual degradation process because they are among the chemical products developed by oxidation during aging and are, ultimately, behind the "yellowing" within cellulose. Yellowing occurs when "chromophores within cellulose absorb the violet and blue range of visible light and largely scatter the yellow and red portions -- resulting in the characteristic yellow-brown hue," said Olivia Pulci, a professor in the Physics Department at the University of Rome Tor Vergata.

To determine the degradation rate of Leonardo's self-portrait, the team created a nondestructive approach that centers on identifying and quantifying the concentration of chromophores within paper. It involves using a reflectance spectroscopy setup to obtain optical reflectance spectra of paper samples in the near-infrared, visible, and near-ultraviolet wavelength ranges.

Once reflectance data is gathered, the optical absorption spectrum of cellulose fibers that form the sheet of paper can be calculated using special spectroscopic data analysis.

Then, computational simulations based on quantum mechanics -- in particular, Time-Dependent Density Functional Theory, which plays a key role in studying optical properties in theoretical condensed matter physics -- are tapped to calculate the optical absorption spectrum of chromophores in cellulose.
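As a rough illustration of the reflectance-to-absorption step, the sketch below applies the classic Kubelka-Munk transform commonly used for diffuse reflectance of sheets such as paper. The team's actual spectroscopic analysis may differ, and the wavelengths and reflectance values here are invented purely for demonstration.

import numpy as np

# Illustrative reflectance-to-absorption step: the Kubelka-Munk function
# K/S = (1 - R)^2 / (2R) relates measured diffuse reflectance R (0..1) to the
# ratio of absorption (K) to scattering (S) in a paper-like sheet.
# The sample values below are made up for this example.

wavelength_nm = np.array([360.0, 400.0, 450.0, 500.0, 550.0, 600.0, 700.0])
reflectance   = np.array([0.25,  0.35,  0.48,  0.60,  0.70,  0.78,  0.85])

def kubelka_munk(R):
    """Return K/S, which rises where the sheet absorbs more strongly."""
    return (1.0 - R) ** 2 / (2.0 * R)

for wl, ks in zip(wavelength_nm, kubelka_munk(reflectance)):
    print(f"{wl:5.0f} nm   K/S = {ks:.3f}")

# Yellowed cellulose absorbs most in the violet/blue range, so chromophore
# formation shows up as larger K/S values at the short-wavelength end.

Comparing absorption estimates of this kind against spectra computed for candidate chromophores (the simulation step described above) is what allows the concentration of those chromophores to be quantified.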

"Using our approach, we were able to evaluate the state of degradation of Leonardo da Vinci's self-portrait and other paper specimens from ancient books dating from the 15th century," said Adriano Mosca Conte, a researcher at the University of Rome Tor Vergata. "By comparing the results of ancient papers with those of artificially aged samples, we gained significant insights into the environmental conditions in which Leonardo da Vinci's self-portrait was stored during its lifetime."

Their work revealed that the types of chromophores present in Leonardo's self-portrait are "similar to those found in ancient and modern paper samples aged in extremely humid conditions or within a closed environment, which agrees with its documented history," said Mauro Missori, a researcher at the Institute for Complex Systems, CNR, in Rome, Italy.

One of the most significant implications of their work is that the state of degradation of ancient paper can be measured and quantified by evaluation of the concentrations of chromophores in cellulose fibers. "The periodic repetition of our approach is fundamental to establishing the formation rate of chromophores within the self-portrait. Now our approach can serve as a precious tool to preserve and save not only this invaluable work of art, but others as well," Conte noted.

 


Story Source:

The above story is based on materials provided by American Institute of Physics (AIP). Note: Materials may be edited for content and length.


Journal Reference:

  1. A. Mosca Conte, O. Pulci, M.C. Misiti, J. Łojewska, L. Teodonio, C. Violante, and M. Missori. Visual degradation in Leonardo da Vinci's iconic self-portrait: a nanoscale study. Applied Physics Letters, 2014 DOI: 10.1063/1.4879838

Spiders know the meaning of web music

 


Silk during high-rate ballistic impact.

Spider silk transmits vibrations across a wide range of frequencies so that, when plucked like a guitar string, its sound carries information about prey, mates, and even the structural integrity of a web.

The discovery was made by researchers from the Universities of Oxford, Strathclyde, and Sheffield who fired bullets and lasers at spider silk to study how it vibrates. They found that, uniquely, when compared to other materials, spider silk can be tuned to a wide range of harmonics. The findings, to be reported in the journal Advanced Materials, not only reveal more about spiders but could also inspire a wide range of new technologies, such as tiny light-weight sensors.

"Most spiders have poor eyesight and rely almost exclusively on the vibration of the silk in their web for sensory information," said Beth Mortimer of the Oxford Silk Group at Oxford University, who led the research. "The sound of silk can tell them what type of meal is entangled in their net and about the intentions and quality of a prospective mate. By plucking the silk like a guitar string and listening to the 'echoes' the spider can also assess the condition of its web."

This quality is used by the spider in its web by 'tuning' the silk: controlling and adjusting both the inherent properties of the silk, and the tensions and interconnectivities of the silk threads that make up the web. To study the sonic properties of the spider's gossamer threads the researchers used ultra-high-speed cameras to film the threads as they responded to the impact of bullets. In addition, lasers were used to make detailed measurements of even the smallest vibration.

"The fact that spiders can receive these nanometre vibrations with organs on each of their legs, called slit sensillae, really exemplifies the impact of our research about silk properties found in our study," said Dr Shira Gordon of the University of Strathclyde, an author involved in this research.

"These findings further demonstrate the outstanding properties of many spider silks that are able to combine exceptional toughness with the ability to transfer delicate information,' said Professor Fritz Vollrath of the Oxford Silk Group at Oxford University, an author of the paper. 'These are traits that would be very useful in light-weight engineering and might lead to novel, built-in 'intelligent' sensors and actuators."

Dr Chris Holland of the University of Sheffield, an author of the paper, said: "Spider silks are well known for their impressive mechanical properties, but the vibrational properties have been relatively overlooked and now we find that they are also an awesome communication tool. Yet again spiders continue to impress us in more ways than we can imagine."

Beth Mortimer said: "It may even be that spiders set out to make a web that 'sounds right' as its sonic properties are intimately related to factors such as strength and flexibility."

Editor's Note: A video about the research is available on YouTube at http://www.youtube.com/watch?v=Kjh7bQSc8ag

Quest for the bionic arm: Advancements and challenges

 


In the past 13 years, nearly 2,000 veterans returned from Iraq and Afghanistan with injuries requiring amputations; 14 percent of those injured veterans required upper extremity amputations. To treat veterans with upper extremity amputations, scientists continue to pursue research and development of bionic arms and hands with full motor and sensory function. An article appearing in the June issue of the Journal of the American Academy of Orthopaedic Surgeons (JAAOS) reviews the recent advancements in upper extremity bionics and the challenges that remain in creating a prosthesis that meets or exceeds the abilities of a human arm and hand.

During the next 50 years, "I truly believe we will be able to make artificial arms that function better than many injured arms that doctors are saving today," said article author Douglas T. Hutchinson, MD, associate professor of orthopaedics at the University of Utah Medical School, and chief of hand surgery at Primary Children's Medical Center, the Veterans Affairs Medical Center, and Shriners Intermountain Hospital. Advancements in prostheses technology will not only benefit injured veterans but also, eventually, the civilian population with upper extremity injuries that require amputations.

One of the most commonly used upper extremity prostheses continues to be the myoelectric prosthesis. Created more than 50 years ago, this prosthesis allows residual muscles to act as natural batteries to create transcutaneous signals (transmitted through the skin) to control the movements of the prosthetic arm and hand. However, the muscles used most often are the biceps and triceps, which do not naturally translate to the opening and closing of a hand. In addition, myoelectric prosthetics do not look natural and are heavy, hot, uncomfortable, and not waterproof. Sometimes the socket interface used to attach the prosthesis may interfere with the function of a residual joint such as the elbow.

Because of these challenges -- as well as the inability to "feel" the prosthesis -- the wearer never achieves fine motor control, the simultaneous use of multiple joints, or full rotation and use of the hand. The prosthesis also requires a long period of learning and adjustment. As a result, only about two thirds of patients properly fitted for an upper extremity prosthesis use it daily, with many patients instead choosing to wear a body-operated "hook" device, invented during the Civil War and refined during World War I and World War II. Others choose not to use their prostheses because they prefer the ability to have physical sensation from their stump.

The 2014 federal budget for prostheses research alone is $2.5 billion. The U.S. Defense Advanced Research Projects Agency (DARPA) has already invested more than $150 million into its Revolutionizing Prosthetics Program, which is charged with creating an upper extremity prosthesis that will function as a normal human arm does, complete with full motor and sensory functions. According to Dr. Hutchinson, the program has created several advanced upper extremity prostheses, "providing function and ease of learning superior to those of conventional myoelectric prostheses."

However, these prosthetic devices have a long way to go for effective and broad use in patients. Many are heavy and uncomfortable with short-life batteries. Current infection rates with osseous-integrated devices at the prosthesis-skin interface also remain high at approximately 45 percent. Most challenging is the problem of efficiently and accurately sending brain signals through the muscles and peripheral nerves of the arm and hands, which may require the creation and use of a reliable wireless device or direct wiring through an osseous (bone tissue)-integrated implant.

Answers may be found in combining recent advancements in prosthetic devices with breakthroughs in maintaining nerve and muscle function in badly damaged limbs.

"Orthopaedic surgeons who do peripheral nerve surgery (hand surgeons) will be part of the team that puts these devices into patients, but perhaps more relevant than that will be the way we treat severe near amputations or complete amputations differently," said Dr. Hutchinson. "In an amputation surgery, we will need to preserve muscles and nerves even more than we already do to make this type of later reconstruction more successful."

"We currently spend a lot of time, energy, and money saving hands and arms that truly have a poor prognosis because the alternative, an amputation and an insensate myoelectric prosthesis attached by a socket, is even worse," Dr. Hutchinson added. "As we improve the prosthesis, the options for these severely injured upper extremities will increase."

In addition, the perfection of nerve utilization could potentially aid other conditions, such as cerebral palsy, chronic nerve pain, and brachial plexus injuries.


Story Source:

The above story is based on materials provided by American Academy of Orthopaedic Surgeons. Note: Materials may be edited for content and length.


Journal Reference:

  1. D. T. Hutchinson. The Quest for the Bionic Arm. Journal of the American Academy of Orthopaedic Surgeons, 2014; 22 (6): 346 DOI: 10.5435/JAAOS-22-06-346

Carbon-capture breakthrough: Recyclable material absorbs 82 percent of its weight in carbon dioxide

 


Particles of nitrogen-containing porous carbon are able to capture carbon dioxide from natural gas under pressure at a wellhead by polymerizing it, according to researchers at Rice University. When the pressure is released, the carbon dioxide returns to gaseous form.

Rice University scientists have created an Earth-friendly way to separate carbon dioxide from natural gas at wellheads.

A porous material invented by the Rice lab of chemist James Tour sequesters carbon dioxide, a greenhouse gas, at ambient temperature with pressure provided by the wellhead and lets it go once the pressure is released. The material shows promise to replace more costly and energy-intensive processes.

Results from the research appear today in the journal Nature Communications.

Natural gas is the cleanest fossil fuel. Development of cost-effective means to separate carbon dioxide during the production process will improve this advantage over other fossil fuels and enable the economic production of gas resources with higher carbon dioxide content that would be too costly to recover using current carbon capture technologies, Tour said. Traditionally, carbon dioxide has been removed from natural gas to meet pipelines' specifications.

The Tour lab, with assistance from the National Institute of Standards and Technology (NIST), produced the patented material that pulls only carbon dioxide molecules from flowing natural gas and polymerizes them while under pressure naturally provided by the well.

When the pressure is released, the carbon dioxide spontaneously depolymerizes and frees the sorbent material to collect more.

All of this works in ambient temperatures, unlike current high-temperature capture technologies that use up a significant portion of the energy being produced.

"If the oil and gas industry does not respond to concerns about carbon dioxide and other emissions, it could well face new regulations," Tour said, noting the White House issued its latest National Climate Assessment last month and, this week, set new rules to cut carbon pollution from the nation's power plants.

"Our technique allows one to specifically remove carbon dioxide at the source. It doesn't have to be transported to a collection station to do the separation," he said. "This will be especially effective offshore, where the footprint of traditional methods that involve scrubbing towers or membranes are too cumbersome.

"This will enable companies to pump carbon dioxide directly back downhole, where it's been for millions of years, or use it for enhanced oil recovery to further the release of oil and natural gas. Or they can package and sell it for other industrial applications," he said.

The Rice material, a nanoporous solid of carbon with nitrogen or sulfur, is inexpensive and simple to produce compared with the liquid amine-based scrubbers used now, Tour said. "Amines are corrosive and hard on equipment," he said. "They do capture carbon dioxide, but they need to be heated to about 140 degrees Celsius to release it for permanent storage. That's a terrible waste of energy."

Rice graduate student Chih-Chau Hwang, lead author of the paper, first tried to combine amines with porous carbon. "But I still needed to heat it to break the covalent bonds between the amine and carbon dioxide molecules," he said. Hwang also considered metal oxide frameworks that trap carbon dioxide molecules, but they had the unfortunate side effect of capturing the desired methane as well and they are far too expensive to make for this application.

The porous carbon powder he settled on has massive surface area and turns the neat trick of converting gaseous carbon dioxide into solid polymer chains that nestle in the pores.

"Nobody's ever seen a mechanism like this," Tour said. "You've got to have that nucleophile (the sulfur or nitrogen atoms) to start the polymerization reaction. This would never work on simple activated carbon; the key is that the polymer forms and provides continuous selectivity for carbon dioxide."

Methane, ethane and propane molecules that make up natural gas may try to stick to the carbon, but the growing polymer chains simply push them off, he said.

The researchers treated their carbon source with potassium hydroxide at 600 degrees Celsius to produce the powders with either sulfur or nitrogen atoms evenly distributed through the resulting porous material. The sulfur-infused powder performed best, absorbing 82 percent of its weight in carbon dioxide. The nitrogen-infused powder was nearly as good and improved with further processing.
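To put the 82 percent figure in rough perspective, here is a back-of-the-envelope calculation. It is an illustration only: the quantities are assumptions for the example, and real working capacity depends on wellhead pressure, temperature, and gas composition.

# Back-of-the-envelope: sorbent needed per tonne of CO2 at the reported
# uptake of 82% of the material's own weight. Purely illustrative.

uptake_fraction = 0.82        # kg of CO2 captured per kg of sorbent per cycle
co2_to_capture_kg = 1000.0    # one metric tonne of CO2

sorbent_needed_kg = co2_to_capture_kg / uptake_fraction
print(f"~{sorbent_needed_kg:.0f} kg of sorbent per tonne of CO2 per cycle")
# ~1220 kg. Because the CO2 depolymerizes and leaves when the pressure drops,
# the same batch of sorbent is reused over many capture/release cycles.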

Tour said the material showed no degradation over many cycles, "and my guess is we won't see any. After heating it to 600 degrees Celsius for the one-step synthesis from inexpensive industrial polymers, the final carbon material has a surface area of 2,500 square meters per gram, and it is enormously robust and extremely stable."
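To put those numbers in perspective, a few lines of Python convert the reported 82 percent weight-based capacity into moles and an approximate gas volume at room conditions. The molar mass and molar volume are standard values; everything else comes straight from the figures in the article, and the soccer-field comparison is only a rough visual aid.

    # Back-of-the-envelope capacity check for the sulfur-infused powder.
    uptake = 0.82            # g CO2 captured per g sorbent (reported)
    molar_mass_co2 = 44.01   # g/mol
    molar_volume = 24.5      # L/mol for an ideal gas at ~25 C and 1 atm

    moles_per_kg = 1000 * uptake / molar_mass_co2   # mol CO2 per kg sorbent
    liters_per_kg = moles_per_kg * molar_volume     # L of CO2 at room conditions

    surface_area = 2500      # m^2 per gram (reported)
    print(f"{moles_per_kg:.1f} mol (~{liters_per_kg:.0f} L) of CO2 per kg of sorbent")
    print(f"One gram of the powder has ~{surface_area} m^2 of internal surface,")
    print("roughly a third of a soccer field.")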

Apache Corp., a Houston-based oil and gas exploration and production company, funded the research at Rice and licensed the technology. Tour said he expects it will take time and further work on manufacturing and engineering before the technology can be commercialized.

The paper's co-authors are undergraduate Josiah Tour, research scientist Carter Kittrell and senior research scientist Lawrence Alemany, all of Rice, and Laura Espinal, an associate at NIST. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of mechanical engineering and nanoengineering and of computer science.

Hubble unveils new colorful view of the universe

 


Astronomers using NASA's Hubble Space Telescope have assembled a comprehensive picture of the evolving universe -- among the most colorful deep space images ever captured by the 24-year-old telescope.

Researchers say the image, from a new study called the Ultraviolet Coverage of the Hubble Ultra Deep Field, provides the missing link in star formation. The Hubble Ultra Deep Field 2014 image is a composite of separate exposures taken from 2003 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3.

Astronomers previously studied the Hubble Ultra Deep Field (HUDF) in visible and near-infrared light in a series of images captured from 2003 to 2009. The HUDF shows a small section of space in the southern-hemisphere constellation Fornax. Now, using ultraviolet light, astronomers have combined the full range of colors available to Hubble, stretching all the way from ultraviolet to near-infrared light. The resulting image -- made from 841 orbits of telescope viewing time -- contains approximately 10,000 galaxies, extending back in time to within a few hundred million years of the big bang.
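As a rough sense of scale, the 841 orbits quoted above correspond to weeks of telescope time. The short Python sketch below uses Hubble's roughly 95-minute orbital period; the actual on-target exposure per orbit is shorter because Earth blocks the view for part of each orbit, so treat the result as an upper bound.

    # Rough conversion of the quoted 841 orbits into telescope time.
    orbits = 841
    orbit_minutes = 95   # Hubble's orbital period is roughly 95 minutes

    total_hours = orbits * orbit_minutes / 60
    print(f"~{total_hours:.0f} hours, or about {total_hours / 24:.0f} days of orbit time")
    # Actual exposure time is lower, since the target is occulted part of each orbit.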

Prior to the Ultraviolet Coverage of the Hubble Ultra Deep Field study of the universe, astronomers were in a curious position. Missions such as NASA's Galaxy Evolution Explorer (GALEX) observatory, which operated from 2003 to 2013, provided significant knowledge of star formation in nearby galaxies. Using Hubble's near-infrared capability, researchers also studied star birth in the most distant galaxies, which appear to us in their most primitive stages because of the enormous time their light takes to reach us. But for the period in between, when most of the stars in the universe were born -- a span extending from about 5 billion to 10 billion light-years away -- they did not have enough data.

"The lack of information from ultraviolet light made studying galaxies in the HUDF like trying to understand the history of families without knowing about the grade-school children," said principal investigator Harry Teplitz of Caltech in Pasadena, California. "The addition of the ultraviolet fills in this missing range."

Ultraviolet light comes from the hottest, largest, and youngest stars. By observing at these wavelengths, researchers get a direct look at which galaxies are forming stars and where the stars are forming within those galaxies.
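The link between hot young stars and ultraviolet light follows from Wien's displacement law, which relates a star's surface temperature to the wavelength where its thermal emission peaks. The sketch below is a generic blackbody estimate with representative temperatures, not values from the study.

    # Wien's displacement law: peak wavelength of thermal emission vs. temperature.
    WIEN_B = 2.898e-3  # m*K

    def peak_wavelength_nm(temperature_k):
        """Wavelength (in nanometers) where a blackbody at this temperature peaks."""
        return WIEN_B / temperature_k * 1e9

    print(f"Sun-like star (5,800 K): peaks near {peak_wavelength_nm(5800):.0f} nm (visible)")
    print(f"Hot young O star (30,000 K): peaks near {peak_wavelength_nm(30000):.0f} nm (ultraviolet)")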

Studying the ultraviolet images of galaxies in this intermediate time period enables astronomers to understand how galaxies grew in size by forming small collections of very hot stars. Because Earth's atmosphere filters most ultraviolet light, this work can only be accomplished with a space-based telescope.

"Ultraviolet surveys like this one using the unique capability of Hubble are incredibly important in planning for NASA's James Webb Space Telescope," said team member Rogier Windhorst of Arizona State University in Tempe. "Hubble provides an invaluable ultraviolet-light dataset that researchers will need to combine with infrared data from Webb. This is the first really deep ultraviolet image to show the power of that combination."


Story Source:

The above story is based on materials provided by the Space Telescope Science Institute (STScI).

Progress on detecting glucose levels in saliva: New biochip sensor

 


A plasmonic interferometer can detect glucose molecules in water, but detecting glucose in a complex fluid such as saliva is harder. By controlling the distance between grooves and using dye chemistry on the glucose molecules, researchers can measure glucose levels despite the 1 percent of saliva that is not water.

Researchers from Brown University have developed a new biochip sensor that can selectively measure concentrations of glucose in a complex solution similar to human saliva. The advance is an important step toward a device that would enable people with diabetes to test their glucose levels without drawing blood.

The new chip makes use of a series of specific chemical reactions combined with plasmonic interferometry, a means of detecting the chemical signature of compounds using light. The device is sensitive enough to detect differences in glucose concentrations that amount to just a few thousand molecules in the sampled volume.

"We have demonstrated the sensitivity needed to measure glucose concentrations typical in saliva, which are typically 100 times lower than in blood," said Domenico Pacifici, assistant professor of engineering at Brown, who led the research. "Now we are able to do this with extremely high specificity, which means that we can differentiate glucose from the background components of saliva."

The new research is described in the cover article of the June issue of the journal Nanophotonics.

The biochip is made from a one-inch-square piece of quartz coated with a thin layer of silver. Etched in the silver are thousands of nanoscale interferometers -- tiny slits with a groove on each side. The grooves measure 200 nanometers wide, and the slit is 100 nanometers wide -- about 1,000 times thinner than a human hair. When light is shone on the chip, the grooves cause a wave of free electrons in the silver -- a surface plasmon polariton -- to propagate toward the slit. Those waves interfere with light that passes through the slit. Sensitive detectors then measure the patterns of interference generated by the grooves and slits.

When a liquid is deposited on the chip, the light and the surface plasmon waves propagate through that liquid before they interfere with each other. That alters the interference patterns picked up by the detectors, depending on the chemical makeup of the liquid. By adjusting the distance between the grooves and the center slit, the interferometers can be calibrated to detect the signature of specific compounds or molecules, with high sensitivity in extremely small sample volumes.
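A simple way to picture the readout is the standard two-beam interference formula: the detected intensity depends on the phase the plasmon wave accumulates while traveling from groove to slit, which in turn depends on the groove-slit distance and on the liquid sitting on the chip. The Python sketch below is a generic textbook model with assumed parameters (effective plasmon index, distances, relative intensities), not the group's actual transfer function.

    import math

    def detected_intensity(distance_nm, n_effective, wavelength_nm=633.0,
                           i_light=1.0, i_spp=0.3, phase_offset=0.0):
        """Generic two-beam interference between light through the slit and the
        surface plasmon launched at a groove. All parameters are illustrative."""
        phase = 2 * math.pi * n_effective * distance_nm / wavelength_nm + phase_offset
        return i_light + i_spp + 2 * math.sqrt(i_light * i_spp) * math.cos(phase)

    # A small change in the liquid's effective index shifts the interference output.
    for n in (1.333, 1.334):   # hypothetical effective indices for two solutions
        print(n, round(detected_intensity(distance_nm=5000, n_effective=n), 4))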

In a paper published in 2012, the Brown team showed that interferometers on a biochip could be used to detect glucose in water. However, selectively detecting glucose in a complex solution like human saliva was another matter.

"Saliva is about 99 percent water, but it's the 1 percent that's not water that presents problems," Pacifici said. "There are enzymes, salts, and other components that may affect the response of the sensor. With this paper we solved the problem of specificity of our sensing scheme."

They did that by using dye chemistry to create a trackable marker for glucose. The researchers added microfluidic channels to the chip to introduce two enzymes that react with glucose in a very specific way. The first enzyme, glucose oxidase, reacts with glucose to form a molecule of hydrogen peroxide. This molecule then reacts with the second enzyme, horseradish peroxidase, to generate a molecule called resorufin, which can absorb and emit red light, thus coloring the solution. The researchers could then tune the interferometers to look for the red resorufin molecules.

"The reaction happens in a one-to-one fashion: A molecule of glucose generates one molecule of resorufin," Pacifici said. "So we can count the number of resorufin molecules in the solution, and infer the number of glucose molecules that were originally present in solution."

The team tested its combination of dye chemistry and plasmonic interferometry by looking for glucose in artificial saliva, a mixture of water, salts and enzymes that resembles real human saliva. They found that they could detect resorufin in real time with great accuracy and specificity. They were able to detect changes in glucose concentration of 0.1 micromoles per liter -- 10 times the sensitivity that can be achieved by interferometers alone.
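It is worth seeing what a 0.1-micromolar detection limit means in absolute terms. The Python lines below combine that concentration with a hypothetical sampled volume of about 0.1 picoliter (the article only says the volumes are extremely small); under those assumptions the count lands in the "few thousand molecules" range mentioned earlier.

    # How many molecules does 0.1 micromole/liter correspond to in a tiny sample?
    AVOGADRO = 6.022e23        # molecules per mole

    concentration = 0.1e-6     # mol/L (reported detection limit)
    sample_volume = 0.1e-12    # liters (~0.1 picoliter, an assumed figure)

    molecules = concentration * sample_volume * AVOGADRO
    print(f"~{molecules:.0f} glucose molecules in the sampled volume")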

The next step in the work, Pacifici says, is to start testing the method in real human saliva. Ultimately, the researchers hope they can develop a small, self-contained device that could give diabetics a noninvasive way to monitor their glucose levels.

There are other potential applications as well.

"We are now calibrating this device for insulin," Pacifici said, "but in principle we could properly modify this 'plasmonic cuvette' sensor for detection of any molecule of interest."

It could be used to detect toxins in air or water or used in the lab to monitor chemical reactions as they occur at the sensor surface in real time, Pacifici said.