Sunday, May 25, 2014

Prescription Drug Misuse Among Seniors an "Emerging Epidemic"

 

Hundreds of thousands of seniors are misusing prescription drugs, driven in part by overprescribing and a medicate-first approach among doctors.

Shutterstock

On Wednesday, USA Today published a report on the growing epidemic of seniors in the U.S. who are misusing prescription medications. USA Today studied data from an array of federal agencies and private firms and found that this trend is growing, and not without a slew of serious consequences.
The growing misuse of prescription drugs - especially opioid pain relievers and benzodiazepines, anti-anxiety medications like Xanax and Valium - is fueled by the fact that seniors are prescribed more drugs than any other segment of the population, as well as by the medical community’s medicate-first approach to treating them for everything from joint pain to depression.

“There’s a growing group of seniors, they have pain, they have anxiety … and a lot of (doctors) have one thing in their tool box - a prescription pad,” said Mel Pohl, medical director at the Las Vegas Recovery Center, which treats seniors for pain and drug dependence. “The doctor wants to make their life better, so they start on the meds.” 
Hundreds of thousands of American seniors are misusing prescription drugs. In 2012, the number of seniors who had misused - defined as using without a prescription or not as prescribed - or been dependent on prescription pain relievers in the past year rose to an estimated 336,000, up from 132,000 just a decade earlier. It doesn’t help that older brains and bodies are more vulnerable to drug complications. From 2007 to 2011, annual emergency room visits by people 65 and older for prescription drug misuse rose more than 50 percent, to more than 94,000 a year. From 1999 to 2010, overdose deaths among those 55 and older, regardless of drug type, nearly tripled to 9.4 fatalities per 100,000 people.
Wilson Compton, a psychiatrist and deputy director of the National Institute on Drug Abuse, dubbed prescription drug misuse by seniors an “emerging epidemic” and emphasized that both doctors and patients should be better educated on the risks of these drugs. Multiple studies have demonstrated that opioids lose their effectiveness as patients build tolerance and have little value as long-term medication for managing chronic pain. The same goes for anxiety medications such as Valium.

“Over time, patients build up a tolerance or suffer more pain and they ask for more medication,” Pohl said. “And without anyone necessarily realizing, it begins a downward spiral with horrible consequences.”

Six Ways Your Family Is At Risk From Addiction

 

How it affects your loved ones.

shutterstock

ADDICTION IS A PROCESS

Addiction must be viewed as a progressive process - an illness, not a disease - that undergoes continuous development from a starting point to an ending point. According to Craig Nakken in his book, The Addictive Personality: Understanding the Addictive Process and Compulsive Behavior, “we must first understand what all addictions and addictive processes have in common: the out-of-control and aimless searching for wholeness, happiness, and peace through a relationship with an object or event. No matter what the addiction is, every addict engages in a relationship with an object or event in order to produce a desired mood change or state of intoxication. The crucial crux of the situation is that the addict will not recover unless he or she wants to recover regardless of any intervention!"

After spending many years on drugs, even young, otherwise healthy bodies fight back. The vibrations of an addict are of a very specific sort - they ricochet out of control, mostly out of reach. The energy called up by the drug quickly disperses, leaving a void, a nothingness. Nature abhors a vacuum, so negative forces rush in and take up residence. The only immediate relief is more narcotics. This is the vicious cycle of addiction.

DIFFERENT EFFECTS FOR DIFFERENT FAMILY STRUCTURES

In days past, when society spoke of “family,” it was typically referring to Mom, Dad and the kids, plus grandparents and an aunt or uncle. Family structures in America have become more complex - growing from the traditional nuclear family to single‐parent families, stepfamilies, foster families, and multigenerational families. Therefore, when a family member abuses substances, the effect on the family may differ according to family structure.

SMALL CHILDREN

A growing body of literature suggests that substance abuse has distinct effects on different family structures. For example, the parent of small children may attempt to compensate for deficiencies that his or her substance-abusing spouse has developed as a result of drug abuse. Frequently, children act as surrogate spouses for the parent who abuses substances, according to S. Brown and V. Lewis in The Alcoholic Family in Recovery: A Developmental Model. In a single-parent household, children are likely to behave in a manner that is not age-appropriate to compensate for the parental deficiency.

Empirical studies have shown that a parent’s alcohol problem can have cognitive, behavioral, psychosocial, and emotional consequences for children. Among the lifelong problems documented are impaired learning capacity; a propensity to develop a substance use disorder; adjustment problems including increased rates of divorce, violence, and the need for control in relationships; and other mental disorders such as depression, anxiety, and low self‐esteem.

PARTNERS

When an adult who abuses substances lives alone or with a partner, the consequences are likely to be economic and psychological. Money may be spent on drugs, and the partner who is not using substances often assumes the provider role. Psychological consequences may include denial or protection of the person with the substance abuse problem, chronic anger, stress, anxiety, hopelessness, inappropriate sexual behavior, neglected health, shame, stigma, and isolation.

PARENTS OF GROWN CHILDREN

Alternately, the aging parents of adults with substance use disorders may maintain inappropriately dependent relationships with their grown offspring, missing the necessary “launching phase” in their relationship, so vital to the maturational processes of all family members involved.

When an adult, age 65 or older, abuses a substance, it is most likely to be alcohol and/or prescription medication. The 2012 National Household Survey on Drug Abuse found that 12.5 percent of older adults reported binge drinking and 6.4 percent reported heavy drinking in the month before the survey. Veterans’ hospital data indicate that, in many cases, older adults may be receiving excessive amounts of one class of addictive tranquilizer (benzodiazepines), even though they should receive lower doses.

Further, older adults take these drugs longer than other age groups. Older adults consume three times as many prescription medications as the general population, and this trend is expected to grow as children of the Baby Boom (born 1946–1964) become senior citizens, according to “The epidemiology of alcohol use, problems, and dependence in elders: A review” by K.K. Bucholz, Y. Sheline, and J.E. Helzer.

STEP FAMILIES

Interestingly, many people who abuse substances belong to stepfamilies. Even under ordinary circumstances, stepfamilies present special challenges. Children often live in two households in which different boundaries and ambiguous roles can be confusing. Effective co-parenting requires good communication and careful attention to possible areas of conflict, not only between biological parents, but also with their new partners.

Experts believe that the difficulty of coordinating boundaries, roles, expectations, and the need for cooperation places children raised in blended households at far greater risk of social, emotional, and behavioral problems. Children from stepfamilies may develop substance abuse problems to cope with their confusion about family rules and boundaries.

Substance abuse can intensify problems and become an impediment to a stepfamily’s integration and stability. When substance abuse is part of the family, unique issues can arise. Such issues might include parental authority disputes, sexual or physical abuse, and self‐esteem problems for children.

Substance abuse by stepparents may further undermine their authority, lead to difficulty in forming bonds, and impair a family’s ability to address problems and sensitive issues. Clinicians treating substance abuse should know that the family dynamics of blended families differ somewhat from those of nuclear families and require some additional considerations.

EXTENDED FAMILY AND INTERGENERATIONAL EFFECTS

The effects of substance abuse frequently extend beyond the nuclear family. Extended family members may experience feelings of abandonment, anxiety, fear, anger, concern, embarrassment, or guilt; they may wish to ignore or cut ties with the person abusing substances. Some family members even may feel the need for legal protection from the person abusing substances.

Moreover, substance abuse can lead to inappropriate family subsystems and role taking and the effects on families may continue for generations. Intergenerational effects of substance abuse can have a negative impact on role modeling, trust, and concepts of normative behavior, which can damage the relationships between generations. For example, a child with a parent who abuses substances may grow up to be an overprotective and controlling parent who does not allow his or her children sufficient autonomy.

FRIENDS AND COMMUNITY

Neighbors, friends, and coworkers also experience the effects of substance abuse because drug abusers are often unreliable. Friends may be asked to help financially or in other ways. Coworkers may be forced to compensate for decreased productivity or carry a disproportionate share of the workload. Consequently, they may resent the person abusing substances, according to H.C. Fishman in Intensive Structural Therapy: Treating Families in Their Social Context.

In cultures with a community approach to family care, neighbors may step in to provide whatever care is needed. Sometimes it is a neighbor who brings a child abuse or neglect situation to the attention of child welfare officials. Most of the time, however, these situations go unreported and neglected.

Substance abusers are likely to find themselves increasingly isolated from their families. Often they prefer associating with others who abuse substances or participate in some other form of antisocial activity. These peers support and reinforce each other’s behavior.

Different treatment issues emerge based on the age and role of the person who uses substances in the family and on whether small children or adolescents are present. In some cases, a family might present a healthy face to the community while substance abuse issues lie just below the surface.

TREATMENT

In any form of family therapy for substance abuse treatment, consideration should be given to the range of social problems connected to substance abuse. Problems such as criminal activity, joblessness, domestic violence, and child abuse or neglect may also be present in families experiencing substance abuse. To address these issues, treatment providers need to collaborate with professionals in other fields. This is known as concurrent treatment.

Whenever family therapy and substance abuse treatment take place concurrently, communication between clinicians is vital. In addition to family therapy and substance abuse treatment, multifamily group therapy, individual therapy, and psychological consultation might be necessary.

With these different approaches, coordination, communication, collaboration, and exchange of the necessary releases of confidential information are required. With concurrent treatment, it is important that goal diffusion does not occur. Empowering the family is a benefit of family therapy that should not be sacrificed.

Pamela Wray is a writer and author based in Birmingham, Alabama. She has a blog.

United Nations Says Synthetic Drugs Reaching 'Unprecedented Expansion'

 

A huge wave of synthetic drugs has washed over some 90 countries worldwide, as synthetics have supplanted cocaine and heroin as drugs of choice.

Shutterstock

A new report from the United Nations confirms that synthetic drugs have reached “unprecedented expansion,” with legal highs now being used more often than heroin and cocaine.

Although 350 new psychoactive substances and other legal highs had been identified in 90 countries by the end of last year, none of them are under any form of international control. Many of these substances, like the plant-based psychoactive drug known as khat, are being trafficked into other continents. The U.S. has also seen a huge increase in the number of meth labs discovered and dismantled.
However, the federal government is doing its best to try to stop the epidemic. Synthetic marijuana, commonly known as Spice or K2, was recently classified as a Schedule I drug. Hundreds of arrest warrants have also been issued across 25 states for other synthetic drugs, including Molly and bath salts. There has also been an active effort to stamp out meth trafficking, with the number of meth seizures rising from 10,000 in 2007 to 60,000 in 2012.
Despite these efforts, however, roughly five new drug compounds enter the market every month. And because many of these synthetic drugs are either sprayed with chemicals or contain unknown substances, officials have raised the alarm over a potentially major health crisis.

Earlier this month, 120 people were hospitalized in Dallas and Austin, Texas, over a batch of tainted synthetic marijuana. Several patients reportedly had to be restrained due to violent outbursts as a result of the drug. Last September, three deaths in Colorado were reportedly tied to synthetic marijuana, while four deaths were attributed to Molly that same month.

Parents’ Drug, Alcohol Use Has Huge Influence on Children

 

Parents who use drugs and alcohol as a means of relaxing or celebrating can have a strong influence on children and adolescents.

Family time. Shutterstock

Informing children about the dangers of drug and alcohol use is a challenge that every parent must face, but recent studies have indicated that one of the best ways to impart that information to young people is to reduce one’s own daily intake.

Incorporating alcohol or drugs into social occasions, or into one's daily routine for relaxation, can have a strong influence on children and adolescents. “If you had a tough day, talk about it, verbalize it. Take a hot shower. Turn on music and relax a little. Do not model alcohol use as a way to self-medicate,” said Stephen Wallace, director of the Center for Adolescent Research and Education at Susquehanna University.

Research conducted by scientists from Dartmouth College and Dartmouth Medical School in 2005 underscored the impact of parental cues about drug and alcohol use on young children. In the study, a group of 120 children, ages two to six, participated in a game in which dolls and props were used to prepare a “night out” for adults. Each child was encouraged to select items from a miniature grocery store stocked with 73 different products, including beer, wine, and cigarettes.

Each child bought up to 17 items, and alcohol was among the items chosen by 61 percent of the participants. The study also found that children whose parents drank alcohol on a monthly basis were more likely to include alcohol among their imaginary purchases at the store.

Studies have shown that the average age that children first experiment with marijuana is 14, while alcohol use can begin before the age of 12. Faced with such statistics, Wallace notes that it is crucial for parents to “engage early and often in honest dialogue and express parental expectations [about substance use].” It also remains imperative that parents reflect the core beliefs that they want their children to adopt.

“If parents don’t want their kids to [drink or use drugs], they shouldn’t be setting the example,” noted Dr. Joseph DiFranza of the University of Massachusetts Medical School.

Lab shows powerful, possible next step in electric motors

 

Dr. Chenjie Lin, a postdoctoral researcher, was among those who demonstrated the double-stator switched reluctance machine at the ARPA-E Energy Innovation Summit.

A team from the Renewable Energy and Vehicular Technology Laboratory (REVT) at UT Dallas was one of a few research groups selected for advanced participation in a Department of Energy conference aimed at presenting the next generation of energy technologies.

The DOE's Advanced Research Projects Agency-Energy (ARPA-E) program hosts an annual summit in Washington, D.C., for researchers, entrepreneurs, investors, corporate executives and government officials to share transformational research funded through the program.

Dr. Babak Fahimi, professor of electrical engineering in the Erik Jonsson School of Engineering and Computer Science and director of REVT, has received $2.8 million through an ARPA-E program aimed at reducing the use of rare earth metals in the motors of electric vehicles. The metals are expensive, difficult to find, and are usually imported into the United States from countries such as China. In addition, the mining process for these metals releases significant amounts of pollution into the atmosphere.

REVT members demonstrated electric motors or generators that eliminate rare earth metals. Typical motors are powered through the electromagnetic interaction between a rotor, which contains rare earth metals and rotates, and another part known as a stator, which is stationary but houses electromagnetic sources. The REVT solution, called a double-stator switched reluctance machine (DSSRM), has two stators, one on either side of the rotor, that cause an electromagnetic reaction that produces power. This approach produces significantly greater power and torque at a given size and weight than traditional motor technologies without the use of permanent magnets.
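
For readers who want a feel for how a reluctance machine generates torque without permanent magnets, the sketch below works through the textbook relationship: torque comes from the change of phase inductance with rotor angle. It is a generic, single-phase illustration with made-up numbers, not a model of the REVT double-stator design.

```python
# Illustrative sketch (not the REVT design): a switched reluctance machine
# produces torque because phase inductance L varies with rotor angle theta.
# For a single phase carrying current i, the co-energy torque is
#   T(theta, i) = 0.5 * i**2 * dL/dtheta
# All numbers below are arbitrary placeholders for illustration only.

import math

def phase_inductance(theta, L_min=2e-3, L_max=10e-3, rotor_poles=8):
    """Idealized sinusoidal variation of phase inductance (henries) with rotor angle (radians)."""
    return 0.5 * (L_max + L_min) + 0.5 * (L_max - L_min) * math.cos(rotor_poles * theta)

def reluctance_torque(theta, current, d_theta=1e-5):
    """Numerical 0.5 * i^2 * dL/dtheta, in newton-meters."""
    dL = (phase_inductance(theta + d_theta) - phase_inductance(theta - d_theta)) / (2 * d_theta)
    return 0.5 * current**2 * dL

if __name__ == "__main__":
    for deg in range(0, 46, 5):
        theta = math.radians(deg)
        print(f"theta = {deg:2d} deg  ->  T = {reluctance_torque(theta, current=20.0):+.3f} N*m")
```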

"The transformative nature of our motor technology stems from a novel magnetic configuration, which significantly reduces the radial forces while increasing the motional forces by a factor of three," Fahimi said. "This technology also benefits from high levels of fault tolerance, low-cost manufacturing and low acoustic noise. I strongly believe this technology is highly appealing to automotive, oil and gas, and renewable energy industries."

Besides delivering more power and torque than competing technologies, this machine could be manufactured entirely in the United States, which would eliminate the pollution from mining rare earth metals while also significantly reducing the air pollution associated with vehicle emissions. Other applications of this technology include airplanes, fans, pumps, wind generators and robots.

The research, first funded in 2012, has one patent pending. At the conference earlier this year, REVT members demonstrated the technology to potential commercial licensees.

Team members who demonstrated the technology included Pete Poorman, assistant director of corporate relations, and Drs. Wei Wang and Chenjie Lin, postdoctoral researchers in the lab.

"Having the opportunity to present at the ARPA-E Energy Innovation Summit was a huge opportunity to further our work," Poorman said. "Being one of the few projects selected for the round-table discussion and congressional reception is both an honor and an acknowledgement of the excellent work being done in the REVT lab."


Story Source:

The above story is based on materials provided by University of Texas at Dallas. Note: Materials may be edited for content and length.

Making better medicines with handful of chemical building blocks

 

University of Illinois chemist Martin Burke, a pioneer of a technique that constructs complex molecules from simple chemical "building blocks," led a group that found that thousands of compounds in a class of molecules called polyenes -- many of which have great potential as drugs -- can be built simply and economically from a scant dozen different building blocks.

The researchers published their findings in the journal Nature Chemistry.

"We want to understand how these molecules work, and synthesis is a very powerful engine to drive experiments that enable understanding," said Burke, a chemistry professor at the U. of I. and the Howard Hughes Medical Institute. "We think this is a really powerful road map for getting there.

"Once you have the pieces in a bottle, you can make naturally occurring molecules, or you can change the pieces slightly to make them better. Usually, that's such a herculean task that it slows down research. But if that part becomes on-demand, you can make anything you want, and it can powerfully accelerate the drug discovery process."

In the same way that plastic building blocks of different sizes and shapes can snap together because they share a simple connector, the chemical building blocks are linked together with one simple reaction. This gives scientists freedom to build molecules that may be difficult or expensive to extract from their natural source or to make in a lab.

One advantage of the building-block approach is that it allows the researchers to mix and match parts to build many different molecules, and to omit or substitute parts to make a potentially therapeutic substance better for human health. For example, Burke's group recently synthesized a derivative of the anti-fungal medication amphotericin (pronounced AM-foe-TAIR-uh-sin), which led to a big breakthrough in understanding how this clinically vital but highly toxic medicine works and the discovery of another derivative that is nontoxic to human cells while still effective at killing fungus.

After their success in synthesizing derivatives of amphotericin, which fall into the polyene category, the researchers wondered, how many different building blocks would it take to make all the polyenes? (Polyene is pronounced polly-een.)

Looking at the structures of all the known naturally occurring polyenes -- thousands in all -- Burke and graduate students Eric Woerly and Jahnabi Roy focused on the smaller pieces that made up the molecules and found that many elements were common across numerous compounds. After careful analysis, they calculated that more than three-quarters of all natural polyene frameworks could be made with only 12 different blocks.
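
As a rough illustration of the kind of coverage analysis described above, the short sketch below counts what fraction of a set of target frameworks can be assembled from a chosen kit of blocks, assuming each framework is described by the set of fragments it requires. The fragment labels and targets are invented placeholders, not the actual polyene data or the Burke group's software.

```python
# Hypothetical sketch of a coverage analysis: given each target framework expressed
# as the set of building-block fragments it requires, compute what fraction of the
# targets can be assembled from a chosen kit of blocks. Data are invented placeholders.

def coverage(targets, kit):
    """Fraction of targets whose required fragments are all present in the kit."""
    kit = set(kit)
    buildable = sum(1 for required in targets if set(required) <= kit)
    return buildable / len(targets)

if __name__ == "__main__":
    targets = [
        {"B1", "B2", "B3"},   # each set lists the blocks one framework needs
        {"B2", "B4"},
        {"B1", "B5", "B6"},
        {"B7", "B8"},
        {"B2", "B3", "B9"},   # B9 is outside the kit, so this one is not buildable
    ]
    kit = ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8"]
    print(f"{coverage(targets, kit):.0%} of the example frameworks are buildable from the kit")
```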

"That is the key most surprising result," Burke said. "We've had this gut instinct that there will be a set number of building blocks from which most natural products can be made. We're convinced, based on this result, that we can put together a platform that would enable on-demand assembly of complex small molecules. Then researchers can focus on exploring the function of these molecules, rather than spending all their time and energy trying to make them."

To demonstrate this surprising finding, the researchers synthesized several compounds representing a wide variety of polyene molecules using only the dozen designated building blocks. Many of these building blocks are available commercially thanks to a partnership between Burke's group and Sigma-Aldrich, a chemical company.

Burke hopes that identifying the required building blocks and making them widely available will speed understanding of polyene natural products and their potential as pharmaceuticals, particularly compounds that until now have been left unexplored because they were too costly or time-consuming to make.

Burke's group hopes eventually to identify and manufacture a set of building blocks from which any researcher -- trained chemist or not -- can build any small molecule.

"Now that we have this quantifiable result, that with only 12 building blocks we can make more than 75 percent of polyenes, we are committed to figuring out a global collection of building blocks -- how to make them, how to put them together -- to create a generalized approach for small-molecule synthesis."

12 Benefits of Apple Cider Vinegar

 

Apple cider vinegar is good for much more than dressing that little salad

Vinegar's virtues go well beyond the kitchen. The liquid, made from the alcoholic fermentation of a carbohydrate followed by a secondary acetic acid fermentation, has also been prized for its medicinal qualities for thousands of years. The Greek scholar Hippocrates, born in 460 B.C., was already praising the substance in his writings.

Over the years, vinegar has been made by fermenting a variety of materials, including molasses, fruit, honey, potatoes, beets, and malt. Among the many types, the one made from apples stands out for its properties. The Vinegar Institute is one of the organizations promoting the product's versatility. It is even possible to list 12 of its benefits, so you can put vinegar to use beyond the salad bowl.

The uses below are reported by science and by popular wisdom. Take a look:

1. Freshens breath

Some swear that rinsing the mouth with apple cider vinegar diluted in water can reduce bad breath. Could it be?

2. Treats acid reflux

Many people believe this problem is caused by an overproduction of acid, but it is quite the opposite: the body does not produce enough acid. The treatment consists of taking one to two teaspoons of apple cider vinegar daily.

3. Relieves gastrointestinal discomfort

If you suffer from diarrhea caused by a bacterial infection, the liquid may help thanks to its antibiotic properties. Some specialists also suggest that the pectin found in apple cider vinegar can help calm intestinal spasms. Just mix one to two tablespoons of the liquid into water or apple juice and drink it.

4. Prevents indigestion

If you are planning to overindulge in a meal that might not sit well, try drinking a teaspoon of apple cider vinegar mixed with a teaspoon of honey in a glass of warm water. Drink it 30 minutes before the meal.

5. Stops hiccups

Popular wisdom offers plenty of "remedies" to "stop" hiccups. Try this one the next time they strike: a teaspoon of apple cider vinegar straight down the throat!

6. Cares for your hair

For oily hair, vinegar works as a cleanser: rinse the hair with a mixture of two tablespoons of vinegar in a glass of water.

7. Soothes a sore throat

As soon as you feel the first signs of a sore throat, try one of two methods: mix ¼ cup of apple cider vinegar with warm water and gargle every hour; or mix a tablespoon of the liquid with hot water and a little honey and drink it.

8. Fights itching

A little vinegar applied with a cotton ball is commonly recommended to stop a mosquito bite from itching. It will sting, but it will help calm the irritation.

9. Minimizes skin problems

Many people who suffer from skin conditions swear by apple cider vinegar for reducing inflammation. Try dabbing vinegar on the affected areas with a cotton ball.

10. Lowers glucose levels

Apple cider vinegar also has properties that help fight diabetes. Many studies have shown its positive effect on blood sugar levels. A 2007 study of participants with type 2 diabetes found that two tablespoons of the vinegar before bed lowered morning glucose by 4 to 6 percent.

11. Lowers cholesterol and high blood pressure

Several studies in rats have shown vinegar's benefits for lowering cholesterol and high blood pressure. There is no evidence yet that it works in people, but research in this area continues.

12. Aids weight loss

Vinegar has long been used to help with weight loss. Although this effect has not been scientifically proven, many people swear by the practice. A small 2005 study found that people who ate a piece of bread with a small amount of vinegar felt fuller than those who had no vinegar. Even so, we know that (unfortunately) there is no magic formula for losing weight!

Source: Mother Nature Network

Science of school lunch: Pictures tell story about lunch policies, healthy consumption

 


You can mandate fruits and vegetables, but you can’t make kids eat. With new methodology developed at UVM, researchers can measure the meal kids actually consume.

In terms of ambience, Charlotte Central's cafeteria is -- well, conjure up your own elementary school lunch experience. There's more than one reason to run to recess. But on a recent visit to observe a group of researchers from UVM's Johnson Lab, the lunch ladies were serving up something more likely to be found on a restaurant menu: risotto with mushrooms and peas. It's the result of a host of programs by schools around Vermont to offer more tempting choices -- with locally sourced ingredients when possible, including herbs and vegetables from the playground garden -- and to get children to eat more healthfully. But is it working?

That's what Rachel Johnson, Robert L. Bickford Jr. Green and Gold Professor of Nutrition and Food Sciences, along with her research team, is trying to find out. And they aren't alone in their concern. Since Fall 2012, USDA regulations require students across the country to take a fruit or vegetable with their lunch, a good intention that might easily go to the garbage.

To get answers about what actually happens to those dressed up peas and mushrooms -- or the obligatory apple next to the mac and cheese -- the Johnson Lab has developed state-of-the-art digital imaging to measure consumption, a method just validated by a paper published in the Journal of the Academy of Nutrition and Dietetics.

The researchers, Johnson's "army of undergraduates," image children's trays when they leave the line and then again when they're finished eating. They've already weighed and photographed a correct portion of each fruit and vegetable item offered, as well as analyzed recipes to determine how much fruit and vegetable a serving contains.

Back at the lab, researchers visually compare the composite before-and-after photos against the reference data and can determine consumption to within two grams - a statistically valid but much less labor-intensive means of assessing dietary intake than the current gold standard of individually weighing the portions a child selects and comparing them against plate waste. The time saved allows for a much larger sample size.
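
The arithmetic behind the method is simple once the reference portions have been weighed: consumption is the reference weight multiplied by the difference between the fraction of a portion on the tray before lunch and the fraction left afterward. The sketch below illustrates that calculation with placeholder numbers; it is not the Johnson Lab's actual imaging software.

```python
# A minimal sketch of the arithmetic behind before/after tray imaging, assuming each
# item is scored visually as a fraction of a pre-weighed reference portion.
# The reference weights and visual fraction estimates below are placeholders.

REFERENCE_PORTION_GRAMS = {"apple slices": 110.0, "peas and mushrooms": 85.0}

def grams_consumed(item, fraction_selected, fraction_left):
    """Estimated grams eaten = reference weight * (fraction taken - fraction left over)."""
    return REFERENCE_PORTION_GRAMS[item] * (fraction_selected - fraction_left)

if __name__ == "__main__":
    # e.g., a tray photo shows a full apple serving before lunch and a quarter left after
    tray = [("apple slices", 1.00, 0.25), ("peas and mushrooms", 0.50, 0.50)]
    for item, before, after in tray:
        print(f"{item}: ~{grams_consumed(item, before, after):.0f} g consumed")
```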

"Now we're exploring ways to employ the method," says Johnson's doctoral student Sarah Amin. In Charlotte she's leading a new study to evaluate whether non-researchers -- parents, teachers, community volunteers -- can be trained to collect the data with equally valid results. The process involves taking the tray image at an accurate angle for later analysis, while also capturing the number on lanyards that participating children wear to track the trays.

If the research is successful, explains Amin, it has broad implications for collecting data on a larger scale, including Web-based training that could be done anywhere. "There are amazing interventions to get kids to eat more fruits and vegetables -- farm-to-school programs, school gardens -- and we know that exposure to foods helps develop preference for those foods -- so we're interested in accurately measuring what they're eating," says Johnson. "I would love for our lab to become the go-to place in the country to help evaluate the efficacy of these interventions."

Not too cool

In another study from the Johnson Lab, led by Amin and published this spring in the Journal of Child Nutrition and Management, researchers found that the fruits and vegetables students choose at lunch are largely processed versions, primarily 100 percent fruit juice and high-calorie entrees such as pizza and lasagna, with the tomato in the sauce qualifying as a vegetable.

But Johnson is optimistic that over time -- with the introduction of appealing whole fruits and vegetables into familiar favorites, for instance -- kids will come around. Research associate Bethany Yon, in a study recently published in the Journal of School Health, has in fact shown that it worked with flavored milk. When the dairy industry, in advance of impending regulations, started to reformulate flavored milk, traditionally high in both fat and sugar content, it did so incrementally, reducing either fat or sugar to lower calories. Yon's work, using shipment data as a proxy for consumption, has shown that, after some small blips, milk consumption has stayed consistent. "It was nice to see," she says, "that small, subtle changes go unnoticed overall by students."

That milk, as Johnson has known since her earliest days as a researcher, is critical. As people first became concerned about childhood obesity, Johnson started looking at beverage consumption and how that impacted the overall quality of a child's diet. Between 1940 and the 1990s, she says, the curve makes a big X with soft drink consumption going up and milk consumption going down. "We were one of the first to sound this alarm," she says, "showing that when kids don't have milk at lunch they don't come close to meeting their dietary needs -- and the beverages displacing milk add empty calories."

Today, with 101 peer-reviewed papers in scientific journals, 12 book chapters, and funded grants and contracts totaling nearly $3.5 million, Johnson puts both her authoritative research credentials and her talent for translating the science into action, serving, among other posts, as spokesperson for the American Heart Association, whose nutrition committee she chairs. But she remains particularly concerned with the health of children. Most recently, Johnson worked with NBC News to develop the nutrition content for the network's new website Parent Toolkit, which recently won a 2014 Webby award.

And she sees, at least in places that have been aggressive about making changes, a hopeful trend in terms of reducing childhood obesity. Good policy, Johnson believes, will herald the shift. "We've worked on education policy changes, physical activity standards, new regulations in schools about limiting food marketing to kids, about using food for fundraisers," she says. "There are a lot of new Federal regulations and policies being put in place.

"I think we're going to see a new world in the next ten to fifteen years that's going to blow us away when we look back," Johnson says. "At my kids' high school there were banks of vending machines with soft drinks and candy and snack foods, all the bake sales. It was just crazy. It's going to seem like the days when people smoked in their offices when we look back. It's just not cool."


Story Source:

The above story is based on materials provided by University of Vermont. The original article was written by Lee Ann Cox. Note: Materials may be edited for content and length.


Journal Reference:

  1. Jennifer C. Taylor, Bethany A. Yon, Rachel K. Johnson. Reliability and Validity of Digital Imaging as a Measure of Schoolchildren's Fruit and Vegetable Consumption. Journal of the Academy of Nutrition and Dietetics, 2014; DOI: 10.1016/j.jand.2014.02.029

New drug for non-Hodgkin lymphoma, chronic lymphocytic leukemia passes early test

 

May 23, 2014

Houston Methodist

A new chemotherapy drug being investigated for its potency against two types of cancer was found by scientists to be effective in about one-third of the 58 patients who participated in a phase I study. The drug, alisertib or MLN8237, inhibits the enzyme aurora A kinase, which is known to be very active during cell division. The present study looks at the safety, tolerability, and preliminary success of alisertib in treating non-Hodgkin lymphoma (NHL) and chronic lymphocytic leukemia (CLL).


A new chemotherapy drug being investigated for its potency against two types of cancer was found by scientists at Houston Methodist and seven other institutions to be effective in about one-third of the 58 patients who participated in a phase I study.

The drug, alisertib or MLN8237, inhibits the enzyme aurora A kinase, which is known to be very active during cell division. The present study, published in the journal Investigational New Drugs, looks at the safety, tolerability, and preliminary success of alisertib in treating non-Hodgkin lymphoma (NHL) and chronic lymphocytic leukemia (CLL).

"An advantage with this drug is it is oral and very effective in a significant number of patients with aggressive lymphoma when used at that dose for 7 days out a 21 day cycle," said hematologist Swaminathan Iyer, M.D., who led the multi-site study.

Drugs commonly used to treat NHL and CLL include chemotherapeutic agents and some biological targeted agents, such as the monoclonal antibodies rituximab, ofatumumab and obinutuzumab, with varying degrees of success.

Although about half of the patients participating in the phase I study experienced side effects, most of them were manageable, and Iyer said that is not unusual for such biologic (non-chemotherapy) drugs.

"The side effects were fairly tolerable in this study," Iyer said. "We would like to see more information from a larger group of patients to fully understand the drug's safety and tolerability for those experiencing the middle-to-later stages of these diseases."

Iyer and his group recommend 50 mg, twice-daily doses of alisertib for the advanced-phase trials of the drug, which Iyer says have begun enrollment.

Alisertib is not yet approved for general medical use by the FDA. Its impact is being investigated in a separate phase III trial for a specific group of lymphomas, the T cell lymphomas. Houston Methodist is a participating study site for that project. Initial phase II reports in these T cell lymphomas showed a 57% response rate, the highest ever noted for any single agent in this disease entity.


Story Source:

The above story is based on materials provided by Houston Methodist. Note: Materials may be edited for content and length.


Journal Reference:

  1. Kevin R. Kelly, Thomas C. Shea, André Goy, Jesus G. Berdeja, Craig B. Reeder, Kevin T. McDonagh, Xiaofei Zhou, Hadi Danaee, Hua Liu, Jeffrey A. Ecsedy, Huifeng Niu, Ely Benaim, Swaminathan Padmanabhan Iyer. Phase I study of MLN8237—investigational Aurora A kinase inhibitor—in relapsed/refractory multiple myeloma, Non-Hodgkin lymphoma and chronic lymphocytic leukemia. Investigational New Drugs, 2013; 32 (3): 489 DOI: 10.1007/s10637-013-0050-9

Personal judgments swayed by group opinion, but only for three days

 


We all want to feel like we're free-thinking individuals, but there's nothing like the power of social pressure to sway an opinion. New research suggests that people do change their own personal judgments so that they fall in line with the group norm, but the change only seems to last about 3 days. The research is published in Psychological Science, a journal of the Association for Psychological Science.

"Our findings suggest that exposure to others' opinions does indeed change our own private opinions -- but it doesn't change them forever," says psychological scientist and study author Rongjun Yu of South China Normal University. "Just like working memory can hold about 7 items and a drug can be effective for certain amount of time, social influence seems to have a limited time window for effectiveness."

The fact that personal judgments are swayed by the opinions of others is a well-established phenomenon in psychology research.

But it's unclear whether oft-observed social conformity reflects public compliance, motivated by a desire to fit in with the group and avoid social rejection, or private acceptance, which leads to a genuine change in personal opinion that persists even when social influence is removed.

Yu and colleagues Yi Huang and Keith Kendrick decided to investigate this question in the lab. They recruited Chinese college students to participate in a study exploring how "people perceive facial attractiveness." The students looked at 280 digital photographs of young adult Chinese women and were asked to rate the attractiveness of each face on an 8-point scale.

After rating a face, they saw the purported average of 200 other students' ratings for that face. Importantly, the group average matched the participant's rating only 25% of the time. The rest of the time, the group average fell 1, 2, or 3 points above or below the participant's rating.

The students were brought back to the lab to rate the faces again after either 1 day, 3 days, 7 days, or 3 months had passed.
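
For readers curious about the manipulation itself, the toy sketch below generates the kind of fake "group average" feedback described above: it agrees with the participant's rating a quarter of the time and otherwise shifts it by 1, 2, or 3 points in either direction, staying on the 1-to-8 scale. It is an illustration of the design, not the authors' experimental code.

```python
# A toy sketch of the feedback manipulation described above (not the authors' code):
# on 25% of trials the purported group average equals the participant's rating;
# otherwise it is offset by 1, 2, or 3 points in either direction, kept on the 1-8 scale.

import random

def group_feedback(own_rating, rng=random):
    """Return a fake 'group average' rating for a face rated on a 1-8 scale."""
    if rng.random() < 0.25:
        return own_rating                       # agree with the participant
    offset = rng.choice([1, 2, 3]) * rng.choice([-1, 1])
    return min(8, max(1, own_rating + offset))  # disagree by 1-3 points, clipped to the scale

if __name__ == "__main__":
    random.seed(0)
    for rating in [2, 4, 7]:
        print(rating, "->", [group_feedback(rating) for _ in range(5)])
```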

The data showed that the group norm seemed to sway participants' own judgments when they re-rated the photos 1 and 3 days after the initial session.

There was, however, no evidence for a social-conformity effect when the intervening period was longer (either 7 days or 3 months after the first session).

According to the researchers, the fact that participants' opinions were swayed for up to 3 days suggests more than a superficial lab-based effect -- rather, group norms seem to have had a genuine, albeit brief, impact on participants' privately held opinions.

These studies are notable, says Yu, because they were able to control for methodological issues that often arise in studies that use a test-retest format, such as the natural human tendencies to regress to the mean and to behave consistently over time.

The one question that Yu and colleagues still don't know the answer to is why the effect lasts for 3 days. They plan on investigating whether there might be a neurological reason for the duration of the effect, and whether the effect can be manipulated to last for shorter or longer durations.


Story Source:

The above story is based on materials provided by Association for Psychological Science. Note: Materials may be edited for content and length.


Journal Reference:

  1. Y. Huang, K. M. Kendrick, R. Yu. Conformity to the Opinions of Other People Lasts for No More Than 3 Days. Psychological Science, 2014; DOI: 10.1177/0956797614532104

How Alzheimer's blood test could be first step in developing treatments to halt or slow disease

 

May 23, 2014

American Association for Clinical Chemistry (AACC)

A blood test has the potential to predict Alzheimer’s disease before patients start showing symptoms, researchers have reported. Now they expand upon this groundbreaking research and discuss why it could be the key to curing this devastating illness. "This discovery is a potentially enormous breakthrough in the fight against Alzheimer's," said an expert. "If research aimed at a cure for Alzheimer's is to move forward, it is crucial that Alzheimer's clinical trials find a way to recruit patients who are still asymptomatic, since they are the ones most likely to respond to treatment."


In March of this year, a team of Georgetown University scientists published research showing that, for the first time ever, a blood test has the potential to predict Alzheimer's disease before patients start showing symptoms. AACC is pleased to announce that a late-breaking session at the 2014 AACC Annual Meeting & Clinical Lab Expo in Chicago will expand upon this groundbreaking research and discuss why it could be the key to curing this devastating illness.

According to the World Health Organization, the number of Alzheimer's patients worldwide is expected to skyrocket from the 35.6 million individuals who lived with it in 2010 to 115.4 million by 2050. Currently, however, all efforts to cure or effectively treat the disease have failed. Experts believe one explanation for this lack of success could be that the window of opportunity for treating Alzheimer's has already closed by the time its symptoms manifest.

Enter the research team led by Howard Federoff, MD, PhD, executive dean at Georgetown University School of Medicine in Washington, D.C. In cognitively healthy adults age 70 and older, Federoff's team measured the levels of 10 lipids found in the blood to identify, with 90% accuracy, which study group participants would develop cognitive impairment over a 2-3 year period. If this 10-lipid test is validated in larger studies, it could help researchers to develop treatments for Alzheimer's that halt or slow the disease before it even begins. A blood test would also be easier to perform than current Alzheimer's tests that use brain imaging or hard-to-collect cerebrospinal fluid, meaning that the Federoff team's test could be used for population-wide Alzheimer's screening.
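
To make the idea concrete, the sketch below shows the general shape of such an analysis: a simple classifier trained on a 10-analyte panel and scored by cross-validation. The data are synthetic and the model is generic; this is not the Federoff team's method or its results.

```python
# A generic, illustrative sketch of using a 10-analyte blood panel to predict a binary
# outcome with a simple classifier. The data are synthetic; this only shows the shape
# of such an analysis, not the actual lipidomics model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_lipids = 200, 10

# Synthetic lipid measurements and outcomes (1 = later cognitive impairment).
X = rng.normal(size=(n_subjects, n_lipids))
true_weights = rng.normal(size=n_lipids)
y = (X @ true_weights + rng.normal(scale=1.0, size=n_subjects) > 0).astype(int)

# Cross-validated accuracy of a logistic-regression classifier on the panel.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```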

Amrita Cheema, PhD, one of the main investigators on Federoff's team, will give an in-depth lecture on the test's significance, the science behind it, and the research techniques used to develop it in the July 28 AACC session, "Lipidomics: A Powerful Approach to Identify Pre-clinical Memory Impairment in Older Adults." Cheema is an associate professor and co-director of the Proteomics and Metabolomics Shared Resource at Georgetown University.

"This discovery is a potentially enormous breakthrough in the fight against Alzheimer's," said AACC CEO Janet B. Kreizman. "If research aimed at a cure for Alzheimer's is to move forward, it is crucial that Alzheimer's clinical trials find a way to recruit patients who are still asymptomatic, since they are the ones most likely to respond to treatment. The Federoff team's test could be the answer to this problem, and it also demonstrates how laboratory medicine helps patients achieve better health -- by not only ensuring that patients receive timely and appropriate treatment, but also by enabling researchers to develop effective treatments in the first place."


Story Source:

The above story is based on materials provided by American Association for Clinical Chemistry (AACC). Note: Materials may be edited for content and length.

Active genes in neurons profiled based on connections

 


The team tested their profiling technique on midbrain mouse neurons (blue) that use the neurotransmitter dopamine to send signals to a brain region known as the nucleus accumbens. To do so, they tagged protein-assembling ribosomes (red) using a small antibody that bound a fluorescent protein (green).

When it comes to the brain, wiring isn't everything. Although neurobiologists often talk in electrical metaphors, the reality is that the brain is not nearly as simple as a series of wires and circuits. Unlike their copper counterparts, neurons can behave differently depending on the situation.

Researchers in Jeffrey Friedman's Laboratory of Molecular Genetics have devised a way to create snapshots of gene expression in neurons based on their connections. These snapshots contain exhaustive lists of the active genes within neurons that send information to a synapse, the junction between neurons.

Their new technique, called Retro-TRAP, merges two approaches to understanding the brain: mapping all of its connections and profiling gene expression within populations of neurons, Friedman says. "We hope that Retro-TRAP will be broadly used and provide a more granular understanding of how complex neural circuits function and ultimately lead to better treatments for neurological and neuropsychiatric disorders."

"Refinements in neuroscience over time have allowed us to explore how the nervous system works in ever-greater detail, and the approach we have developed continues this trend," says Mats Ekstrand, a research associate in the laboratory. "By building on existing techniques, we are now able to take a closer look at the types of cells involved in a particular circuit and what they are doing."

In the long run, these sorts of insights might help explain why some diseases, such as Parkinson's disease, afflict particular sets of neurons, or someday make it possible to precisely target treatments at a dysfunctional neural circuit, rather than bathing the entire brain in a drug.

The researchers modified a technique known as translating ribosome affinity purification (TRAP), developed at Rockefeller by Nathaniel Heintz, Paul Greengard and others to identify gene expression using green fluorescent protein to tag protein-assembling machines called ribosomes.

In research published in Cell, Ekstrand, graduate student Alexander Nectow and colleagues describe how they used Retro-TRAP to introduce green fluorescent protein to the neuron via a virus that travels backwards from a synapse into the body of a mouse neuron. The researchers used a small antibody to link the ribosome with the fluorescent protein. Then, using these fluorescent tags, the researchers pulled out the ribosomes and sequenced the genetic messages passing through them. In this way, they produced a list of active genes.
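
A typical downstream step for data like these is to ask which transcripts are enriched in the ribosome pull-down relative to a whole-tissue input sample. The sketch below illustrates that comparison with invented counts; it is a generic calculation, not the pipeline used in the Cell paper.

```python
# A hypothetical downstream step for data like these (not the study's pipeline):
# compare transcript counts in the ribosome pull-down (IP) against whole-tissue
# input and report genes enriched in the connected neurons. Counts are invented.

import math

def fold_enrichment(ip_counts, input_counts, pseudocount=1.0):
    """Per-gene log2 fold enrichment of IP over input, after library-size normalization."""
    ip_total = sum(ip_counts.values())
    input_total = sum(input_counts.values())
    result = {}
    for gene in ip_counts:
        ip_rate = (ip_counts[gene] + pseudocount) / ip_total
        input_rate = (input_counts.get(gene, 0) + pseudocount) / input_total
        result[gene] = math.log2(ip_rate / input_rate)
    return result

if __name__ == "__main__":
    ip = {"Th": 900, "Slc6a3": 750, "Gfap": 40, "Actb": 5000}      # pull-down counts (invented)
    whole = {"Th": 300, "Slc6a3": 250, "Gfap": 800, "Actb": 9000}  # input counts (invented)
    for gene, lfc in sorted(fold_enrichment(ip, whole).items(), key=lambda kv: -kv[1]):
        print(f"{gene:8s} log2 enrichment = {lfc:+.2f}")
```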

To test their technique, the team focused on inputs to a well-studied part of the brain, the nucleus accumbens, which integrates information from throughout the brain, including regions involved in executive function, memory, depression, reward-related behavior, feeding and other functions, Nectow says.

"We wanted to target a selected number of inputs into the nucleus accumbens because we figured we might be able to get some molecular clues as to why it is important in regulating so many functions," Nectow says.

Using Retro-TRAP, they created molecular profiles of neurons extending from the hypothalamus and ventral midbrain that project to the nucleus accumbens. The results confirmed that Retro-TRAP works.

"The nucleus accumbens receives a lot of signals from the ventral midbrain via the neurotransmitter dopamine, and, as expected, the genes we sequenced included many associated with dopamine neurons," Nectow says.

Their data also contained some new discoveries. For instance, they found some neurons in the lateral hypothalamus express the p11 gene implicated in depression. After some further work, they found these neurons also tended to express a protein called orexin, a regulator of sleep and feeding -- suggesting a molecular association between depression and some of its symptoms.


Story Source:

The above story is based on materials provided by Rockefeller University. Note: Materials may be edited for content and length.


Journal Reference:

  1. Mats I. Ekstrand, Alexander R. Nectow, Zachary A. Knight, Kaamashri N. Latcha, Lisa E. Pomeranz, Jeffrey M. Friedman. Molecular Profiling of Neurons Based on Connectivity. Cell, 2014; 157 (5): 1230 DOI: 10.1016/j.cell.2014.03.059

New glasses may increase risk of falls in older adults, suggests review

 

May 23, 2014

Wolters Kluwer Health: Lippincott Williams & Wilkins

Blurred vision contributes to the risk of falling in older adults -- but getting new glasses with a big change in vision prescription may increase the risk rather than decreasing it, according to a new article. Unaccustomed magnification may cause objects to appear closer or farther than they really are, thus affecting the reflexes linking the vestibular (balance) system with eye movements. For older patients who aren't used to bifocals and "progressive" lenses, switching to these types of lenses may cause distortion in peripheral vision.


Blurred vision contributes to the risk of falling in older adults -- but getting new glasses with a big change in vision prescription may increase the risk rather than decreasing it, according to a special article, '2013 Fry Lecture: Blurred Vision, Spectacle Correction & Falls in Older Adults,' in the June issue of Optometry and Vision Science, official journal of the American Academy of Optometry. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.

Optometrists can help to prevent falls by avoiding over-aggressive vision correction in older patients at risk, according to the review by David B. Elliott, PhD, 2013 recipient of the prestigious Glenn A. Fry Lecture Award. "Our 2013 Glenn A. Fry Award winner has been studying the effects of blurred vision and vision correction on falls among the elderly," comments Anthony Adams, OD, PhD, Editor-in-Chief of Optometry and Vision Science. "In his Award-winning lecture, he provides some very special insights into how this may happen and how we as a profession may help to minimize falls related to vision loss."

Vision Correction May Actually Increase Risk of Falling

Falls are the major cause of accidental death and nonfatal injuries in elderly US adults. At least one-third of healthy adults aged 65 or older fall at least once a year. For those aged 90 or older, the risk increases to about 60 percent.

But falls in older adults aren't accidents, according to Dr. Elliott. Most of the time, they're related to a wide range of risk factors including older age, disabilities, muscle weakness, and many different medical conditions. "The more risk factors you have, the more likely you are to fall," Dr. Elliott writes.

Reduced vision is one important risk factor, suggesting that interventions to correct vision -- particularly glasses and cataract surgery -- would reduce the risk of falling. Surprisingly, however, most studies have shown little or no reduction in falls among older adults receiving a new vision correction.

Magnification from some new glasses provided in one study may contribute to the increase in risk, Dr. Elliott suggests. "Some of the subjects received large changes in spectacle prescription…. Older frail people may have greater difficulty adapting to such changes and be at increased risk of falling during this adaptation period."

New Glasses for Older Patients at Risk of Falls -- 'If It Ain't Broke, Don't Fix It'

Unaccustomed magnification may cause objects to appear closer or farther than they really are, thus affecting the reflexes linking the vestibular (balance) system with eye movements. For older patients who aren't used to bifocals and "progressive" lenses -- with different areas of correction for near and distance vision -- switching to these types of lenses may cause distortion in peripheral vision.

So if maximizing vision correction isn't the answer, what can optometrists do to help prevent the risk of falls in elderly patients? An important first step is to assess risk factors, including history of falls, medical conditions, and medications used.

In addition, Dr. Elliott proposes taking a "conservative" approach to prescribing new glasses for older adults with a history of falls or risk factors for falling. He suggests some changes to the vision prescription for optometrists to consider in this situation. He adds, "Indeed, if a patient reports no problems with vision but simply requests a new frame, 'If it ain't broke don't fix it' is an appropriate clinical maxim."

He also suggests keeping the same type of lens (bifocals, progressive lenses, etc.) unless there's a significant reason for change. "Progressive lenses or bifocals should never be prescribed to patients who are used to wearing single-vision glasses and who could be characterized at risk for falls," Dr. Elliott writes. He notes that one randomized controlled trial showed that providing an additional pair of single-vision distance glasses for outdoor mobility use -- as opposed to bifocal or progressive addition lenses -- can reduce the rate of falls.


Story Source:

The above story is based on materials provided by Wolters Kluwer Health: Lippincott Williams & Wilkins. Note: Materials may be edited for content and length.


Journal Reference:

  1. David B. Elliott. The Glenn A. Fry Award Lecture 2013. Optometry and Vision Science, 2014; 91 (6): 593 DOI: 10.1097/OPX.0000000000000268

Disaster Planning: Risk assessment vital to development of mitigation plans

 


Wildfires and flooding affect many more people in the USA than earthquakes and landslides, and yet the dread, the perceived risk, of the latter two is much greater than for hazards that are more frequent and cause greater loss of life. Research published in the International Journal of Risk Assessment and Management suggests that a new paradigm for risk assessment is needed so that mitigation plans in the face of natural disasters can be framed appropriately by policy makers and those in the emergency services.

Maura Knutson (née Hurley) and Ross Corotis of the University of Colorado, Boulder, explain that earlier efforts to incorporate a sociological perspective and human risk perception into hazard-mitigation plans commonly used equivalent dollar losses from natural hazard events as the statistic on which decisions were based. Unfortunately, this fails to take into account how people actually view natural hazards, the team reports. It can also lead to a lack of public support for, and compliance with, emergency plans when disaster strikes, and ultimately to worse outcomes.

The researchers have therefore developed a framework that combines the usual factors for risk assessment (injuries, deaths, and economic and collateral losses) with human perception of the risks associated with natural disasters. The framework incorporates risk perception by plotting natural hazards against "dread" and "familiarity," two variables well known to social psychologists as explaining the greatest variability in an individual's perception of risk, whether the hazard is an earthquake, landslide, wildfire, storm, tornado, hurricane, flood, avalanche, or even volcanic activity. "Understanding how the public perceives the risk for various natural hazards can assist decision makers in developing and communicating policy decisions," the team says.

The higher the perceived risk of a natural disaster, the more people want to see that risk reduced, and that means seeing their tax dollars spent on mitigation and preparation. For example, far more money is spent on reducing earthquake risk than on reducing wildfire risk, perhaps because the perceived risk is much greater, even though both can cause significant losses of life and property. The team's new framework is intended as an aid to decision making in these situations, and perhaps also as a way to give members of the public a clearer understanding of actual, rather than perceived, risk.
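As a rough illustration of the idea, and not the Hurley and Corotis framework itself, the sketch below blends a conventional loss statistic with a perception term built from "dread" and "familiarity." Every number, weight, and scale in it is invented for demonstration.

```python
# A minimal sketch, assuming a simple weighted blend of measured loss and
# perceived risk. This is NOT the published hazard-mitigation framework;
# all values and weights are made up.

from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    expected_annual_loss: float  # e.g. dollars per year (illustrative)
    dread: float                 # 0 (low dread) .. 1 (high dread)
    familiarity: float           # 0 (unfamiliar) .. 1 (very familiar)

def composite_risk_score(h: Hazard, w_loss: float = 0.5, w_perception: float = 0.5) -> float:
    """Blend measured loss with perceived risk (high dread, low familiarity)."""
    perceived = h.dread * (1.0 - h.familiarity)
    # Crudely normalize loss against an arbitrary reference value.
    normalized_loss = min(h.expected_annual_loss / 1e9, 1.0)
    return w_loss * normalized_loss + w_perception * perceived

hazards = [
    Hazard("wildfire", 2.5e9, dread=0.3, familiarity=0.8),
    Hazard("earthquake", 1.0e9, dread=0.9, familiarity=0.3),
]
for h in sorted(hazards, key=composite_risk_score, reverse=True):
    print(f"{h.name}: {composite_risk_score(h):.2f}")
```

A blend like this would rank a dreaded, unfamiliar hazard higher than its dollar losses alone would suggest, which is the tension the researchers set out to capture.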


Story Source:

The above story is based on materials provided by Inderscience Publishers. Note: Materials may be edited for content and length.


Journal Reference:

  1. Hurley, M.A. and Corotis, R.B. Perception of risk of natural hazards: a hazard mitigation plan framework. International Journal of Risk Assessment and Management, May 2014

Eumelanin's secrets: Discovery of melanin structure may lead to better sun protection

 

May 22, 2014

Massachusetts Institute of Technology



A snapshot from a molecular dynamics simulation shows the geometric order and disorder characteristics of eumelanin aggregate structures. The different variations of the eumelanin molecules are shown in different colors for clarity.

Melanin -- and specifically, the form called eumelanin -- is the primary pigment that gives humans the coloring of their skin, hair, and eyes. It protects the body from the hazards of ultraviolet and other radiation that can damage cells and lead to skin cancer, but the exact reason why the compound is so effective at blocking such a broad spectrum of sunlight has remained something of a mystery.

Now researchers at MIT and other institutions have solved that mystery, potentially opening the way for the development of synthetic materials that could have similar light-blocking properties. The findings are published in the journal Nature Communications by graduate students Chun-Teh Chen and Chern Chuang, professor of civil and environmental engineering Markus Buehler, and three others.

Although eumelanin has been known for decades, pinning down its molecular structure and identifying the reasons for its broadband light absorption have been daunting tasks. This is, in part, because of the very characteristics that make it so interesting: typically, the constituents of a chemical compound can be determined through spectroscopy, among other tools, but in the case of eumelanin the spectra don't show the sharp peaks that are ordinarily useful in identification. So indirect means of analysis were needed.

The team used a combination of computation and experimental analysis to derive the structure of the material, finding that a major source of the broadband absorption was the physical arrangement of the constituents, not their chemical characteristics. Specifically, the combination of disorder and order in the physical arrangement produces a "smearing" of the material's spectral absorption, which provides its crucial broadband blocking ability.

"You can't do traditional analytical chemistry on this particular system," Chuang, a graduate student in chemistry, says, "where you isolate each component. Only indirect ways of probing" can be used, he says.

The disorder that turned out to be key, the team says -- a physical disorder called "geometric disorder" -- is different from the chemical disorder that other researchers have studied. It turns out that both kinds of disorder may play a complementary role in producing eumelanin's broadband absorption.

The material forms tiny crystals -- a chemically ordered state -- but with intrinsic randomness, such that the orientations of the stacked molecules can be arbitrary and the sizes of the crystals different, forming aggregate structures that are highly disordered. That combination of order and disorder contributes to eumelanin's broadband absorption, the team found.
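A toy calculation can illustrate the "smearing" idea, though it is not the excitonic model used in the paper: summing many narrow absorption peaks whose centers are randomly shifted, as geometric disorder would shift them, turns a sharp line into a broad envelope. All energies, widths, and shifts below are invented.

```python
# Toy illustration of spectral "smearing" from disorder; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
energies = np.linspace(1.5, 4.5, 600)          # photon energy axis, eV (invented)

def gaussian_peak(e, center, width=0.05):
    """Narrow Gaussian absorption line centered at `center`."""
    return np.exp(-0.5 * ((e - center) / width) ** 2)

# A single "ordered" absorber: one sharp peak.
ordered = gaussian_peak(energies, center=2.8)

# A disordered aggregate: many absorbers with randomly shifted peak centers.
centers = 2.8 + rng.normal(scale=0.5, size=200)
disordered = gaussian_peak(energies[:, None], centers).sum(axis=1)
disordered /= disordered.max()                 # normalize for comparison

above_half = energies[disordered > 0.5]
print(f"Disordered envelope spans roughly {above_half[-1] - above_half[0]:.2f} eV at half maximum")
```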

"It's a naturally existing nanocomposite," Buehler says, "that has very critical macroscopic properties as a result of the nanostructure."

While eumelanin molecules all share a basic chemistry, more than 100 variations of that composition exist; the slight variations from one molecule to another may contribute to the disorder that broadens the ability to absorb light, Buehler says. "The jury is still out on which is more important," he says.

Understanding the origins of eumelanin's optical properties could help guide the creation of new synthetic materials, Buehler says. These insights may be useful in developing materials for applications such as pigments, he says, or in improving the efficiency of solar cells.

While this analysis still leaves open questions about the precise structure of eumelanin molecules, Buehler says, "Building an accurate structural model is one of our big aims."

A similar combination of computational modeling based on quantum mechanics, molecular dynamics, and direct observation using electron microscopy "can probably be applied to many systems," Buehler says. "It's a methodological advance that is validated because this system has such unique optical properties, which we can reproduce. It shows the method can be useful."

Sergei Tretiak, a researcher in the theoretical division at Los Alamos National Laboratory who was not involved in this research, says that understanding eumelanin's structure was "a scientific puzzle for a long time." The new work, he says, "provides a somewhat unexpected answer to this conundrum." The researchers' approach, he says, "exemplifies a multidisciplinary approach to [a] complex problem, where a single method is unable to provide a satisfying answer."


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by David L. Chandler. Note: Materials may be edited for content and length.


Journal Reference:

  1. Chun-Teh Chen, Chern Chuang, Jianshu Cao, Vincent Ball, David Ruch, Markus J. Buehler. Excitonic effects from geometric order and disorder explain broadband optical absorption in eumelanin. Nature Communications, 2014; 5 DOI: 10.1038/ncomms4859

New details on microtubules and how the anti-cancer drug Taxol works

 

May 22, 2014

DOE/Lawrence Berkeley National Laboratory



The most detailed look ever at the assembly and disassembly of microtubules, tiny fibers of tubulin protein that play a crucial role in cell division, provides new insight into the success of the anti-cancer drug Taxol.

A pathway to the design of even more effective versions of the powerful anti-cancer drug Taxol has been opened with the most detailed look ever at the assembly and disassembly of microtubules, tiny fibers of tubulin protein that form the cytoskeletons of living cells and play a crucial role in mitosis. Through a combination of high-resolution cryo-electron microscopy (cryo-EM) and new methodology for image analysis and structure interpretation, researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley have produced images of microtubule assembly and disassembly at the unprecedented resolution of 5 angstroms (Å). Among other insights, these observations provide the first explanation of Taxol's success as a cancer chemotherapy agent.

"This is the first experimental demonstration of the link between nucleotide state and tubulin conformation within the microtubules and, by extension, the relationship between tubulin conformation and the transition from assembled to disassembled microtubule structure," says Eva Nogales, a biophysicist with Berkeley Lab's Life Sciences Division who led this research. "We now have a clear understanding of how hydrolysis of guanosine triphosphate (GTP) leads to microtubule destabilization and how Taxol works to inhibit this activity."

Nogales, who is also a professor of biophysics and structural biology at UC Berkeley, as well as an investigator with the Howard Hughes Medical Institute, is the corresponding author of a paper describing this research in the journal Cell. The paper is entitled "High resolution αβ microtubule structures reveal the structural transitions in tubulin upon GTP hydrolysis." Co-authors are Gregory Alushin, Gabriel Lander, Elizabeth Kellogg, Rui Zhang and David Baker.

During mitosis, the process by which a dividing cell duplicates its chromosomes and distributes them between two daughter cells, microtubules disassemble and reform into spindles across which the duplicate sets of chromosomes migrate. For chromosome migration to occur, the microtubules attached to them must disassemble, carrying the chromosomes in the process. The crucial ability of microtubules to transition from a rigid polymerized or "assembled" state to a flexible depolymerized or "disassembled" state -- called "dynamic instability" -- is driven by GTP hydrolysis in the microtubule lattice. Taxol prevents or dramatically slows down the unchecked cell division that is cancer by binding to a microtubule in such a manner as to block the effects of hydrolysis. However, until now the atomic details as to how microtubules transition from polymerized to depolymerized structures and the role that Taxol can play have been sketchy.

"Uncovering the atomic details of the conformational cycle accompanying polymerization, nucleotide hydrolysis, and depolymerization is essential for a complete description of microtubule dynamics," Nogales says. "Such details should significantly aid in improving the potency and selectivity of existing anti-cancer drugs, as well as facilitate the development of novel agents."

To find these details, Nogales, an expert in electron microscopy and image analysis and a leading authority on the structure and dynamics of microtubules, employed cryo-EM, in which protein samples are flash-frozen at liquid nitrogen temperatures to preserve their natural structure. Using an FEI 300 kV Titan cryo-EM from the laboratory of Robert Glaeser, she and her colleagues generated cryo-EM reconstructions of tubulin proteins whose structures were either stabilized by GMPCPP, a GTP analogue, or were unstable and bound to guanosine diphosphate (GDP), or were bound to GDP but stabilized by the presence of Taxol.

The tubulin protein is a heterodimer consisting of alpha (α) and beta (β) monomer subunits. It features two guanine nucleotide binding sites, an "N-site" on the α-tubulin that is buried, and an "E-site" on the β-tubulin that is exposed when the tubulin is depolymerized. Previous microtubule reconstruction studies were unable to distinguish the highly similar α-tubulin and β-tubulin from each other.

"To be able to distinguish the α-tubulin from the β-tubulin, we had to resolve our images at better than 8 Å, which most prior cryo-EM studies were unable to do," Nogales says. "For that, we marked the subunits with kinesin, a protein motor that distinguishes between α- and β-tubulin."

Nogales and her colleagues found that GTP hydrolysis and the release of the phosphate (GTP becomes GDP) leads to a compaction of the E-site and a rearrangement of the α-tubulin monomer that generates a strain on the microtubule that destabilizes its structure. Taxol binding leads to a reversal of this E-site compaction and α-tubulin rearrangement that restores structural stabilization.

"Remarkably, Taxol binding globally reverses the majority of the conformational changes we observe when comparing the GMPCPP and GDP states," Nogales says. "We propose that GTP hydrolysis leads to conformational strain in the microtubule that would be released by bending during depolymerization. This model is consistent with the changes we observe upon taxol binding, which dramatically stabilizes the microtubule lattice. Our analysis supports a model in which microtubule-stabilizing agents like Taxol modulate conformational strain and longitudinal contacts in the microtubule lattice."


Story Source:

The above story is based on materials provided by DOE/Lawrence Berkeley National Laboratory. The original article was written by Lynn Yarris. Note: Materials may be edited for content and length.


Journal Reference:

  1. Gregory Alushin, Gabriel Lander, Elizabeth Kellogg, Rui Zhang, David Baker and Eva Nogales. High resolution αβ microtubule structures reveal the structural transitions in tubulin upon GTP hydrolysis. Cell, May 2014

Brazilian wandering spider – Deadliest spider in the world

 


Spider wasp “towing” its prey (photo: “Bugguide.net”)

On a bicycle ride today I had my second encounter with a Brazilian wandering spider.

It was a scene that seemed to come right out of an Animal Planet documentary. I had stopped to have a snack when something caught my eye: about 10m in front of me, I saw what looked like a huge insect crossing the street. I went closer to check it out, and to my surprise it was a big black wasp "towing" an even bigger spider to the other side of the road.

I immediately remembered a documentary I once saw about these wasps (spider wasps, also called tarantula hawks or Pepsis wasps), which sting the spider to sedate it so that it stays fresh, then haul it back to the nest, where they deposit an egg on the spider's body. When the larva emerges from the egg, it has a juicy, still-living but harmless spider to feed on. Nature sure seems a little gruesome sometimes, no?

I wanted to take a picture with my (very old) cellphone before the duo disappeared into the grass on the other side of the street, but when I got too close, the wasp flew off and kept circling me for a few minutes like a fighter jet to scare me away.


The sedated Brazilian wandering spider as it was sitting on the side of the road.

When I finally got my phone out and took a (Ok, kinda crappy) picture of the spider – which was moving a little but wasn’t going anywhere since it was sedated – the wasp was gone, probably off to find another victim. It’s weird, but I actually felt sorry for the defenseless spider.

A fairly large section of the "first aid" chapter of the guiding course I took in Rio de Janeiro was about snakes and spiders, and at first sight I had a hunch that this was a Brazilian wandering spider (known in Brazil as "Armadeira"), which is considered the most venomous and deadly spider in the world. Apart from being able to kill a human, the venom also has a very uncommon side effect: it can cause long-lasting and painful erections (priapism).

The Brazilian wandering spider is all the more dangerous because it doesn’t run away when approached, but will aggressively – and very fast – attack anything it considers a threat, and that includes humans. These spiders live in highly populated areas in Brazil and are often found inside houses. They are one of the reasons why you should always check your shoes or clothes before putting them on.

So, with the wasp gone, I was left alone with my Brazilian wandering spider, and I decided to take it home. One of my three water bottles was empty, so I put the unfortunate creature in there for the 50km ride back to Volta Redonda.

Once home, I discovered that the spider had some kind of wound to its abdomen (probably from being dragged over the hot asphalt) and had been losing fluid, which made the abdomen look a little deflated. I had hoped to keep it for a while, to see whether it would eventually "wake up" from the sedation, but now I guess that won't happen anymore. (RIP)

 


Brazilian wandering spider: those fangs are impressive and have no problem puncturing human skin. This is one critter to be VERY careful around.


Brazilian wandering spider – The ruler shows this one is about 10cm, but they can grow up to 17cm.

This was yet another reminder of the more dangerous side of Brazil, and that these Brazilian wandering spiders, along with other venomous animals such as snakes, are indeed out there and need to be taken seriously.