terça-feira, 3 de fevereiro de 2015

Whose numbers determine if a targeted cancer therapy is 'worth it'?

 

Health economics helps insurers, health care systems and providers make treatment decisions based on the cost of extra "units" of health arising from a specific treatment. By calculating the cost for each year of life or quality-adjusted year of life gained, these groups can decide whether changing treatments or adding in a new treatment beyond the existing standard of care is "worth it."

However, while the resulting incremental cost effectiveness ratio (ICER) is often presented as an absolute measure upon which to base these decisions, an opinion published by University of Colorado Cancer Center researchers D. Ross Camidge, MD, PhD, and Adam Atherly, PhD, suggests that the consumers of these data need to be much more aware of the assumptions underlying these calculations.

"Increasingly physicians are being presented with health economic analyses in mainstream medical journals as a means of potentially influencing their prescribing. However, it is only when you understand the multiple assumptions behind these calculations that you can see that they are by no means absolute truths," Camidge says.

Take the case of targeted treatment for non-small cell lung cancer (NSCLC). Treating ALK+ lung cancer with crizotinib rather than with either 1st or 2nd line chemotherapy controls patients' cancer longer and with fewer side effects -- but is it cost effective?

One of the most prominent health economic assessments of crizotinib comes from a Canadian study, which showed that introducing crizotinib for ALK+ lung cancer would cost $255,970 per quality-adjusted life year gained (taking into account screening for the ALK abnormality, the cost of the drug, and its benefits and side effects compared to standard second-line chemotherapy). This is above the threshold most insurers will pay, so according to this ICER, treating ALK+ lung cancer with crizotinib is not necessarily worth it.

"But what if we change some of the assumptions used in the Canadian model?" Camidge and Atherly ask.

For example, based on reports of people returning to normal lives after treatment with crizotinib, what if the assumption is that the drug offers 90 percent quality of life, as opposed to the approximate 50 percent quality of life assumed in the Canadian study? If crizotinib returns a patient to 90 percent quality of life, as has been common, the cost effectiveness ratio of the drug drops dramatically to $143,421 per quality-adjusted life year gained.
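To see how sensitive the ICER is to this single assumption, consider a deliberately simplified sketch in Python. The incremental cost and survival gain below are hypothetical round numbers chosen only to echo the general magnitude of the figures quoted above; they are not the Canadian model's actual inputs or structure.

```python
# Toy ICER sensitivity sketch. The cost and survival figures below are hypothetical
# round numbers chosen to echo the magnitudes in the article; they are NOT the
# Canadian model's actual inputs.
def icer(extra_cost, life_years_gained, utility_weight):
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    qalys_gained = life_years_gained * utility_weight
    return extra_cost / qalys_gained

EXTRA_COST = 128_000       # assumed incremental cost of the crizotinib strategy
LIFE_YEARS_GAINED = 1.0    # assumed unadjusted survival benefit, in years

for utility in (0.5, 0.9):
    print(f"utility weight {utility}: ${icer(EXTRA_COST, LIFE_YEARS_GAINED, utility):,.0f} per QALY")
# utility weight 0.5 -> $256,000 per QALY; 0.9 -> $142,222 per QALY
```

Raising the assumed quality-of-life weight leaves the cost unchanged but increases the QALYs it buys, which is why the ratio falls so sharply.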

Similarly, as physicians become more familiar with the best use of a new drug and more informed about the diseases it treats, assumptions of a drug's benefit used in an ICER may also change. For example, in the data used in the Canadian study only the 2nd line use of the drug was explored, but benefit in the 1st line setting now seems to be greater. If use of the drug results in more quality-adjusted life years gained, the drug is less expensive per unit.

Likewise, what if our evolving understanding of who and how we test for the ALK rearrangement makes us able to find this gene alteration in a greater percentage of the screened population? Perhaps early testing cost $50,000 to test 50 people to find one case of ALK+ lung cancer. That's a cost of $50,000 to find one treatable case. But if optimized testing procedures are able to discover five cases of ALK+ cancer per every 50 tests, the testing cost per treatable case drops to $10,000.
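The same arithmetic applies to the screening component: the cost attributed to finding each treatable case is simply the total testing cost divided by the number of cases found, so the assumed detection rate moves the number substantially. A minimal sketch using the article's illustrative figures:

```python
# Screening cost per treatable ALK+ case, using the article's illustrative figures.
def cost_per_case(total_testing_cost, cases_found):
    return total_testing_cost / cases_found

print(f"1 case per 50 tests:  ${cost_per_case(50_000, 1):,.0f} per treatable case")
print(f"5 cases per 50 tests: ${cost_per_case(50_000, 5):,.0f} per treatable case")
```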

As each component in the overall calculation is explored, Camidge and Atherly show that the resulting ICER can change to a greater or lesser extent.

"Something that might seem clear-cut from the outside really gets tricky and much less definite when you pull it apart," Atherly says. "The cost per unit of health that is used to determine if a drug is or isn't used seems like an unequivocal fact, but is often highly equivocal."

"With multiple other examples of giving specific targeted drugs to specific molecular subtypes of disease occurring, it is becoming vitally important to accurately address the health economics of these personalized medicine scenarios. For if we don't address the feasibility of actually delivering these breakthroughs to patients in the real world they will not be breakthroughs at all," they write.


Story Source:

The above story is based on materials provided by University of Colorado Cancer Center. Note: Materials may be edited for content and length.


Journal Reference:

  1. Camidge DR, Atherly AJ. Buyer beware: understanding the assumptions behind health economic assessments in personalized cancer care. Journal of Thoracic Disease, December 2014. DOI: 10.3978/j.issn.2072-1439.2014.12.41

 

Toward the next biofuel: Secrets of Fistulifera solaris

 

February 2, 2015

American Society of Plant Biologists

Biofuels are an attractive alternative to fossil fuels, but a key challenge in efforts to develop carbon-neutral, large-scale methods to produce biofuels is finding the right organism for the job. One emerging candidate is the microalga Fistulifera solaris. An international collaboration of scientists has revealed the genome of F. solaris and provided exciting hints at the roots of its ability to grow and produce oil at the same time.


Biofuels made from plant-produced oils are an attractive alternative to fossil fuels. However, the enormous amount of arable land needed for production and the competition between their uses as food/feed and fuel present obstacles to the production of biofuels from crops. These considerations have led to a focus on microalgae as oil producers. Microalgae are tiny photosynthetic organisms found in both ocean water and freshwater. They grow quickly in liquid culture and can produce high levels of oils. In fact, the omega-3 fatty acids present in fish are actually produced by microalgae that are eaten by the fish. Institutions throughout the world have generated collections of wild microalgae in efforts to find species with desirable characteristics.

One such microalga is a species of diatom called Fistulifera solaris, which is emerging as a promising candidate for next-generation biofuel technology. Diatoms are microscopic algae that are major contributors to marine ecosystems; they are also the basis of diatomaceous earth, which is used by gardeners as a natural pest deterrent. Not only does F. solaris grow quickly and produce high levels of oils, it does both at the same time, unlike other oil-producing microalgae that produce their highest amounts of oil at stages when they grow slowly, if at all. These characteristics make F. solaris an excellent candidate for batch culture (see figure) to produce biomass from which oil for biofuels can be harvested.

F. solaris was originally isolated from samples taken at the junction of two rivers in Japan. A collaboration of scientists in Japan and France aimed to elucidate the molecular underpinnings of simultaneous growth and oil production by sequencing the genome of F. solaris and also cataloguing the transcriptome -- providing a read-out of all genes expressed at a given time. Lead scientist Dr. Tsuyoshi Tanaka of the Division of Biotechnology and Life Science in the Institute of Engineering at Tokyo University of Agriculture and Technology highlights the need for this information, saying "Biofuel production using photosynthetic organisms such as microalgae is one of the most promising approaches to generating sustainable energy. However, the molecular functions of organisms such as oleaginous microalgae remain unclear, thus hampering efforts to improve productivity."


Story Source:

The above story is based on materials provided by American Society of Plant Biologists. Note: Materials may be edited for content and length.


Journal Reference:

  1. Tsuyoshi Tanaka, Yoshiaki Maeda, Alaguraj Veluchamy, Michihiro Tanaka, Heni Abida, Eric Maréchal, Chris Bowler, Masaki Muto, Yoshihiko Sunaga, Masayoshi Tanaka, Tomoko Yoshino, Takeaki Taniguchi, Yorikane Fukuda, Michiko Nemoto, Mitsufumi Matsumoto, Pui Shan Wong, Sachiyo Aburatani, Wataru Fujibuchi. Oil Accumulation by the Oleaginous Diatom Fistulifera solaris as Revealed by the Genome and Transcriptome. The Plant Cell Online, 2015; tpc.114.135194 DOI: 10.1105/tpc.114.135194

 

New technique doubles the distance of optical fiber communications

 

February 3, 2015

University College London - UCL

A new way to process fibre optic signals could double the distance at which data travels error-free through transatlantic submarine cables. The new method has the potential to reduce the costs of long-distance optical fibre communications as signals wouldn't need to be electronically boosted on their journey, which is important when the cables are buried underground or at the bottom of the ocean.


Optical fiber (stock image).

A new way to process fibre optic signals has been demonstrated by UCL researchers, which could double the distance at which data travels error-free through transatlantic submarine cables.


As the technique can correct the transmitted data if they are corrupted or distorted on the journey, it could also help to increase the useful capacity of fibres. This is done right at the end of the link, at the receiver, without having to introduce new components within the link itself. Increasing capacity in this way is important: optical fibres carry 99% of all data, demand driven by growing use of the internet is outstripping the fibres' current capacity, and changing the receivers is far cheaper and easier than re-laying cables.

To cope with this increased demand, more information is being sent using the existing fibre infrastructure with different frequencies of light creating the data signals. The large number of light signals being sent can interact with each other and distort, causing the data to be received with errors.

The study, published in Scientific Reports today and sponsored by the EPSRC, reports a new way of improving the transmission distance by undoing the interactions that occur between different optical channels as they travel side-by-side over an optical cable.

Study author Dr Robert Maher (UCL Electronic & Electrical Engineering), said: "By eliminating the interactions between the optical channels, we are able to double the distance signals can be transmitted error-free, from 3190km to 5890km, which is the largest increase ever reported for this system architecture. The challenge is to devise a technique to simultaneously capture a group of optical channels, known as a super-channel, with a single receiver. This allows us to undo the distortion by sending the data channels back on a virtual digital journey at the same time."

The researchers used a '16QAM super-channel' made of a set of frequencies which could be coded using amplitude, phase and frequency to create a high-capacity optical signal. The super-channel was then detected using a high-speed super-receiver, and new signal processing techniques developed by the team enabled all of the channels to be received together without error. The researchers will now test their new method on denser super-channels commonly used in digital cable TV (64QAM), cable modems (256QAM) and Ethernet connections (1024QAM).
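For readers unfamiliar with the modulation format, the short Python sketch below builds a generic 16QAM constellation, showing how each symbol packs four bits into the amplitude and phase of one carrier; a super-channel packs several such carriers at neighbouring frequencies. This is an illustration of the format only, not the UCL team's joint-reception or distortion-reversal processing.

```python
# Minimal sketch of a 16QAM constellation: each symbol encodes 4 bits in the
# amplitude and phase (in-phase/quadrature components) of one optical carrier.
# Generic illustration only; not the UCL receiver's actual signal processing.
import math

levels = [-3, -1, 1, 3]                                   # normalized I and Q amplitudes
constellation = [complex(i, q) for i in levels for q in levels]
avg_power = sum(abs(s) ** 2 for s in constellation) / len(constellation)
constellation = [s / math.sqrt(avg_power) for s in constellation]  # unit average power

bits_per_symbol = int(math.log2(len(constellation)))
print(len(constellation), "symbols,", bits_per_symbol, "bits per symbol")  # 16 symbols, 4 bits
```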

Study author Professor Polina Bayvel (Electronic & Electrical Engineering), who is Professor of Optical Communications and Networks and Director of UNLOC, said: "We're excited to report such an important finding that will improve fibre optic communications. Our method greatly improves the efficiency of transmission of data -- almost doubling the transmission distances that can be achieved, with the potential to make significant savings over current state-of-the-art commercial systems. One of the biggest global challenges we face is how to maintain communications with demand for the Internet booming -- overcoming the capacity limits of optical fibre cables is a large part of solving that problem."

The future of holographic video

 

 

A waveguide device for a holographic video monitor under construction at BYU.

Holographic video displays, featuring three-dimensional images, are about to "go large" and become a lot more affordable at the same time, thanks to the work of a team of Brigham Young University (BYU) researchers and their collaborators at Massachusetts Institute of Technology (MIT).

It's all about manipulating light. There are three primary ways to do this: reflection, refraction and diffraction. In this case, diffraction is the key, because patterns of lines -- of almost any type -- can bend and filter light.

In the journal Review of Scientific Instruments, from AIP Publishing, the team reports using surface acoustic waves as a dynamic pattern of lines to control light's angle and color composition.

How does it work? The magic happens on the surface of a special crystal called lithium niobate (LiNbO3), which boasts excellent optical properties. Beneath the surface of the LiNbO3, microscopic channels, or "waveguides," are created to confine light passing through. A metal electrode deposited onto each waveguide then generates surface acoustic waves.

The resulting frequency division of color enables a new type of color display. This means that "for a wavelength display, we don't need to rely on color filter wheels or dedicated red and blue pixels," explained Daniel E. Smalley, assistant professor of electrical engineering at BYU, who first reported an advance in this realm in Nature in 2013, while he was a graduate student working at MIT with his advisor V. Michael Bove.

Instead of a color wheel, any color combination is possible with their approach simply by altering the frequency of the signal sent to the "white waveguide pixel." In other words, Smalley said, "we can color the output of our display by 'coloring' the frequencies of the drive signal."
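As a rough, order-of-magnitude illustration of why a drive frequency can stand in for a colour, the sketch below treats the surface acoustic wave as a simple diffraction grating whose period is the acoustic velocity divided by the drive frequency. The SAW velocity and drive frequency are assumed round numbers, and the real device relies on guided-wave leaky-mode coupling rather than this thin-grating picture.

```python
import math

# Rough sketch: model the surface acoustic wave as a diffraction grating of period
# v/f and compute the first-order diffraction angle, sin(theta) = lambda / (v/f).
# Values are assumed, order-of-magnitude numbers; the actual device uses
# guided-wave leaky-mode coupling in lithium niobate, not a simple thin grating.
SAW_VELOCITY = 4000.0  # m/s, assumed surface acoustic wave speed on LiNbO3

def first_order_angle_deg(drive_freq_hz, optical_wavelength_m):
    grating_period = SAW_VELOCITY / drive_freq_hz          # acoustic wavelength
    return math.degrees(math.asin(optical_wavelength_m / grating_period))

for colour, wavelength in [("red", 633e-9), ("green", 532e-9), ("blue", 450e-9)]:
    angle = first_order_angle_deg(400e6, wavelength)       # 400 MHz drive (assumed)
    print(f"{colour}: deflected ~{angle:.2f} degrees")
```

Changing the drive frequency changes the grating period, and with it which wavelength is steered to a given angle, which is the sense in which the drive signal "colours" the output.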

"As a bonus, this interaction also rotates the polarization of the signal light so that we can use a polarizer to eliminate any noise in the system," he added.

In terms of applications, the team's technology adapts and combines techniques from telecom and integrated optics in a way that makes it much less expensive than previous approaches. "We can use this technology to make simple and inexpensive color waveguide displays -- including inexpensive holographic video displays," Smalley pointed out. "This can drop the cost of a holographic video display from tens of thousands of dollars to less than a thousand."

Holograms are meant to be large. Now that there's a simple and inexpensive color display technology, Smalley and colleagues are working on ways to use it to create large holographic video displays -- on the scale of room-sized displays.


Story Source:

The above story is based on materials provided by American Institute of Physics (AIP). Note: Materials may be edited for content and length.


Journal Reference:

  1. Andre Henrie, Benjamin Haymore, and Daniel E. Smalley. Frequency Division Color Characterization Apparatus for Anisotropic Leaky Mode Light Modulators. Review of Scientific Instruments, February 3, 2015 DOI: 10.1063/1.4906329

 

Industrial pump inspired by flapping bird wings

 

 


When a fluid is squeezed and expanded repeatedly between two sawtooth-like boundaries, a net flow is generated to the right.

Birds are unwitting masters of fluid dynamics -- they manipulate airflow each time they flap their wings, pushing air in one direction and moving themselves in another. Two New York University researchers have taken inspiration from avian locomotion strategies and created a pump that moves fluid using vibration instead of a rotor. Their results will be published February 3, 2015, in the journal Applied Physics Letters, from AIP Publishing.

"When we use a household pump, that pump is very likely a centrifugal pump. It uses a high-speed rotor to move water by throwing it from the pump's inlet to the outlet," explained Benjamin Thiria, who carried out the work in collaboration with Jun Zhang.

Instead of a rotor, Thiria and Zhang's design has teeth. Two asymmetrically sawtoothed panels, placed with their teeth facing each other, create a channel that can rapidly open and close. Water rushes into the channel when it expands and is forced out when it contracts.

"When a fluid is squeezed and expanded repeatedly, the asymmetric boundary forces the fluid to move in one direction," said Zhang. The repeated vibration of the channel drives fluid transport because the asymmetry of the ratchet's teeth makes it easier for the fluid to move with them than against them.

The pump could be particularly useful in industrial situations where machinery is vibrating excessively and therefore operating inefficiently. Because it is powered by vibration, it could capture some of the wasted mechanical energy and instead use it for a productive task like circulating coolant. It would also dampen the noise that vibrating machinery tends to emit.

In the future, Thiria and Zhang hope to find other examples of similar pumps in nature -- such as the human circulatory system -- and use them to further optimize their own design.

"For many years, fluid-structure interaction has been the most important subject for scientists working in fluid physics," said Thiria, who now conducts research at ESPCI ParisTech. "Our pump shows that surprising results come from fluid-structure interaction."


Story Source:

The above story is based on materials provided by American Institute of Physics (AIP). Note: Materials may be edited for content and length.


Journal Reference:

  1. Benjamin Thiria and Jun Zhang. Ratcheting Fluid with Geometric Anisotropy. Applied Physics Letters, February 2, 2015 DOI: 10.1063/1.4906927

 

Dear Colleague Letter: SEES: Interactions of Food Systems with Water and Energy Systems (nsf15040)

 

NSF 15-040

Dear Colleague Letter: SEES: Interactions of Food Systems with Water and Energy Systems

February 2, 2015

Dear Colleagues:

NSF established the Science, Engineering, and Education for Sustainability (SEES) investment area in 2010 to lay the research foundation for decision capabilities and technologies aimed at mitigating and adapting to environmental changes that threaten sustainability. SEES investments advance a systems-based approach to understanding, predicting, and reacting to stress upon and changes in the linked natural, social, and built environments. In this context, the importance of understanding the interconnected and interdependent systems involving food, energy, and water (FEW) has emerged. Through this Dear Colleague Letter (DCL), the NSF aims to accelerate fundamental understanding and stimulate basic research on systems that extend beyond the interests of the SEES Water Sustainability and Climate (WSC) program to include couplings to energy and food systems where the NSF already has an established presence.

Water and energy are critical for agriculture and food production. In addition, many factors - including changing land-use practices; increased urbanization; population growth and distribution; changing demand and consumer preferences; water contamination; and climate variability - create stresses on water, energy, and agriculture resources and systems in multiple and sometimes unexpected ways. These multifaceted interactions among food, energy, and water systems function according to fundamental scientific principles that govern the coupling of various physical, chemical, biological and social processes. There is a critical need to enhance understanding of the couplings within these complex systems and how they determine the systems-level response of the FEW system. There is also a need for basic research to enable foundational technologies critical to the safety, security, productivity, and resilience of the FEW system, and to pursue sustained cyberinfrastructure (data, software, and computational resources) that will support these activities and advances. The NSF supports basic research in nearly all key scientific and engineering disciplines which can further the understanding of these physical, chemical, biological, and social interactions, as well as the integration of heterogeneous data and uncertainties. In addition, the NSF supports the building of knowledge and educational advances to foster a broad and diverse next generation workforce.

The NSF defines the FEW system very broadly, incorporating physical processes (such as new technologies for more efficient resource utilization), natural processes (such as biogeochemical and hydrologic cycles), biological processes (such as agroecosystem structure and productivity), social/behavioral processes (such as decision making and governance), and cyber elements. Understanding these complex, dynamic coupled systems will require new or enhanced partnerships across many disciplinary research communities.

The NSF requests innovative proposals in the form of (1) supplements, to build upon existing NSF-funded research activities; or (2) conferences of typically 30-80 attendees that stimulate debate, discussion, visioning and collaboration across research communities, and enable a higher appreciation, visualization and understanding of food systems and their couplings to energy and water systems. Such conferences are typically identified as "workshops" and will hereafter be referred to as simply "workshops". All NSF Directorates/Offices listed below are interested in receiving inquiries. These proposals should address the coupled nature of the food, energy, and water system and the interdisciplinary dimensions of physical, natural, biological, cyber, and social/behavioral processes of relevance.

Workshop proposals should facilitate and enable interdisciplinary partnerships among natural science, physical science, social science, computing and engineering researchers and develop innovative, interdisciplinary research approaches to understanding the FEW system. Workshop projects should culminate in deliverable white papers that define scientific, engineering and data challenges in understanding the FEW system. In addition to academic researchers, workshop participants may include scientists, engineers, educators, and practitioners from industry, local, state, and federal agencies (e.g. EPA, DOE, USDA, USGS, NOAA). For any potential NSF follow-on effort in the FEW system, NSF anticipates some Federal agency partner participation.

Workshop proposals may be submitted to any appropriate program, with prior approval of the program's manager, and may be additionally discussed by other relevant programs. Prior to submitting a proposal the PI must contact one of the individuals listed below to ensure that the proposal fits the goals of this DCL. PIs will then be directed to appropriate Program Directors for submission through the normal submission process outlined in the NSF Grants Proposal Guide. Workshop proposal budgets must be less than a total of $100,000. The title of workshop proposals submitted under this DCL should begin with "FEW."

Supplements to existing NSF active grants may be proposed with prior permission of the appropriate managing Program Officer. These requests must enhance existing projects by incorporating or exploring the concepts described in this DCL. For example, a project focusing on energy and water might propose to add a component related to food production. In addition, proposed supplements may provide an opportunity to broaden the project's interdisciplinary dimensions to incorporate physical, natural, biological, cyber, and social/behavioral processes of relevance. All supplement requests must include costs associated with use of facilities or other infrastructure. Supplements whose focus is to foster and strengthen interaction among scientists, engineers, and educators, to advance research or education in the FEW system, across disciplinary, organizational, geographic, and international boundaries, will also be considered.

Workshop proposals and supplement requests must be submitted by March 30, 2015, for consideration. Workshop proposals should focus their activities and deliverables in the September to December 2015 timeframe. For supplements that foster new collaborations and partnerships to address interdisciplinary topics, PIs are strongly encouraged to schedule initial activities during 2015. Proposals or requests where PIs have not contacted the relevant program officers, as described in this DCL, will be returned without consideration.

MPS will also consider EAGERs following specific discussion with the MPS point of contact below. Investigators are encouraged to review the six "bottleneck" areas of research identified in the July 2014 report of the Mathematical and Physical Sciences Advisory Committee - Subcommittee on Food Systems, "Food, Energy and Water: Transformative Research Opportunities in the Mathematical and Physical Sciences." This report can be found at: http://www.nsf.gov/mps/advisory/mpsac_other_reports/nsf_food_security_report_review_final_rev2.pdf

Points of contact for participating Directorates:

ENG: JoAnn Lighty, FEW Working Group co-Chair
Division Director
Division of Chemical, Bioengineering, Environmental, & Transport Systems

GEO: Thomas Torgersen, FEW Working Group co-Chair
Program Officer, Division of Earth Sciences

BIO: Alan Tessier
Deputy Division Director (Acting), Division of Environmental Biology

SBE: Leah Nichols
Program Officer, Division of Behavioral and Cognitive Sciences

OIIA: Audrey Levine
Program Officer, Experimental Program to Stimulate Competitive Research

MPS: Colby Foss
Program Officer, Division of Chemistry

CISE: David Corman
Program Officer, Division of Computer and Network Systems

EHR: Amy Chan Hilton
Program Officer, Division of Undergraduate Education

Long-lasting, water-based nuclear battery developed

 


A new nuclear-powered, water-based battery may one day be used as a dependable power supply in vehicles, spacecraft, and other applications

Researchers working at the University of Missouri (MU) claim to have produced a prototype of a nuclear-powered, water-based battery that is said to be both longer-lasting and more efficient than current battery technologies. It may eventually be used as a dependable power supply in vehicles, spacecraft, and other applications where longevity, reliability, and efficiency are paramount.

"Betavoltaics, a battery technology that generates power from radiation, has been studied as an energy source since the 1950s," said associate professor Jae W. Kwon, of the College of Engineering at MU. "Controlled nuclear technologies are not inherently dangerous. We already have many commercial uses of nuclear technologies in our lives including fire detectors in bedrooms and emergency exit signs in buildings."

Utilizing the radioactive isotope strontium-90 to enhance the electrochemical energy produced in a water-based solution, the researchers incorporated a nanostructured titanium dioxide electrode that acts as a catalyst for water decomposition. That is, together with the applied radiation, the catalyst assists the breakdown of water into assorted oxygen compounds.

As a result, when high-energy beta radiation passes through the platinum and the nanoporous titanium dioxide, electron-hole pairs are produced within the titanium dioxide, creating an electron flow and a resultant electric current.

"Water acts as a buffer and surface plasmons created in the device turned out to be very useful in increasing its efficiency," Kwon said. "The ionic solution is not easily frozen at very low temperatures and could work in a wide variety of applications including car batteries and, if packaged properly, perhaps spacecraft."

By no means the first-ever nuclear battery – the NanoTritium device from City Labs being one recent notable example – this is the first nuclear battery produced to exploit the inherent advantages of radiolysis (water-splitting with radiation) to generate an electric current, at higher energy levels and lower temperatures than previously possible, and at much greater claimed efficiencies than other water-splitting energy production techniques.

This is because, unlike other photocatalytic methods of water-splitting used to produce energy, the high-energy beta radiation in the MU device produces free radicals in water whose kinetic energy is recombined or trapped in the water molecules, so that the radiation can be converted into electricity by the platinum/titanium dioxide electrode described above, achieving efficient water splitting at room temperature.

By comparison, while solar cells use a similar mechanism to transfer energy via electron-hole pairs, they produce very few free radicals because photon energies are principally in the visible spectrum and consequently at lower energy levels.

Beta radiation produced by the strontium source, on the other hand, with its ability to enhance the chemical reactions involving free radicals at greater electron energy levels, is a much more efficient way to produce extremely long-lasting and reliable energy. So much so, that the water-based nuclear battery may well offer a viable alternative to the solar cell as a sustainable, low-pollution energy source.
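To get a feel for why a strontium-90 source lends itself to long-lasting power, the back-of-the-envelope sketch below uses standard textbook values (a half-life of roughly 28.8 years and a molar mass of about 90 g/mol) to estimate its decay rate and how slowly the source depletes; none of these figures come from the MU paper.

```python
import math

# Back-of-the-envelope sketch (generic textbook values, not figures from the MU paper):
# strontium-90's ~28.8-year half-life is what makes a betavoltaic source long-lasting.
HALF_LIFE_S = 28.8 * 365.25 * 24 * 3600   # ~28.8 years in seconds
AVOGADRO = 6.022e23
MOLAR_MASS = 90.0                          # g/mol, approximate for Sr-90

decay_const = math.log(2) / HALF_LIFE_S    # decays per atom per second
atoms_per_gram = AVOGADRO / MOLAR_MASS
activity_bq_per_g = decay_const * atoms_per_gram
print(f"specific activity ~{activity_bq_per_g:.2e} Bq per gram")   # roughly 5e12 Bq/g

for years in (10, 30, 60):
    remaining = 0.5 ** (years / 28.8)
    print(f"after {years} years, ~{remaining:.0%} of the source remains")
```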

The MU team’s research was published in the journal Nature.

Source: University of Missouri

 

Flexible graphene-based LED clears the way for flexible displays

 

 


Researchers have created a new graphene-based flexible LED display prototype (not pictured) that is incredibly thin and bright and could be used in next-gen mobile phones, tablets and TVs (Image: Shutterstock)

Researchers from the University of Manchester and University of Sheffield have developed a new prototype semi-transparent, graphene-based LED device that could form the basis of flexible screens for use in the next generation of mobile phones, tablets and televisions. The incredibly thin display was created using sandwiched "heterostructures", is only 10-40 atoms thick, and emits a sheet of light across its entire surface.

The University of Manchester has a long history of working with graphene, with Sir Andre Geim and Sir Kostya Novoselov first isolating the material of single-atom thickness at the University via mechanical exfoliation (using adhesive tape) back in 2004. Since then, research has also branched out into other promising 2D material structures, including boron nitride and molybdenum disulphide.

The culmination of these areas of experimentation is the new 2D LED semiconductor built by a team led by Novoselov using a combination of metallic graphene, insulating hexagonal boron nitride and various semiconducting monolayers. It is this construction using LEDs engineered at an atomic level that allowed the team to produce their breakthrough device. As such, the work shows that graphene (combined with other flexible 2D materials) is not just limited to simple electronic displays, but could be exploited to create light emitting devices that are not only incredibly thin, but flexible, semi-transparent, and intrinsically bright.

"By preparing the heterostructures on elastic and transparent substrates, we show that they can provide the basis for flexible and semi-transparent electronics," said Novoselov.

Schematic of the single quantum-well (SQW) heterostructure hBN/GrB/2hBN/WS2/2hBN/GrT/hBN.

This inherent elasticity and translucence also means that the device shows promise as an alternative to current LCD or conventional LED technology, potentially offering everything from modest lighting products to multifaceted graphical displays.

"We envisage a new generation of optoelectronic devices to stem from this work, from simple transparent lighting and lasers and to more complex applications," said Freddie Withers, Royal Academy of Engineering Research Fellow at The University of Manchester.

The heterostructures (or van der Waals heterostructures, to give them their full name) are made by joining different materials, usually in layers, and with the materials joined directly at the atomic level. The heterostructures used in the new device essentially create an electron attractive force that the researchers have used to construct quantum wells to control the movement of electrons and make the device emit light.

In effect, these quantum wells (where electrons and "holes" both see a lower energy in the "well" layer, hence the name) use their special properties to confine charge carriers in thin layers at a quantum level. When the confined electrons and holes recombine and drop to lower energy states, the energy difference is released as photons, that is, as light.
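As a generic, textbook-level illustration of the confinement energies involved, the sketch below computes the levels of an idealized infinite square well a nanometre wide. It uses the free-electron mass and is not a model of the actual hBN/WS2/graphene band structure.

```python
import math

# Minimal sketch of quantum confinement in an idealized infinite square well.
# A textbook illustration of why atomically thin layers quantize carrier energies,
# not a model of the hBN/WS2/graphene heterostructure (real carriers have
# effective masses and finite barriers).
H = 6.626e-34        # Planck constant, J*s
M_E = 9.109e-31      # free-electron mass, kg
EV = 1.602e-19       # joules per electron-volt

def well_energy(n, width_m, mass=M_E):
    """Energy of level n for a particle in an infinite well of the given width."""
    return n ** 2 * H ** 2 / (8 * mass * width_m ** 2)

for n in (1, 2):
    print(f"n={n}: {well_energy(n, 1e-9) / EV:.2f} eV for a 1 nm well")
```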

"The range of functionalities for the demonstrated heterostructures is expected to grow further on increasing the number of available 2D crystals and improving their electronic quality," said Novoselov.

According to the researchers, the reliability and stability of the materials in the prototype device also shows promise for future commercial manufacturing and applications.

"The novel LED structures are robust and show no significant change in performance over many weeks of measurements," added Professor Alexander Tartakovskii, from The University of Sheffield. "Despite the early days in the raw materials manufacture, the quantum efficiency (photons emitted per electron injected) is already comparable to organic LEDs."

The research was recently published in the journal Nature Materials.

Source: University of Manchester

 

Aventicum watch comes with a tiny gold Roman emperor

 

 

The Aventicum uses a mirascope to produce the illusion of a floating emperor


It's not unusual to get a free whistle in a box of cereal, but what about a gold bust of a Roman emperor in a wristwatch? That may sound a bit out there, but upmarket Swiss watchmaker Christophe Claret's Aventicum watch not only has a Roman theme but also features a tiny engraved golden bust of Emperor Marcus Aurelius that seems to float over the center of the dial.

According to Christophe Claret, the inspiration for the Aventicum was Claret's production of a 3D documentary in 2013 about the Avenches Roman Museum's excavations of Aventicum, the capital of the ancient Roman province of Helvetia (Switzerland) and now a stretch of half-buried ruins near the Swiss town of Avenches. Over the decades, the museum has been conducting excavations, and among the treasures in its collection is a gold bust of Marcus Aurelius discovered in 1939 during the cleaning of a pipe. One of only three busts of its type known to exist, it is reproduced as a tiny micro-engraved replica measuring only 3 mm, which forms the centerpiece of the Aventicum watch.

The hologram-like image of the bust is due to a mirascope, which Claret says is the first ever installed in a wristwatch. The mirascope is a common novelty-shop illusion that produces a magnified image seeming to float above a small hole in the center of the dial. It consists of two parabolic mirrors set atop one another to form an elliptical cross section, with a hole in the center of the top section. An object placed in the bottom section appears as an enlarged image in the hole that, though twice as large, is so realistic that it seems possible to touch it.

Gold bust of Emperor Marcus Aurelius found at Aventicum

Because the illusion takes up the space where the watch's hub would normally be, the hands have been replaced by a pair of rings at the outer edge of the dial. These carry carbon-fiber markers that point to the hours and minutes and are counterweighted to maintain precision.

All of this is set in a case of red gold or palladium-rich white gold and anthracite PVD-treated grade 5 titanium that is water resistant to 3 ATM (30 m, 100 ft). The AVE15 automatic movement, with a 4 Hz Swiss escapement, has 28 jewels and 186 parts, and gets its 72-hour power reserve from its twin barrels.

The Aventicum in red gold showing the reverse

On the reverse of the watch is inscribed a saying from Aurelius: "Perfice Omnia facta vitae quasi haec postrema essent" (Perform every act in life as though it were your last), and in the middle is the transparent sapphire winding rotor. The latter carries five numbered Romano-Gaulish racing chariots that spin independently. With the watch held horizontally, a swish of the wrist makes them race around, the first one to pass the A in Aventicum being the winner.

The Aventicum is available in two limited editions, with 68 units in red gold and 38 in white, priced at CHF 49,000 (US$52,800) and CHF 53,000 (US$57,100) respectively.

Source: Christophe Claret SA via A Blog to Watch

 

'Live fast, die young' galaxies lose the gas that keeps them alive

 

This is an image showing galaxy J0836, the approximate location of the black hole residing at the galaxy's core, and the expelled gas reservoir.

Galaxies can die early because the gas they need to make new stars is suddenly ejected, research published today suggests.

Most galaxies age slowly as they run out of raw materials needed for growth over billions of years. But a pilot study looking at galaxies that die young has found some might shoot out this gas early on, causing them to redden and kick the bucket prematurely.

Astrophysicist Ivy Wong, from the University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR), said there are two main types of galaxies: 'blue' galaxies that are still actively making new stars, and 'red' galaxies that have stopped growing.

Most galaxies transition from blue to 'red and dead' slowly after two billion years or more, but some transition suddenly after less than a billion years -- young in cosmic terms.

Dr Wong and her colleagues looked for the first time at four galaxies on the cusp of their star formation shutting down, each at a different stage in the transition.

The researchers found that the galaxies approaching the end of their star formation phase had expelled most of their gas.

Dr Wong said it was initially hard to get time on telescopes to do the research because other astronomers did not believe the dying galaxies would have any gas left to see.

The exciting result means the scientists will be able to use powerful telescopes to conduct a larger survey and discover the cause of this sudden shutdown in star formation.

Dr Wong said it is unclear why the gas was being expelled. "One possibility is that it could be blown out by the galaxy's supermassive black hole," she said.

"Another possibility is that the gas could be ripped out by a neighbouring galaxy, although the galaxies in the pilot project are all isolated and don't appear to have others nearby."

Swiss Federal Institute of Technology Professor Kevin Schawinski said the researchers predicted that the galaxies had to rapidly lose their gas to explain their fast deaths.

"We selected four galaxies right at the time where this gas ejection should be occurring," he said. "It was amazing to see that this is exactly what happens!"

The study appeared in the journal Monthly Notices of the Royal Astronomical Society, published by Oxford University Press.


Story Source:

The above story is based on materials provided by International Centre for Radio Astronomy Research. Note: Materials may be edited for content and length.


Journal Reference:

  1. O. I. Wong, K. Schawinski, G. I. G. Jozsa, C. M. Urry, C. J. Lintott, B. D. Simmons, S. Kaviraj, K. L. Masters. Misalignment between cold gas and stellar components in early-type galaxies. Monthly Notices of the Royal Astronomical Society, 2015; 447 (4): 3311 DOI: 10.1093/mnras/stu2724