Saturday, February 28, 2015

Unlikely friendship

[Photo series: "uncommon friendship" snaps 1, 4, 5, 7, 8 and 9]

The Difference Between Research and Development

Tue, 02/24/2015 - 9:34am

Bradford L. Goldense, President, Goldense Group Inc., Needham, Mass.

 

Figure 1: The Continuum of Research and Development. The already unclear lines separating research from development are getting even blurrier as more companies allocate part of their R&D budget to riskier projects and invest in the infrastructure needed to manage those riskier activities. New products are now being launched out of recently formed "Innovation" organizations, and more are coming from existing "Advanced Development" organizations.

Challenges of "Anywhere" R&D
Several factors have complicated matters for industry observers trying to stay abreast of what might be coming to market by simply paying attention to product development pipelines. These factors include:

  • The changing corporate approaches described above.
  • The desires of developers to bring solutions to market, not just pieces of a solution.
  • The globalization of R&D that has, in effect, decentralized R&D.
  • Naming conventions for organizations that differ by industry and country.

The jury is still out as to whether today's approaches to R&D will prove more productive than historical approaches. Historical approaches to pre-product development generally restricted the scope of activities to reduce uncertainty and improve the predictability of key enabling features, capabilities and technologies—and then turned those enablers over to product development.

Traditional "Analog" R&D
R&D is a continuum—highly analog rather than digital in its construct. In the 20th century, R&D could generally be segmented into four categories: Basic Research, Applied Research, Advanced Development and Product Development. "Skunk Works," perhaps a fifth category, is a discussion for another day.

In Basic Research, discovery targets are broad. Scientists and researchers generally look for capabilities that show some efficacy against a broadly articulated market or customer need. Some Basic Research is truly blue sky, but that has been on the decline the past few decades, as few can afford it. Basic Research often just rules out things that won't work and identifies inventions that might work.

Applied Research generally has a more specific target. It usually focuses on a known problem, business opportunity, or an application area where economic or social improvements are possible. Applied Research generally starts off by taking enablers that might work and attempts to narrow them down to the possible and likely feasible solutions.

Advanced Development generally takes these possible and likely feasible solutions and further reduces the risk, in hopes of culling out the best alternatives to deliver against an expressed target for a capability or feature to incorporate into products. Downstream manufactured-cost considerations start to come into play as a culling criterion. As mentioned earlier, some advanced and innovation organizations now actually bring the entire product to market.

Finally, Product Development invents what's needed "now" and packages both the form and function of all feasible and risk-reduced features and/or capabilities into products planned for release to the marketplace. Venn diagrams of these four historical R&D categories have always overlapped, but the overlaps have grown in step with the growing emphasis on innovation this past decade.

In short, research and advanced development organizations historically didn’t bring products with full form and function to market. Instead, they specialized in experimentation and analysis, combined with breadboards and brassboards to demonstrate feasibility. Form and function was the job of Product Development.

Professionals in these pre-product development organizations were largely senior scientists and engineers who specialized in discovery and in narrowing down feasibility issues. Knowledge of manufacturing capabilities and constraints, packaging, logistics, costing and other "product launch" disciplines was almost exclusively the realm of Product Development. Today, the lines of organizational purpose and demarcation aren't as clear.

"Analog vs. Anywhere" R&D
The idea that any organization can launch a product has advantages. They include:

  • Healthy competition between product-releasing organizations.
  • Broadening the skill sets of all developers.
  • Keeping the entire development community close to the market and customer needs.
  • And perhaps adding flexibility to the management of development capacity.

On the other hand, there are also some disadvantages. The decline of dedicated subject-matter experts may reduce the depth of company capabilities over the long term. When advanced or special innovation organizations get all the sweet projects, it sows dissatisfaction in the rest of the product development community, analogous to the way many feel when asked to focus exclusively on sustaining engineering or maintenance rather than new products. This can lead to unproductive time, often in the form of politicking. As well, some managers subtly alter or overrepresent their functional expertise to attain preferred projects, which can lead to duplication of staff, or to projects over the heads of the people currently in the organization. Companies that are breaking from the analog continuum of research to development must remain vigilant to these possible downsides of change.

An "Analog vs. Anywhere" Example
Hard to internalize the historical R&D continuum?  Let's take the example of a self-piloted robotic lawn mower.  Lawn mowers are the realm of Product Development after decades of designing and producing them. Some advanced work might be necessary if one is trying to get better lift from the blade to push clippings into the collection bag. Some advanced work might be necessary for engines that have to operate regularly in extreme conditions, and/or with changing regulations on fuel chemistry. This past decade, significant advanced work has been necessary to develop all-wheel turning/steering for riding mowers. But, compared to those examples, a self-piloted lawn mower requires advanced work on just about every subsystem of the mower platform. 

Energy management, torque in a scaled-down size, handling the mower, and fail-safing while it is running and piloting itself are just a few of the robomower's challenges. Piloting is a good example to make the point. Steering a robomower isn't as easy as steering a robotic vacuum cleaner that changes direction when it bumps into something. The vegetable garden and flower bed might get half mowed before the robomower bumps into anything hard enough to redirect it, and that wouldn't be good; the same applies to pets and curious children, to say the least. Advanced development organizations are still culling among feasible alternatives for defining the edges of yards and redirecting the mower. Impact sensing, infrared or heat sensing, current or magnetic sensing (the "doggie wire"), GPS, directly programming the yard coordinates as part of user set-up, photographic or image-driven guidance, and several others are all feasible alternatives. Can you imagine the project schedule if Product Development had to cull out the best piloting approach while designing the rest of a mower fully integrated with the piloting system? Applied Research and Advanced Development organizations could work for years to determine the most accurate and economical piloting technology for a robotic mower platform, across the myriad of yard configurations and landscaping approaches, before anything was ready for wheels and a blade.

Conclusions
Today, the robomower might be brought to market by either an Advanced Development or a Product Development organization, rather than Advanced Development determining the best piloting technique and Product Development then bringing the robomower to market. As well, an entire robomower might be the responsibility of an Innovation organization or a Skunk Works.

Warfarin side effects: Watch for interactions

Although commonly used to treat blood clots, warfarin (Coumadin, Jantoven) can have dangerous side effects or interactions that can place you at risk of bleeding. Here are precautions to take to avoid warfarin side effects.

By Mayo Clinic Staff

If you've been prescribed warfarin (Coumadin, Jantoven) to prevent blood clots, you probably already know that this powerful drug can save your life if you are at risk of or have had blood clots. But you may not realize how serious warfarin side effects can be.

Warfarin, especially if taken incorrectly, increases your risk of dangerous bleeding. Warfarin side effects can also include interactions with some foods, prescription medicines and over-the-counter supplements.

If your doctor prescribes warfarin for you, make sure you understand all the potential warfarin side effects and interactions it could have.

When is warfarin prescribed?

You might be given warfarin if you have:

  • A blood clot in or near your heart that could trigger stroke, heart attack or organ damage
  • A blood clot in your lungs (pulmonary embolism)
  • A blood clot elsewhere in your body (venous thrombosis)
  • A high risk of blood clots forming in the heart, which can be a complication of some heart rhythm abnormalities (arrhythmias)
  • A mechanical artificial heart valve that is prone to forming blood clots

What warfarin side effects should you watch for?

When you take warfarin, your blood won't clot as easily. If you accidentally cut yourself while taking warfarin, you may bleed heavily. However, the risk of a major bleeding event is low.

You're more likely to have bleeding problems if you're older than 75 or take other blood-thinning medications that can further increase your bleeding risk.

You're also at higher risk of bleeding problems if you have:

  • High blood pressure (hypertension)
  • A history of stroke
  • Kidney problems
  • Cancer
  • Alcoholism
  • Liver disease

Some studies suggest that bleeding problems are more likely to occur during the first month of taking warfarin rather than later in treatment.

Warfarin side effects that require immediate medical attention
  • Severe bleeding, including heavier than normal menstrual bleeding
  • Red or brown urine
  • Black or bloody stool
  • Severe headache or stomach pain
  • Joint pain, discomfort or swelling, especially after an injury
  • Vomiting of blood or material that looks like coffee grounds
  • Bruising that develops without an injury you remember
  • Dizziness or weakness

Rarely, warfarin can cause the death of skin tissue (necrosis). This complication occurs most often three to eight days after you start taking warfarin. If you notice any sores, changes in skin color or temperature, or severe pain on your skin, seek immediate medical care.

Less serious warfarin side effects to tell your doctor about
  • Bleeding from the gums after you brush your teeth
  • Swelling or pain at an injection site
  • Heavier than normal menstrual bleeding or bleeding between menstrual periods
  • Diarrhea, vomiting or inability to eat for more than 24 hours
  • Fever

What precautions can you take against warfarin side effects?

To reduce your chance of developing warfarin side effects:

  • Tell your doctor about any other medications or supplements you take. Many medications and supplements can have a dangerous interaction with warfarin. If you receive a new prescription from someone other than your usual care provider, ask if you'll need additional blood tests to make sure the new medication isn't affecting your blood clotting.
  • Tell care providers you take warfarin before you have any medical or dental procedures. It's important to share this information even before minor procedures, such as vaccinations and routine dental cleanings. If you're going to have surgery, you may need to decrease or discontinue your warfarin dose at least five days before the procedure. Your doctor might prescribe a shorter acting blood thinner, heparin, while you're not taking warfarin.
  • Avoid situations that increase your risk of injury. Contact sports and other activities that could result in head injury should be avoided. Tell your doctor if you are unsteady while walking or have a history of falling.
  • Use safer hygiene and grooming products. A soft-bristle toothbrush, waxed dental floss and an electric razor for shaving can help prevent bleeding.
  • Consider wearing a bracelet or carrying a card that says you take warfarin. This identification can be useful if emergency medical providers need to know what medications you take.
  • Consider a warfarin sensitivity test. A significant number of people who take warfarin are at a higher risk of bleeding because their genes make them more sensitive to warfarin. If a family member experienced side effects from warfarin, talk to your doctor about taking a genetic warfarin sensitivity test. The test can determine if you have the genes that can increase your risk of bleeding.

What drugs and supplements can interact with warfarin?

Like any other medication, warfarin can interact with foods, other drugs, vitamins or herbal supplements. The interaction might lower the effectiveness of warfarin or increase your risk of bleeding. More than 120 drugs and foods that can interact with warfarin have been identified.

Drugs that can interact with warfarin include:

  • Many antibiotics
  • Antifungal medications, such as fluconazole (Diflucan)
  • Aspirin or aspirin-containing products
  • Ibuprofen (Advil, Motrin, others) or naproxen sodium (Aleve, Anaprox)
  • Acetaminophen (Tylenol, others) or acetaminophen-containing products
  • Cold or allergy medicines
  • Medications that treat abnormal heart rhythms, such as amiodarone (Cordarone, Pacerone)
  • Antacids or laxatives

Many other medications interact with warfarin. Be sure to tell any health care provider or pharmacist that you take warfarin.

Supplements that can interact with warfarin include:

  • Coenzyme Q10 (ubidecarenone)
  • Dong quai
  • Garlic
  • Ginkgo biloba
  • Ginseng
  • Green tea
  • St. John's wort
  • Vitamin E

Many other supplements can interact with warfarin. Be sure to tell your health care provider about any supplements you take.

What foods and drinks might interact with warfarin?

Foods and drinks that might interact with warfarin include:

  • Cranberries or cranberry juice
  • Alcohol
  • Foods that are high in vitamin K, such as soybean and canola oils, spinach and broccoli
  • Garlic
  • Black licorice

Your doctor might recommend keeping the level of vitamin K in your diet consistent rather than avoiding vitamin K-rich foods altogether.

What should you do if you forget a dose?

Never take a double dose of warfarin. Doing so could greatly increase your risk of side effects.

If you miss a dose, take it as soon as you remember. If you don't remember until the next day, call your doctor for instructions. If your doctor isn't available, skip the missed dose and start again the next day.

If you follow your doctor's dosing instructions and tell all your health care providers that you take warfarin, you'll be at a much lower risk of dangerous interactions and side effects. Talk to your doctor, nurse or pharmacist if you have any concerns about warfarin.

Computing for Better Radiation Therapy

February 19, 2015

Contact: Matthew Mille
(301) 975-5590


A photograph of the free-air chamber used for the physical measurements (left) and the corresponding model used in the computational simulations (right).

Doctors devising a plan of attack on a tumor may one day gain another tactical advantage thanks to a series of sophisticated calculations proposed by PML's Dosimetry group. Their results could serve as a foundation for the next generation of software that clinicians use to design courses of radiation treatment for electronic brachytherapy patients.

Electronic brachytherapy is a relatively new cancer treatment similar to conventional brachytherapy, in which radioactive seeds are placed in or near a patient to fight a tumor. In this case, however, the source of radiation is a miniature x-ray tube just a few millimeters in size. Since the tubes are driven by an applied voltage rather than a radioactive source, they can be turned on and off as needed, allowing clinicians to minimize dose to patients and medical staff.

Currently, doctors use software to decide the best placement for an x-ray tube in a way that maximizes dose to the tumor and minimizes it elsewhere. This software makes several simplified assumptions about radiation dose to the body. For example, traditional brachytherapy dosimetry protocols do not account for the effects of iodine contrast or for the differences in how the radiation interacts with soft tissue and bone. For electronic brachytherapy, these issues can sometimes be significant, resulting in possible discrepancies between the planned dose and the dose that is actually delivered to a patient.

To remedy this situation, PML researchers will perform a series of simulations using state-of-the-art Monte Carlo methods, which rely on a collection of powerful computational algorithms and are often used to model complex physical and mathematical systems. The simulations will incorporate highly detailed generic models of the human body, allowing researchers to study the differences in the absorbed dose received by different organs and tissues.

“Monte Carlo dose calculations are currently considered the de facto gold-standard method for performing patient dosimetry,” says PML postdoctoral researcher Matthew Mille, who will be conducting the work. “However, these calculations are much too slow for routine clinical applications.” By performing the simulations themselves, the team hopes to provide researchers with the information they need to develop better software tools for physicians, which will allow for more accurate treatment plans for electronic brachytherapy patients.

Before making these calculations, however, Mille will use Monte Carlo methods to assess the performance of the detection system used to make top-level measurements of the radiation emitted by the x-ray tubes.*

For these measurements, the scientists set up an experiment where a tube emits x-rays that are collected by an ionization chamber, in which the interaction of the air and radiation creates a charge in a process known as ionization.

But some of the electrical signal picked up by the detector comes from processes distinct from the radiation emitted by the tube, while other aspects of the signal are lost due to natural processes.** Using Monte Carlo simulations, researchers can calculate how much unwanted ionization contributes to the measured signal.

They use the computed additions and losses as the basis for determining a series of so-called “correction factors,” which are then applied to the measurements. These correction factors help doctors ensure that the electronic brachytherapy sources give patients the intended dose.
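As a rough illustration of how such correction factors enter a final result, the sketch below multiplies a raw chamber reading by a set of simulated correction ratios. It is only a schematic: the factor names and numerical values are hypothetical, not NIST's actual factors or analysis pipeline.

```python
# Hypothetical sketch: applying Monte Carlo-derived correction factors to a
# free-air chamber reading. Factor names and values are illustrative only.

corrections = {
    "electron_loss": 1.012,   # compensates for signal lost when electrons escape the collection region
    "bremsstrahlung": 0.996,  # removes signal from bremsstrahlung that should not be counted
    "scatter": 0.998,         # removes ionization from photons scattered inside the chamber
}

def corrected_charge(measured_charge_nC: float) -> float:
    """Multiply the raw measured charge by the product of all correction factors."""
    result = measured_charge_nC
    for factor in corrections.values():
        result *= factor
    return result

raw = 1.250  # nC, a made-up raw reading
print(f"raw = {raw:.4f} nC, corrected = {corrected_charge(raw):.4f} nC")
```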

Unlike previous efforts to determine the correction factors, this latest work will incorporate a more detailed model of the ionization chamber.*** The effort will serve as an independent verification of the older calculations, though the researchers say they anticipate that the new correction factors will be similar to those from earlier results.

-- Jennifer Lauren Lee

*Top-level measurements are performed using primary standard instruments, which are highly accurate and are used to calibrate other instruments.

**For example, some of the x-ray tube’s radiation signal goes missing when electrons escape the charge collection region of the free-air chamber before losing all of their energy. On the other hand, some of the electrical signal picked up by the detector comes from processes that shouldn’t be included in a measurement of the radiation coming from the x-ray source – for example, bremsstrahlung radiation, created when an electron decelerates.

***The previous set of correction factors was based on Monte Carlo calculations performed at the International Bureau of Weights and Measures (BIPM) and incorporated a simplified model of the ionization chamber as a box instead of a cylinder.

Simulating superconducting materials with ultracold atoms

Mon, 02/23/2015 - 11:46am

Jade Boyd, Rice Univ.

 

Rice Univ. physicists trapped ultracold atomic gas in grids of intersecting laser beams to mimic the antiferromagnetic order observed in the parent compounds of nearly all high-temperature superconductors. Image: P. Duarte/Rice Univ.

Using ultracold atoms as a stand-in for electrons, a Rice Univ.-based team of physicists has simulated superconducting materials and made headway on a problem that's vexed physicists for nearly three decades.

The research was carried out by an international team of experimental and theoretical physicists and appears online in Nature. Team leader Randy Hulet, an experimental physicist at Rice, said the work could open up a new realm of unexplored science.

Nearly 30 years have passed since physicists discovered that electrons can flow freely through certain materials—superconductors—at relatively elevated temperatures. The reasons for this high-temperature, or "unconventional" superconductivity are still largely unknown. One of the most promising theories to explain unconventional superconductivity—called the Hubbard model—is simple to express mathematically but is impossible to solve with digital computers.

"The Hubbard model is a set of mathematical equations that could hold the key to explaining high-temperature superconductivity, but they are too complex to solve—even with the fastest supercomputer," said Hulet, Rice's Fayez Sarofim Professor of Physics and Astronomy. "That's where we come in."

Hulet's lab specializes in cooling atoms to such low temperatures that their behavior is dictated by the rules of quantum mechanics—the same rules that electrons follow when they flow through superconductors.

"Using our cold atoms as stand-ins for electrons and beams of laser light to mimic the crystal lattice in a real material, we were able to simulate the Hubbard model," Hulet said. "When we did that, we were able to produce antiferromagnetism in exactly the way the Hubbard model predicts. That's exciting because it's the first ultracold atomic system that's able to probe the Hubbard model in this way, and also because antiferromagnetism is known to exist in nearly all of the parent compounds of unconventional superconductors."

Hulet's team is one of many that are racing to use ultracold atomic systems to simulate the physics of high-temperature superconductors.

"Despite 30 years of effort, people have yet to develop a complete theory for high-temperature superconductivity," Hulet said. "Real electronic materials are extraordinarily complex, with impurities and lattice defects that are difficult to fully control. In fact, it has been so difficult to study the phenomenon in these materials that physicists still don't know the essential ingredients that are required to make an unconventional superconductor or how to make a material that superconducts at even greater temperature."

Hulet's system mimics the actual electronic material, but with no lattice defects or disorder.

"We believe that magnetism plays a role in this process, and we know that each electron in these materials correlates with every other, in a highly complex way," he said. "With our latest findings, we've confirmed that we can cool our system to the point where we can simulate short-range magnetic correlations between electrons just as they begin to develop.

"That's significant because our theoretical colleagues—there were five on this paper—were able to use a mathematical technique known as the Quantum Monte Carlo method to verify that our results match the Hubbard model," Hulet said. "It was a heroic effort, and they pushed their computer simulations as far as they could go. From here on out, as we get colder still, we'll be extending the boundaries of known physics."

Nandini Trivedi, professor of physics at Ohio State Univ., explained that she and her colleagues at the Univ. of California-Davis, who formed the theoretical side of the effort, had the task of identifying just how cold the atoms had to be in the experiment.

"Some of the big questions we ask are related to the new kinds of ways in which atoms get organized at low temperatures," she said. "Because going to such low temperatures is a challenge, theory helped determine the highest temperature at which we might expect the atoms to order themselves like those of an antiferromagnet."

After high-temperature superconductivity was discovered in the 1980s, some theoretical physicists proposed that the underlying physics could be explained with the Hubbard model, a set of equations invented in the early 1960s by physicist John Hubbard to describe the magnetic and conduction properties of electrons in transition metals and transition metal oxides.

Every electron has a "spin" that behaves as a tiny magnet. Scientists in the 1950s and 1960s noticed that the spins of electrons in transition metals and transition metal oxides could become aligned in ordered patterns. In creating his model, Hubbard sought to create the simplest possible system for explaining how the electrons in these materials responded to one another.

The Hubbard model features electrons that can hop between sites in an ordered grid, or lattice. Each site in the lattice represents an ion in the crystal lattice of a material, and the electrons' behavior is dictated by just a handful of variables. First, electrons are disallowed from sharing an energy level, due to a rule known as the Pauli Exclusion Principle. Second, electrons repel one another and must pay an energy penalty when they occupy the same site.
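For reference, the single-band Hubbard Hamiltonian being described here is conventionally written as follows (standard textbook form, not a formula quoted from the Rice paper):

```latex
H = -t \sum_{\langle i,j \rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here t is the amplitude for an electron of spin σ to hop between neighboring lattice sites i and j, and U is the energy penalty paid when a spin-up and a spin-down electron occupy the same site.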

"The Hubbard model is remarkably simple to express mathematically," Hulet said. "But because of the complexity of the solutions, we cannot calculate its properties for anything but a very small number of electrons on the lattice. There is simply too much quantum entanglement among the system's degrees of freedom."

Correlated electron behaviors—like antiferromagnetism and superconductivity—result from feedback, as the action of every electron causes a cascade that affects all of its neighbors. Running the calculations becomes exponentially more time-consuming as the number of sites increases. To date, the best efforts to produce computer simulations of 2- and 3-D Hubbard models involve systems with no more than a few hundred sites.
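The exponential cost comes from the size of the quantum state space: each lattice site can be empty, hold one spin-up electron, one spin-down electron, or both, so an exact description of N sites lives in a Hilbert space of dimension

```latex
\dim \mathcal{H} = 4^{N}, \qquad \text{e.g. } 4^{100} = 2^{200} \approx 1.6 \times 10^{60},
```

which is why exact calculations stop at a handful of sites and approximate methods such as quantum Monte Carlo are needed for anything larger.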

Because of these computational difficulties, it has been impossible for physicists to determine whether the Hubbard model contains the essence of unconventional superconductivity. Studies have confirmed that the model's solutions show antiferromagnetism, but it is unknown whether they also exhibit superconductivity.

In the new study, Hulet and colleagues, including postdoctoral researcher Russell Hart and graduate student Pedro Duarte, created a new experimental technique to cool the atoms in their lab to sufficiently low temperatures to begin to observe antiferromagnetic order in an optical lattice with approximately 100,000 sites. This new technique results in temperatures on the lattice that are about half of that obtained in previous experiments.

"The standard technique is to create the cold atomic gas, load it into the lattice and take measurements," Hart said. "We developed the first method for evaporative cooling of atoms that had already been loaded in a lattice. That technique, which uses what we call a 'compensated optical lattice,' also helped control the density of the sample, which becomes critical for forming antiferromagnetic order."

Hulet said a second innovation was the team's use of the optical technique called Bragg scattering to observe the symmetry planes that are characteristic of antiferromagnetic order.

He said the team will need to develop an entirely new technique to measure the electron pair correlations that cause superconductivity. And they'll also need colder samples, about 10 times colder than those used in the current study.

"We have some things in mind," Hulet said. "I am confident we can achieve lower temperatures both by refining what we've already done and by developing new techniques. Our immediate goal is to get cold enough to get fully into the antiferromagnetic regime, and from there we'd hope to get into the d-wave pairing regime and confirm whether or not it exists in the Hubbard model."

Source: Rice Univ.

Subterranean roads proposed for greener London

 


Proposals have been unveiled that would see roads in London covered to make room for development and green space above ground


Fresh from being the subject of an idea to move its pedestrians and cyclists below ground, London might now see the same happen to its motorists. New proposals would see a number of the city's roads buried. The aim is to create new space above ground and make the city greener and more pleasant.

The proposal was announced by Mayor of London Boris Johnson during a recent visit to Boston, US, where a similar project has taken place. The Central Artery-Tunnel Project (nicknamed the "Big Dig") saw Boston's existing six-lane elevated Central Artery replaced with an eight-lane underground highway.

According to the Massachusetts Department of Transportation, the Big Dig was the "largest, most complex, and technologically challenging highway project in the history of the United States." It is said to have significantly reduced congestion, improved the environment and led to regeneration above ground.

The Greater London Authority says that over 70 sites across London have been considered for the potential introduction of tunnels, fly-unders and decking, the renderings for some of which can be viewed in our gallery.

Among the suggestions, a fly-under has been proposed for the A4 in Hammersmith that would reconnect the town center with the River Thames. A mini tunnel has been proposed for the A13 in Barking Riverside, meanwhile, on the basis that it could open up a significant amount of land for future development and reconnect parts of the local area.

The proposed layout of the A3 in Tolworth

Decking over a section of the A3 in Tolworth would create land for new homes and connect the area adjacent to the new Crossrail station with the rest of the borough. And the addition of decking or a mini tunnel to the A406 in New Southgate would also create space for new homes and connect the surrounding area to the new Crossrail station.

Elsewhere, there is discussion about a potential replacement for the London Inner Ring Road. Johnson believes that an inner orbital tunnel or two cross-city tunnels could help to deliver more efficient and reliable movement of vehicles around the city.

Transport for London will now work with local London boroughs to develop the proposal. Costs and funding options will be explored before the proposal is revisited in May of this year.

Sources: Greater London Authority, Massachusetts Department of Transportation

 

New Accelerometer Calibration Service to Start Soon

February 19, 2015

Contact: Richard Allen
(301) 975-5026

In a few weeks, NIST will begin offering a new, state-of-the-art calibration service for accelerometers. Based on a technique called laser interferometry, the system will be sensitive enough to detect motion over distances as small as a few nanometers (the size scale of a single molecule) and frequencies up to 50,000 cycles per second (50 kHz).

Accelerometers, which detect changes in motion, are perhaps most familiar as the sensors that tell your mobile phone screen to rotate or determine whether automotive airbags should deploy based on how abruptly the car’s speed changes. They are also the basis of handheld remote controllers for motion games sold by Nintendo and other companies.

But accelerometers have long been essential to the automotive and aerospace industries for identifying unwanted vibrations, to the military for inertial guidance* of submarines and weapons systems, to seismologists who study the motion of the earth, and to facilities such as nuclear power plants to provide early warning that a piece of equipment is about to fail.

“These days, they’re increasingly being used everywhere, not just in smartphones but also in wearable devices such as fitness trackers and watches,” says Michael Gaitan, project leader in NIST’s Nanoscale Metrology Group. “They are what provide the information used by the apps that tell you how much you’re moving around, so you can count your calories.”

Potential uses abound. There is intense interest in combining inertial guidance with GPS to determine position in places where GPS signals cannot reach, such as inside buildings. “They have to be accurate enough that you can use them for a reasonable period of time when a GPS signal is not available,” Gaitan says.

In addition, researchers studying traumatic brain injury need increasingly accurate ways to measure shock, and engineers are examining better ways to embed vibration sensors in bridges and buildings to warn of possibly unsafe conditions.

The utility of these and other applications depends critically on the accuracy of the accelerometers. Calibrating them reveals how well they can measure the distance something moves (the “throw”) over a specific amount of time, how fast it can respond to that change, and whether its readings vary over time, among other considerations.

In NIST’s facility, as in many others, measurements are made on a shaker table – a small platform that is vibrated at specific frequencies over specific distances. For a typical accelerometer about the size of a lipstick tube, a simple test run might take about 10 minutes to step the device from about 5 cycles per second (5 Hz), with a throw of about 8 millimeters, up to 25 kHz (the facility’s initial upper limit for calibration measurements), at which frequency the table moves only a few percent of the width of a human hair.

The table’s motion is continuously monitored – without physical contact – by a downward-pointing laser beam from a device called a vibrometer, which is mounted on a 200-pound base plate above the shaker. The system very accurately monitors the shaker table’s vibration frequency and displacement by detecting the way that laser-light waves, when reflected off the table surface, reinforce or cancel each other back at the vibrometer’s sensors. The data are then compared to the output readings from the accelerometer under test over a span of approximately 1 m/s2 to 100 m/s2.
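The arithmetic behind a single sinusoidal calibration point is simple: for motion x(t) = X·sin(2πft), the peak acceleration is (2πf)²·X, and the device's sensitivity is its output divided by that acceleration. The sketch below assumes the quoted 8 mm throw is the displacement amplitude (it may in fact be peak-to-peak) and uses a made-up voltage reading.

```python
import math

def accel_amplitude(freq_hz: float, disp_amplitude_m: float) -> float:
    """Peak acceleration of sinusoidal motion x(t) = X*sin(2*pi*f*t): a = (2*pi*f)^2 * X."""
    return (2 * math.pi * freq_hz) ** 2 * disp_amplitude_m

# Example: 5 Hz shaker motion, assuming an 8 mm displacement amplitude
a_peak = accel_amplitude(5.0, 0.008)   # about 7.9 m/s^2

# Sensitivity = output per unit acceleration (the voltage here is invented for illustration)
v_peak = 0.0396                        # volts read from the accelerometer at the peak
sensitivity = v_peak / a_peak          # V per (m/s^2)
print(f"a_peak = {a_peak:.2f} m/s^2, sensitivity = {sensitivity * 1000:.2f} mV/(m/s^2)")
```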

To measure the performance of accelerometers over much lower frequencies, which require longer throw distances, the NIST facility also contains two other instruments (mounted on a cubic meter of granite) that allow for motion over as much as 50 mm.

Although project researchers sometimes work collaboratively with manufacturers, the principal customers for the service will be calibration laboratories, many of which have contracts that call for NIST-traceable measurements, as well as large facilities such as nuclear plants that require periodic verification of sensors in safety systems.

When planning how to upgrade their calibration capabilities to meet the latest demands, the project scientists decided against building a full custom apparatus. Instead, they chose to obtain the kind of commercially available, high-tech system used at the world’s top metrology institutes.

NIST’s new primary calibration system is custom made to stringent NIST-required specifications. At present, the researchers are finishing a complete suite of tests on the system, and will soon publish the uncertainty budget – an analysis listing all possible sources of uncertainty in different aspects of the measurements, and the relative magnitudes of each. So far, they have determined that the system performance exceeds the manufacturer’s claimed specifications. This finding, subject to further testing, may enable NIST’s calibration service to lower its uncertainties even further in the future.

Although acceleration calibrations have been conducted at NIST and the world’s other national measurement institutes since the 1960s, “a new wave of microsensor technologies is driving new consumer portable technologies and is poised to be a prime application of the internet of things,”** Gaitan says.

“These applications are driving high-volume sensor manufacturing, resulting in a $15 billion world market with a 13% compound annual growth rate. It is projected to grow to $22 billion by 2018. The testing and calibration requirements of these sensors are becoming more complex and demanding as more functions, such as gyroscope and compasses, are integrated together with the accelerometers. NIST is working together with the sensor manufacturing industry to advance measurement technologies and standards to meet these needs.”

Note: Any mention or image of commercial products within NIST web pages is for information only; it does not imply recommendation or endorsement by NIST.

* Inertial guidance uses motion sensors to determine the location of an object by tracking its path from a well-defined starting point.

** The “internet of things” refers to the rapidly expanding number of applications in which embedded computing devices communicate with one another over the Internet.


The accelerometer under test (hexagonal object) is attached to the 'shaker table' disk at the center of the apparatus. The red dot on the table surface is produced by the laser beam from the vibrometer above (not shown).

Source: www.nist.gov

My favorite songs /6/ She

[Snapshot: 2015-02-28 at 15.56.52]

India has a rare opportunity to become the world’s most dynamic big economy

Feb 21st 2015

Emerging markets used to be a beacon of hope in the world economy, but now they are more often a source of gloom. China’s economy is slowing. Brazil is mired in stagflation. Russia is in recession, battered by Western sanctions and the slump in the oil price; South Africa is plagued by inefficiency and corruption. Amid the disappointment one big emerging market stands out: India.

If India could only take wing it would become the global economy’s high-flyer—but to do so it must shed the legacy of counter-productive policy. That task falls to Arun Jaitley, the finance minister, who on February 28th will present the first full budget of a government elected with a mandate to slash red tape and boost growth. In July 1991 a landmark budget opened the economy to trade, foreign capital and competition. India today needs something equally momentous.

Strap on the engines

India possesses untold promise. Its people are entrepreneurial and roughly half of the 1.25 billion population is under 25 years old. It is poor, so has lots of scope for catch-up growth: GDP per person (at purchasing-power parity) was $5,500 in 2013, compared with $11,900 in China and $15,000 in Brazil. The economy has been balkanised by local taxes levied at state borders, but cross-party support for a national goods-and-services tax could create a true common market. The potential is there; the question has always been whether it can be unleashed.

Optimists point out that GDP grew by 7.5% year on year in the fourth quarter of 2014, outpacing even China. But a single number that plenty think fishy is the least of the reasons to get excited. Far more important is that the economy seems to be on an increasingly stable footing (see article). Inflation has fallen by half after floating above 10% for years. The current-account deficit has shrunk; the rupee is firm; the stockmarket has boomed; and the slump in commodity prices is a blessing for a country that imports four-fifths of its oil. When the IMF cut its forecasts for the world economy, it largely spared India.

The real reason for hope is the prospect of more reforms. Last May Narendra Modi’s Bharatiya Janata Party won a huge election victory on a promise of a better-run economy. His government spent its early months putting a rocket up a sluggish civil service and on other useful groundwork. But the true test of its reformist credentials will be Mr Jaitley’s budget.

The easy part will be to lock in India’s good fortune, with fiscal and monetary discipline. In addition India’s public-sector banks need capital and, since the state cannot put up the money, the minister must persuade potential shareholders that they will be run at arm’s length from politicians.

If India is to thrive, it needs bold reforms and political courage to match. The tried-and-tested development strategy is to move people from penurious farm jobs to more productive work with better pay. China’s rise was built on export-led manufacturing. The scope to follow that model is limited. Supply-chain trade growth has slowed, and manufacturing is becoming less labour-intensive as a result of technology. Yet India could manage better than it does now. It has a world-class IT-services industry, which remains too skill-intensive and too small to absorb the 90m-115m often ill-educated youngsters entering the job market in the next decade. The country’s best hope is a mixed approach, expanding its participation in global markets in both industry and services. To achieve this Mr Jaitley must focus on three inputs: land, power and labour.

Jumbo on the runway

All are politically sensitive and none more so than land purchases. In China the state would just requisition the land, and let farmers go hang. But India has veered too far the other way. A long-standing plan to build a second international airport in Mumbai is on ice. An act passed in the dying months of the previous government made things worse by calling for rich compensation to landowners, a social-impact study for biggish projects and the approval of at least 70% of landholders before a purchase can go ahead. Mr Modi has used his executive powers to do away with the consent clause for vital investments. It is a temporary fix; Mr Modi needs to make it permanent and to win that political battle he needs to show that prime locations do not go to cronies, but to projects that create jobs.

Power, or rather the shortage of it, also stops India soaring. According to one survey half of all manufacturers suffered power cuts lasting five hours each week. Inefficiency is rampant throughout the power network, stretching from Coal India, a state monopoly, to electricity distributors. The first auctions of coal-mining licences to power, steel and cement companies, which began this week, are a step forward. More effort will be needed to open distribution to competition. Regulators are cowed by politicians into capping electricity prices below the cost of supply—though people will pay up and leave the politicians alone if they know that the supply is reliable.

The third big area ripe for reform is India’s baffling array of state and national labour laws. Compliance is a nightmare. Many of the laws date to the 1940s: one provides for the type and number of spittoons in a factory. Another says an enterprise with more than 100 workers needs government permission to scale back or close. Many Indian businesses stay small in order to remain beyond the reach of the laws. Big firms use temporary workers to avoid them. Less than 15% of Indian workers have legal job security. Mr Jaitley can sidestep the difficult politics of curbing privileges by establishing a new, simpler labour contract that gives basic protection to workers but makes lay-offs less costly to firms. It would apply only to new hires; the small proportion of existing workers with gold-star protections would keep them.

Adversity has in the past been the spur to radical change in India. The 1991 budget was in response to a balance-of-payments crisis. The danger is that, with inflation falling and India enjoying a boost from cheaper energy, the country’s leaders duck the tough reforms needed for lasting success. That would be a huge mistake. Mr Modi and Mr Jaitley have a rare chance to turbocharge an Indian take-off. They must not waste it.

Source: www.economist.com

Warming up the world of superconductors

Thu, 02/26/2015 - 8:50am

Robert Perkins, Univ. of Southern California

A superconductor that works at room temperature was long thought impossible, but scientists at the Univ. of Southern California (USC) may have discovered a family of materials that could make it reality.

A team led by Vitaly Kresin, professor of physics at USC, found that aluminum "superatoms"—homogenous clusters of atoms—appear to form Cooper pairs of electrons (one of the key elements of superconductivity) at temperatures around 100 K.

Though 100 K is still pretty chilly—that's about -280 F—this is an enormous increase compared to bulk aluminum metal, which turns superconductive only near 1 K (-457 F).

"This may be the discovery of a new family of superconductors, and raises the possibility that other types of superatoms will be capable of superconductivity at even warmer temperatures," said Kresin, corresponding author of a paper on the finding that was published by Nano Letters. USC graduate student Avik Halder and former USC postdoctoral researcher Anthony Liang are co-authors.

The future of electronics and energy transmission

Superconductivity is the ability to transmit electricity without any resistance, meaning that no energy is lost in the transmission.

The reason your laptop heats up when you leave it on for a long time is that electricity meets resistance as it courses through the machine's circuits, generating heat—wasted energy.

Beyond the specific applications that superconductors are already used for—MRI machines, powerful electromagnets that levitate maglev trains, particle accelerators and ultrasensitive magnetic field sensors, to name a few—a room-temperature superconductor would allow engineers to make all electronic devices ultraefficient.

Cooper pairs: Electron dance partners

First predicted in 1956 by Leon Cooper, Cooper pairs are two electrons that attract one another in some materials under certain conditions, such as extreme low temperatures.

"Imagine you have a ballroom full of paired-up dancers, only the partners are scattered randomly throughout the room. Your partner might be over by the punch bowl, while you're in the center of the dance floor. But your motions are done in tandem—you are in step with one another," Kresin said. "Now imagine everyone changes dance partners every few moments. This is a commonly used analogy for how Cooper pairing works."

When electrons flow through a material, they bump into various imperfections that knock them off course. That's the resistance that causes energy loss in the form of heat.

If the electrons are mated up into Cooper pairs, however, that connection is just strong enough to keep them on course regardless of what they bump into. Cooper pairs are what make superconductivity work.

Superconductivity in superatoms

Superatoms actually behave in some ways like a giant atom. Electrons flow inside them in a predictable shell structure, as if in a single atom's electron cloud.

Electron shells are the result of a quantum effect—a physical property described by the special laws of quantum mechanics. The shells are the orbits of increasing size at which electrons can be found around an atom. They occur in a predictable fashion: Two electrons zip around the nucleus in the closest orbit, eight in the next highest orbit, 18 in the third and so on.
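The "and so on" follows the familiar atomic shell-capacity rule, under which the n-th shell holds at most 2n² electrons:

```latex
N_{\max}(n) = 2n^{2} \quad\Rightarrow\quad 2,\ 8,\ 18,\ 32,\ \dots \qquad (n = 1, 2, 3, 4, \dots)
```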

The fact that superatoms are not just solid particles but also possess a giant set of electron shells made scientists suspect that they might also exhibit another quantum effect: Cooper pairing.

To test that hypothesis, Kresin and his team painstakingly built aluminum superatoms of specific sizes (from 32 to 95 atoms large) and then zapped them with a laser at various temperatures. They recorded how many electrons they were able to knock off of the superatom as they dialed up the energy level of the laser.

The subsequent plot on a graph should have been a simple upward curve—as the energy of the laser increases, more electrons should be knocked off in a smoothly proportional manner.

For superatoms containing 37, 44, 66 and 68 aluminum atoms, the graph instead showed odd bulges indicating that at certain energy levels, the electrons were resisting the laser's effort to knock them away from the group—possibly because Cooper pairing was helping the electrons to cling to each other.

The bulge appears as temperature decreases—with the threshold for its appearance occurring somewhere around 100 K, giving evidence that the electrons were forming Cooper pairs.

The future of superconductors

Superatoms that form Cooper pairs represent an entirely new frontier in the field of superconductivity. Scientists can explore the superconductivity of various sizes of superatoms and various elements to make them.

"One-hundred Kelvin might not be the upper-temperature barrier," Kresin said. "It might just be the beginning."

Kresin envisions a future in which electronic circuits could be built by placing superatoms in a chain along a substrate material, allowing electricity to flow unhindered along the chain.

Source: Univ. of Southern California

Kids Imagination Furniture lets children build their own cardboard desk and chair

 


Once completed, Kids Imagination Furniture can be used as a normal desk and chair


Getting children involved in practical projects at a young age can aid their development and foster their creative streak ... which is what Kids Imagination Furniture from The Cardboard Guys hopes to achieve. The set comprises a cardboard desk and chair which children as young as five can build for themselves.

The Cardboard Guys are three graduates of Cal Poly, San Luis Obispo. Their range of cardboard furniture was born from an ambition to reduce the amount of waste created on an annual basis. Corrugated cardboard was chosen as the perfect material, thanks to both its strength and ability to be recycled. The bonus with Kids Imagination Furniture is that cardboard can also easily be decorated, allowing kids to let their creative juices run free and parents to relax about the furniture getting damaged.

Kids Imagination Furniture comprises a desk that's 27 inches (69 cm) across, 23 inches (58 cm) high, and 19 inches (48 cm) deep, and a chair that's 23 inches (58 cm) high, 12 inches (30 cm) deep, and has a seat 13 inches (33 cm) high and 10.5 inches (27 cm) deep. The cover of the desk is reversible, allowing for at least two different designs, and it also features plenty of cubbies for storing art supplies and the like.


The furniture will be delivered flat-packed in a few simple pieces that snap and lock together using The Cardboard Guys' proprietary system. The structure of the furniture means it can hold up to 500 pounds (227 kg) in weight. It's all manufactured from between 40 and 60 percent recycled fiber, and the furniture can itself be recycled when the kids grow out of it.

The Cardboard Guys are funding Kids Imagination Furniture through a Kickstarter campaign, seeking to raise US$25,000. At the time of writing a pledge of $75 or more will get you one desk and one chair, providing the creators deliver on their promises. Shipping is set to begin in June 2015. The video below shows the creators talking a little about the furniture, as well as a demonstration of the manufacturing process.

Sources: The Cardboard Guys, Kickstarter

 

Scientist claims that human head transplants could be a reality by 2017

 


A woman's severed head awaits a new body, in the 1962 film The Brain That Wouldn't Die

Transplanting a human head onto a donor body may sound like the stuff of science fiction comics, but not to Italian doctor Sergio Canavero. He has not only published a paper describing the operation in detail, but also believes that the surgery could be a reality as early as 2017.

Canavero, Director of the Turin Advanced Neuromodulation Group, initially highlighted the idea in 2013, stating his belief that the technology to successfully join two severed spinal cords existed. Since then he's worked out the details, describing the operation, which he calls the Gemini spinal cord fusion protocol (GEMINI GCF), in his recent paper.

To carry out the transplant, a state of hypothermia is first induced in both the head to be transplanted and the donor body, to help the cells stay alive without oxygen. Surgeons would then cut into the neck tissue of both bodies and connect the blood vessels with tubes. The next step is to cut the spinal cords as neatly as possible with minimal trauma.

The severed head would then be placed on the donor body and the two spinal cords encouraged to fuse together with a sealant called polyethylene glycol, which, Canavero notes in his paper, has "the power to literally fuse together severed axons or seal injured leaky neurons."

After suturing the blood vessels and the skin, the patient is kept in a comatose state for three to four weeks to discourage movement and give both spinal stumps time to fuse. The fusion point will also be electrically stimulated to encourage neural connections and accelerate the growth of a functional neural bridge. The patient will additionally be put on a regime of anti-rejection medications.

According to Canavero, with rehabilitation the patient should be able to speak in their own voice and walk within a year's time. The goal is to help people who are paralyzed, or whose bodies are otherwise riddled with degenerative diseases and other complications. While the procedure sounds extremely complex and disturbing on multiple levels, Canavero tells us he's already conducting interviews with volunteers who've stepped forward.

"Many are dystrophic," Canavero tells Gizmag. "These people are in horrible pain."

The most well-known example of a head transplant was when Dr. Robert White, a neurosurgeon, transplanted the head of one rhesus monkey onto another in 1970. The spinal cords, however, were not connected to each other, leaving the monkey unable to control its body. It subsequently died after the donor body rejected the head.

Current technology and recent advances hold out more promise. Canavero plans to garner support for the project when he presents it at the American Academy of Neurological and Orthopaedic Surgeons conference in Annapolis, Maryland, later this year. Understandably, his proposal has generated incredible controversy, with experts questioning the specifics and ethics of the procedure, even going as far as calling it bad science.

Canavero isn't fazed at all, though.

"These experts failed for 35 years to cure paralysis," Canavero tells us. "Pretty sure the bad science is theirs."

The study describing GEMINI GCF was published in the journal Surgical Neurology International, linked below.

Source: Surgical Neurology International via The Telegraph

 

 

High-performance flow battery could rival lithium-ions for EVs and grid storage

 


PNNL's high performance zinc-polyiodide flow battery approaches the performance of some lithium-ion batteries (Image: PNNL)

A new redox flow battery designed at the Pacific Northwest National Laboratory (PNNL) more than doubles the amount of energy that this type of cell can pack in a given volume, approaching the numbers of lithium-ion batteries. If the device reaches mass production, it could find use in fast-charging transportation, portable electronics and grid storage.

A flow battery is formed by two liquid electrolytes, one for each electrode, which turn chemical energy into electricity by exchanging ions through a membrane. The electrolytes are stored in two external tanks, which makes the system easy to scale up, potentially very quick to charge (the electrolytes can simply be replaced) and resistant to extreme temperatures. These perks have already inspired some radical concept car designs, but if those dreams are going to come to fruition, flow batteries will need to get over one big hump: currently, the best flow cell out there packs less than a third of the energy per unit volume of a lithium-ion battery.

Because of this, flow cells are mainly used where space is not at a premium, such as to store large amounts of energy from renewable sources in open spaces. Still, even in this arena, a more energy-dense flow cell could turn out to be very useful, improving the reliability of the electric grid in a tight urban setting, and perhaps even challenging the upcoming lithium-ion home batteries announced by Tesla.

PNNL researchers led by Wei Wang have now developed a prototype, high-performance zinc-polyiodide flow battery with a high energy density of 167 Wh/l (watt-hours per liter), a number that could almost double to 322 Wh/l with further optimizations.

This is a significant step up from the state-of-the-art 70 Wh/l zinc-bromide flow batteries and means that, 40 years after being invented, the performance of these cells could finally catch up with that of the now ubiquitous lithium-ion batteries. For comparison, lithium iron phosphate batteries, a type of lithium-ion battery used in portable electronics and some small EVs, put out about 233 Wh per liter.
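To put those energy densities in perspective, here is a back-of-the-envelope comparison (illustrative arithmetic only; real systems add overhead for tanks, pumps and packaging) of the cell volume needed to store 10 kWh at each of the figures quoted above:

```python
# Rough volume needed to store 10 kWh at the energy densities quoted in the article.
energy_wh = 10_000  # 10 kWh

densities_wh_per_l = {
    "zinc-bromide flow (state of the art)": 70,
    "PNNL zinc-polyiodide (demonstrated)": 167,
    "lithium iron phosphate Li-ion": 233,
    "PNNL zinc-polyiodide (theoretical)": 322,
}

for name, density in densities_wh_per_l.items():
    print(f"{name:40s} ~{energy_wh / density:5.0f} liters")
```

At the demonstrated 167 Wh/l, 10 kWh fits in roughly 60 liters, versus about 143 liters for a typical zinc-bromide flow cell.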

Aside from being able to store more energy in a smaller space, the zinc-polyiodide battery is also reportedly safer than other flow batteries: it contains no acidic electrolytes, it's non-flammable, and it can operate over a range of -4 to 122 °F (-20 to 50 °C), meaning it doesn't require the extensive cooling circuitry that takes up extra weight and room (as in Tesla's Model S battery pack).

Before the battery can see mass production, the researchers will need to address the issue of a zinc build-up that grows from the negative electrode and permeates the membrane, reducing the battery's efficiency. Wei and team contained the problem by adding alcohol to the electrolyte, and are now experimenting with scaling up the battery size in order to conduct further testing.

The advance is described in an open-access Nature Communications paper.

Source: PNNL

 

Friday, February 27, 2015

Drug research and development more efficient than expected

February 27, 2015

University of Basel

Despite ever-increasing regulation in drug approval and the rising costs of research, drug research and development remains unexpectedly efficient, a new study shows.


Drug R&D costs have increased substantially in recent decades, while the number of new drugs has remained fairly constant, leading to concerns about the sustainability of drug R&D and questions about the factors that could be responsible.

To investigate the efficiency in the development of new drugs, the researchers analyzed a data set consisting of new drugs approved by the FDA. They looked at efficiency indicators that could potentially positively influence the approval of new drugs.

Lower costs, faster approval

The study, led by Prof. Thomas D. Szucs, analyzed 257 new drugs that were approved by the FDA from 2003 to 2013. To assess the so-called innovation efficiency, the researchers analyzed specific parameters and factors. The study shows that although there remains some potential for efficiency enhancement, several parameters have developed positively in the past decade.

The researchers also discovered that new drugs are approved earlier and with less use of resources when they enter the approval process assigned to special categories or programs. Affiliation with these categories alone reduces, for example, the probability of having to conduct expensive pivotal trials (clinical trials designed to test the effectiveness of a drug against a placebo control group).

In conclusion, the results show that market access for new drugs is not inefficient. What is important is that "both industry and authorities work together in further developing drug approval in order to provide innovation to patients in a timely manner," says Szucs.


Story Source:

The above story is based on materials provided by University of Basel. Note: Materials may be edited for content and length.


Journal Reference:

  1. Rossella Belleli, Roland Fisch, Thomas D. Szucs. Regulatory watch: Efficiency indicators for new drugs approved by the FDA from 2003 to 2013. Nature Reviews Drug Discovery, 2015; 14 (3): 156 DOI: 10.1038/nrd4563