Thursday, June 11, 2015

Cutting carbon emissions could have indirect effects on hunger


Thu, 06/11/2015 - 9:05am

American Chemical Society

As many of the world’s nations prepare and implement plans to cut greenhouse gas emissions, researchers say another critical factor needs to be considered. A new study has found for the first time that efforts to keep global temperatures in check will likely lead to more people going hungry. That risk, they say in Environmental Science & Technology, doesn’t negate the need for mitigation but highlights the importance of comprehensive policies.

Previous studies have shown that climate change reduces how much food farms can produce, which could lead to more people suffering from hunger. Curbing the greenhouse gas emissions that lead to climate change can help maintain the yields of existing crops. But there might be indirect ways in which cutting emissions could actually put more people at risk of going hungry. For example, some grasses and other vegetation used for biofuels require agricultural land that might otherwise be used for food production. So, increased biofuel consumption could negatively affect the food supply. Also, the high cost of low-emissions technologies such as carbon capture and storage will be borne by consumers, who will then have less money to spend on food. Tomoko Hasegawa and colleagues wanted to get a better idea of how these pieces fit together.

The researchers used multiple models to determine the effects of strict emissions cuts and found that many more people would be at risk of hunger than if those cuts weren’t in place. The team concludes that governments will have to take measures, such as increasing food aid, as they address climate change.

Source: American Chemical Society

Longstanding problem put to rest

Thu, 06/11/2015 - 9:51am

Larry Hardesty, MIT News Office

Image: iStock (edited by MIT News)

Comparing the genomes of different species—or different members of the same species—is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common—the “edit distance” between them—is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.

At the ACM Symposium on Theory of Computing (STOC), Massachusetts Institute of Technology (MIT) researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes—or texts, or speech samples, or anything else that can be represented as a string of symbols—can’t be solved more efficiently.

In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.

“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that—without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”

Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.

Squaring off
Edit distance is the minimum number of edits—deletions, insertions and substitutions—required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
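As an illustrative sketch (not the researchers' code), the grid-filling procedure described above can be written out in a few lines of Python:

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer: fill an (len(a)+1) x (len(b)+1) grid where
    cell (i, j) holds the edit distance between a[:i] and b[:j]."""
    m, n = len(a), len(b)
    grid = [[0] * (n + 1) for _ in range(m + 1)]
    # First row and column: turning a prefix into the empty string
    # takes as many deletions as the prefix is long.
    for i in range(m + 1):
        grid[i][0] = i
    for j in range(n + 1):
        grid[0][j] = j
    # Flood diagonally across the grid, one square at a time.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1  # substitution needed?
            grid[i][j] = min(grid[i - 1][j] + 1,      # deletion
                             grid[i][j - 1] + 1,      # insertion
                             grid[i - 1][j - 1] + cost)
    return grid[m][n]

print(edit_distance("kitten", "sitting"))  # classic textbook example: 3
```

Every square of the grid must be filled in, which is exactly why the running time scales with the product of the two string lengths.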

Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.

That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.
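The contrast is easy to verify with a toy step-count model (counting abstract steps rather than real seconds; the functions here are illustrative, not taken from the paper):

```python
quad = lambda n: n ** 2   # step count for a quadratic-time algorithm
expo = lambda n: 2 ** n   # step count for an exponential-time algorithm

# Doubling the input quadruples the quadratic step count...
print(quad(200) // quad(100))        # 4
# ...but squares the exponential one, since 2^200 = (2^100)^2.
print(expo(200) == expo(100) ** 2)   # True
# Even adding a single element doubles the exponential work:
print(expo(101) // expo(100))        # 2
```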

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.

Can’t get no satisfaction
The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
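Encoded as Boolean variables, the dinner-party instance can be attacked by brute force, trying all 2^n invitation lists; this is the exponential-time approach the text describes. The guest names and exact clauses below are one hypothetical rendering of the example:

```python
from itertools import product

# Hypothetical encoding of the constraints described above.
guests = ["alice", "bob", "cindy", "dave", "book_club", "fred", "george"]

def satisfies(inv):
    return (not (inv["alice"] and inv["bob"])                            # one parent stays home
            and ((not (inv["cindy"] and inv["dave"])) or inv["book_club"])  # book-club rule
            and inv["fred"] != inv["george"])                            # Fred or George, not both

# Exponential-time brute force: check every possible invitation list.
solutions = []
for bits in product([False, True], repeat=len(guests)):
    inv = dict(zip(guests, bits))
    if satisfies(inv):
        solutions.append(inv)

print(len(solutions) > 0)  # True: this instance is satisfiable
```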

In Indyk and Bačkurs’ proof, they propose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob and Cindy go into one, while Walt, Yvonne and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: It’s a constraint that doesn’t have to be met.

At this point, the problem of reconciling the solutions for the two subgroups—factoring in constraints like Alice’s restraining order—becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.
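The split described above can be sketched in toy form (illustrative only; the names and clauses are invented, and the actual reduction in the paper is far more intricate):

```python
from itertools import product

left_vars = ["alice", "bob", "cindy"]
right_vars = ["walt", "yvonne", "zack"]

left_ok = lambda a: not (a["alice"] and a["bob"])       # constraint within the left group
right_ok = lambda a: a["walt"] or a["yvonne"]           # constraint within the right group
cross_ok = lambda l, r: not (l["alice"] and r["zack"])  # Alice's restraining order on Zack

def solve_half(names, ok):
    """Enumerate the assignments of one half that meet its internal constraints."""
    return [dict(zip(names, bits))
            for bits in product([False, True], repeat=len(names))
            if ok(dict(zip(names, bits)))]

# Solve each half independently -- much cheaper than the whole problem at once.
lefts = solve_half(left_vars, left_ok)
rights = solve_half(right_vars, right_ok)

# Reconciliation: pair up partial solutions that also satisfy the cross-group
# constraints. It is this pairing step that the proof recasts as a version of
# the edit-distance problem.
combined = [(l, r) for l in lefts for r in rights if cross_ok(l, r)]
print(len(combined) > 0)  # True: a full satisfying assignment exists
```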

Source: Massachusetts Institute of Technology

Synthetic immune organ produces antibodies

Thu, 06/11/2015 - 9:32am

Anne Ju, Cornell University

When exposed to a foreign agent, such as an immunogenic protein, B cells in lymphoid organs undergo germinal center reactions. The image on the left is an immunized mouse spleen with activated B cells (brown) that produce antibodies. At right, top: a scanning electron micrograph of porous synthetic immune organs that enable rapid proliferation and activation of B cells into antibody-producing cells. At right, bottom: primary B cell viability and distribution is visible 24 hrs following encapsulation procedure. Images: Singh lab

Cornell Univ. engineers have created a functional, synthetic immune organ that produces antibodies and can be controlled in the lab, completely separate from a living organism. The engineered organ has implications for everything from rapid production of immune therapies to new frontiers in cancer or infectious disease research.

The immune organoid was created in the lab of Ankur Singh, assistant professor of mechanical and aerospace engineering, who applies engineering principles to the study and manipulation of the human immune system. The work was published online in Biomaterials and will appear later in print.

The synthetic organ is bio-inspired by secondary immune organs like the lymph node or spleen. It is made from gelatin-based biomaterials reinforced with nanoparticles and seeded with cells, and it mimics the anatomical microenvironment of lymphoid tissue. Like a real organ, the organoid converts B cells—which make antibodies that respond to infectious invaders—into germinal centers, which are clusters of B cells that activate, mature and mutate their antibody genes when the body is under attack. Germinal centers are a sign of infection and are not present in healthy immune organs.

The engineers have demonstrated how they can control this immune response in the organ and tune how quickly the B cells proliferate, get activated and change their antibody types. According to their paper, their 3-D organ outperforms existing 2-D cultures and can produce activated B cells up to 100 times faster.

The immune organ, made of a hydrogel, is a soft, nanocomposite biomaterial. The engineers reinforced the material with silicate nanoparticles to keep the structure from melting at the physiologically relevant temperature of 98.6 degrees Fahrenheit (37 degrees Celsius).

The organ could lead to increased understanding of B cell functions, an area of study that typically relies on animal models to observe how the cells develop and mature.

What’s more, Singh said, the organ could be used to study specific infections and how the body produces antibodies to fight those infections—from Ebola to HIV.

“You can use our system to force the production of immunotherapeutics at much faster rates,” he said. Such a system also could be used to test toxic chemicals and environmental factors that contribute to infections or organ malfunctions.

The process of B cells becoming germinal centers is not well understood, and in fact, when the body makes mistakes in the genetic rearrangement related to this process, blood cancer can result.

“In the long run, we anticipate that the ability to drive immune reaction ex vivo at controllable rates grants us the ability to reproduce immunological events with tunable parameters for better mechanistic understanding of B cell development and generation of B cell tumors, as well as screening and translation of new classes of drugs,” Singh said.

Source: Cornell University

The 17 Ingeniously Designed Products You Need In Your Life

These products are awesome simply because they were designed that way. All of them are really amazing, and they will help you get through your day.

http://whattodowhenyour-bored.com

The Knot You Should Not Be Without

Unlike other bulky supplemental power banks, BOLD Knot is a stylish, pocket-sized charger and a power bank for any smartphone. It uses a fashionable, knot-shaped design that’s unrecognizable as a peripheral charger. Tiny battery pack… big power! Get up to 3 hours of extra talk time, 2.5 hours of internet, or 8 hours of music.

Designer: BOLD Gadgets

source: www.yankodesign.com

At near absolute zero, molecules may start to exhibit exotic states of matter

MIT researchers have successfully cooled a gas of sodium potassium (NaK) molecules to a temperature of 500 nanokelvin. In this artist's illustration, the NaK molecule is represented with frozen spheres of ice merged together: the smaller sphere on the left represents a sodium atom, and the larger sphere on the right is a potassium atom.

Credit: Jose-Luis Olivares/MIT

The air around us is a chaotic superhighway of molecules whizzing through space and constantly colliding with each other at speeds of hundreds of miles per hour. Such erratic molecular behavior is normal at ambient temperatures.

But scientists have long suspected that if temperatures were to plunge to near absolute zero, molecules would come to a screeching halt, ceasing their individual chaotic motion and behaving as one collective body. This more orderly molecular behavior would begin to form very strange, exotic states of matter -- states that have never been observed in the physical world.

Now experimental physicists at MIT have successfully cooled molecules in a gas of sodium potassium (NaK) to a temperature of 500 nanokelvins -- just a hair above absolute zero, and over a million times colder than interstellar space. The researchers found that the ultracold molecules were relatively long-lived and stable, resisting reactive collisions with other molecules. The molecules also exhibited very strong dipole moments -- strong imbalances in electric charge within molecules that mediate magnet-like forces between molecules over large distances.

Martin Zwierlein, professor of physics at MIT and a principal investigator in MIT's Research Laboratory of Electronics, says that while molecules are normally full of energy, vibrating and rotating and moving through space at a frenetic pace, the group's ultracold molecules have been effectively stilled -- cooled to average speeds of centimeters per second and prepared in their absolute lowest vibrational and rotational states.

"We are very close to the temperature at which quantum mechanics plays a big role in the motion of molecules," Zwierlein says. "So these molecules would no longer run around like billiard balls, but move as quantum mechanical matter waves. And with ultracold molecules, you can get a huge variety of different states of matter, like superfluid crystals, which are crystalline, yet feel no friction, which is totally bizarre. This has not been observed so far, but predicted. We might not be far from seeing these effects, so we're all excited."

Zwierlein, along with graduate student Jee Woo Park and postdoc Sebastian Will -- all of whom are members of the MIT-Harvard Center of Ultracold Atoms -- have published their results in the journal Physical Review Letters.

Sucking away 7,500 kelvins

Every molecule is composed of individual atoms that are bonded together to form a molecular structure. The simplest molecule, resembling a dumbbell, is made up of two atoms connected by electromagnetic forces. Zwierlein's group sought to create ultracold molecules of sodium potassium, each consisting of a single sodium and potassium atom.

However, due to their many degrees of freedom -- translation, vibration, and rotation -- cooling molecules directly is very difficult. Atoms, with their much simpler structure, are much easier to chill. As a first step, the MIT team used lasers and evaporative cooling to cool clouds of individual sodium and potassium atoms to near absolute zero. They then essentially glued the atoms together to form ultracold molecules, applying a magnetic field to prompt the atoms to bond -- a mechanism known as a "Feshbach resonance," named after the late MIT physicist Herman Feshbach.

"It's like tuning your radio to be in resonance with some station," Zwierlein says. "These atoms start to vibrate happily together, and form a bound molecule."

The resulting bond is relatively weak, creating what Zwierlein calls a "fluffy" molecule that still vibrates quite a bit, as each atom is bonded over a long, tenuous connection. To bring the atoms closer together to create a stronger, more stable molecule, the team employed a technique first reported in 2008 by groups from the University of Colorado, for potassium rubidium (KRb) molecules, and the University of Innsbruck, for non-polar cesium (Cs2) molecules.

For this technique, the newly created NaK molecules were exposed to a pair of lasers, the large frequency difference of which exactly matched the energy difference between the molecule's initial, highly vibrating state, and its lowest possible vibrational state. Through absorption of the low-energy laser, and emission into the high-energy laser beam, the molecules lost all their available vibrational energy.

With this method, the MIT group was able to bring the molecules down to their lowest vibrational and rotational states -- a huge drop in energy.

"In terms of temperature, we sucked away 7,500 kelvins, just like that," Zwierlein says.

Chemically stable

In their earlier work, the Colorado group observed a significant drawback of their ultracold potassium rubidium molecules: They were chemically reactive, and essentially came apart when they collided with other molecules. That group subsequently confined the molecules in crystals of light to inhibit such chemical reactions.

Zwierlein's group chose to create ultracold molecules of sodium potassium, as this molecule is chemically stable and naturally resilient against reactive molecular collisions.

"When two potassium rubidium molecules collide, it is more energetically favorable for the two potassium atoms and the two rubidium atoms to pair up," Zwierlein says. "It turns out with our molecule, sodium potassium, this reaction is not favored energetically. It just doesn't happen."

In their experiments, Park, Will, and Zwierlein observed that their molecular gas was indeed stable, with a relatively long lifetime, lasting about 2.5 seconds.

"In the case where molecules are chemically reactive, one simply doesn't have time to study them in bulk samples: They decay away before they can be cooled further to observe interesting states," Zwierlein says. "In our case, we hope our lifetime is long enough to see these novel states of matter."

By first cooling atoms to ultralow temperatures and only then forming molecules, the group succeeded in creating an ultracold gas of molecules, measuring one thousand times colder than what can be achieved by direct cooling techniques.

To begin to see exotic states of matter, Zwierlein says molecules will have to be cooled still a bit further, to all but freeze them in place. "Now we're at 500 nanokelvins, which is already fantastic, we love it. A factor of 10 colder or so, and the music starts playing."

This research was supported in part by the National Science Foundation, the Air Force Office of Scientific Research, the Army Research Office, and the David and Lucile Packard Foundation.

Microsoft Surface Hub launches in September

Businesses will be able to get their hands on the Surface Hub later this year

Microsoft has released pricing and availability information for its enterprise focused Surface Hub, announced back in January of this year. The hardware is pitched as an all-in-one solution for collaboration in meetings, consisting of a large display, active stylus, and a whole host of sensors and connectivity options.

Running on Microsoft's upcoming Windows 10 operating system, the Surface Hub is essentially a huge, wall-mounted all-in-one PC. Users will interact with the large touch panel via OneNote whiteboard, making use of low-latency, active stylus tech that's designed to make it feel like you're putting pen to paper. Certain apps like Skype for Business are built in, but it will also run universal Windows apps.

It looks to be an attractive and versatile piece of business hardware, with some useful flairs thrown in. For example, it'll jump right into the whiteboard app when the pen is picked up, and Skype meetings can be initiated with just a single tap. The hardware is pretty full-featured, with stereo speakers up front, a mic, a pair of wide-angle 1080p cameras, along with infrared and ambient light sensors.

There are two sizes of the Surface Hub on offer: a 55-inch, 1,920 x 1,080 variant, and an 84-inch, 3,840 x 2,160 option. The smaller panel runs on a 4th generation Intel Core i5 chip with integrated HD 4600 graphics, while its big brother runs on a Core i7 with an Nvidia Quadro K2200 GPU.

So the big news today? We actually know when these monsters will start making their way into meeting rooms. Microsoft will start taking Surface Hub orders on July 1, with units expected to ship to 24 markets (including the US) in September. There's a big difference in pricing between the two models, with the 55-inch Surface Hub coming in at US$6,999, and the 84-inch version hitting a much higher $19,999 price point.

Source: Microsoft

Retrofittable electric engine adds power and safety to light aircraft

Additional electric engine adds safety and power to light aircraft (Credit: Universidad Carlos III de Madrid)

Small, single-engine aircraft are the mainstay of recreational flying, and provide many hours of generally safe enjoyment for hundreds of thousands of enthusiasts worldwide. However, with only one engine on-board, they are also often only a small malfunction away from becoming a heavy, unpowered glider in dire need of somewhere to land. To help improve this situation, researchers at Universidad Carlos III de Madrid (UC3M) and AXTER Aerospace have created an auxiliary electric propulsion unit designed to be installed in conventionally-powered light aircraft to both increase available power and provide extra range in the event of an engine failure.

Primarily aimed at improving the safety of light passenger aircraft with masses of up to 750 kg (1,650 lb), the retrofittable electric propulsion system has been created in direct response to a perceived need in the light aircraft space.

"We are trying to saves lives and prevent accidents related to loss of power during flights, when the engine fails or the fuel runs out," says Miguel Ángel Suárez, from AXTER Aerospace. "We mustn’t forget that every year in Europe and USA there are an average of 600 accidents, 70 deaths and 24 million euros (US$27 million) in losses recorded."

The new arrangement sees an electric motor coupled to the conventional engine via the existing drive system. There's also a high-efficiency lithium battery charged by the plane’s conventional engine, and an electronic control system that automatically adjusts the electric motor's output to the needs of the plane.

"If there is a problem with the main engine, this electric engine will start to function, which will provide an additional range of about 20 kilometers, enough for the pilot to land safely," said Andrés Barrado, head of the UC3M Electric Power Systems group.

An extra 20 km (12 miles) may not seem a lot, but given that most light aircraft fly in a pattern not too far from their originating airfield, it could make the difference between returning to the safety of the airport or crashing in a field.

Serendipitously, the emergency propulsion system can also add around 40 extra horsepower (30 kW), as needed and when selected by the pilot. Not quite in the realm of a super-powered electric unit like the Siemens 260 kW (340 hp) monster, perhaps, but a handy addition to the lowly-powered engines of many light planes nonetheless.

"We maximize the capacity of the battery in generating movement with the electric engine, and we have found that we can also use the system as a hybrid for light aircraft: the pilot can activate it when she wants, adding up to 40 horsepower for take-offs or whatever is needed," said Daniel Cristobal, from AXTER Aerospace.

The system is currently being promoted and patented around the world, and its creators claim that it can be installed in all manner of light aircraft, either as a retrofit or in the construction of new aircraft. It is claimed to reduce operating and maintenance bills while lowering fuel consumption, and the makers assert that it may one day also be available for other types of craft, including gyroplanes, drones and UAVs.

Source: Universidad Carlos III de Madrid, AXTER Aerospace

Smart windows can be tuned for privacy, while still letting the light shine through

Imagine if you could turn up the opacity for a bit of privacy, just as you would with regular blinds, without compromising on brightness. (Credit: Timothy Zarki/University of Cincinnati)

The glass panels that let light into our homes and offices have been seen as huge windows of opportunity for engineers in recent times. If the amount of light pouring through can be managed throughout the day, it could lessen reliance on energy-sapping air conditioner units, for instance. This has led to a number of examples of smart facades that keep interior spaces from overheating, and some that even harvest energy for lights and ventilation. But a new tunable window-tinting technology is claimed to do things that earlier smart glass cannot, by allowing users control over brightness, color temperature and opacity.

Scientists from the University of Cincinnati (UC) teamed up with researchers from Hewlett Packard, Merck and National Taiwan University in developing the tunable windows. The team set out to produce an adaptation of the technology found in e-paper electronic displays that could scale to cover entire window panes without being prohibitively expensive.

Around three years in the making, the final, patent-pending design is claimed to be simple and cheap to manufacture. Two glass substrates enclose a layer of polymer, a set of electrodes, and micro-replicated polymer nubs. This results in a honeycomb-pattern coating that can be integrated into new windows or even rolled onto existing ones.

"Basically, one color has one charge," explains UC's Sayantika Mukherjee, who led the research. Another color has another charge, and we apply voltage to repel or attract the colors into different positions. The basic technology is not that different from what our group has previously demonstrated before in electronic display devices."

So, imagine if you could turn up the opacity for a bit of privacy, just as you would with regular blinds, without compromising on brightness. Or adjust the windows to bring feelings of warmth to your cold, bare basement, like you could with one of Philips' Hue lightbulbs. The researchers say the tunable windows can make this possible. They report that when the windows are turned milky for privacy, they can still let through 90 percent of natural light.

Furthermore, the team says the system can allow for other types of configurations. This might involve controlling both visible light and infrared heat transmission at the same time. Such a function could find use in summer months to prevent the house heating up, or in the winter months to help keep it warm and toasty.

The researchers are hopeful that the solution will lead to inexpensive window tinting that replaces conventional shades and blinds.

The research was published in the journal Applied Optics.

Source: University of Cincinnati

New algorithm paves the way for light-based computers

Optical interconnects made of silicon act as a prism to direct infrared light transferring data between computer chips

An inverse design algorithm developed by Stanford engineers enables the design of silicon interconnects capable of transmitting data between computer chips via light. The new process replaces the wire circuitry used to relay information electronically, which could lead to the development of highly efficient, light-based computers.

While the heavy lifting in computer processing takes place inside the chips, an analysis by Stanford professor of electrical engineering, David Miller, showed that up to 80 percent of a microprocessor’s power is eaten up by transmitting data as a stream of electrons over wire interconnects. Basically, shipping requires far more energy than production, and chewing through all that power is the reason laptops heat up.

Inspired by the optical technology of the internet, the researchers sought to move data between chips over fiber optic threads beaming photons of light. Besides using far less energy than traditional wire interconnects, chip-scale optical interconnects can carry more than 20 times as much data.

Silicon is transparent to infrared light in much the same way that glass is transparent to visible light, so using optical interconnects made from silicon was an obvious choice. “Silicon works,” said Tom Abate, Stanford Engineering communications director. “The whole industry knows how to work with silicon.”

But optical interconnects need to be designed one at a time, making the switch to the technology impractical for computers since such a system requires thousands of such links. That’s where the inverse design algorithm comes in.

The software provides the engineers with details on how the silicon structures need to be designed for performing tasks specific to their optical circuitry. The group designed a working optical circuit in the lab, copies were made, and all worked flawlessly despite being constructed on less than ideal equipment. The researchers cite this as proof of the commercial viability of their optical circuitry, since typical commercial fabrication plants use highly precise, state-of-the-art manufacturing equipment.

While the details of the algorithm are a tad complex, it basically works by designing silicon structures that can bend infrared light in various and useful ways, much like a prism bends visible light into a rainbow. When light is beamed at the silicon link, two wavelengths, or colors, of light split off at right angles in a T shape. Each silicon thread is minuscule: 20 could sit side-by-side within a human hair.

The optical interconnects can be constructed to direct specific frequencies of infrared light to specific locations. And it’s the algorithm that specifies how to create silicon prisms that bend infrared light by just the right amount. Once the calculation is made as to the proper shape for each specific task, a tiny barcode pattern is etched onto a slice of silicon.

Building an actual computer that uses the optical interconnects has yet to be realized, but the algorithm is a first big step. Other potential uses for the algorithm include designing compact microscopy systems, ultra-secure quantum communications, and high bandwidth optical communications.

The team describes their work in the journal Nature Photonics.

Source: Stanford University

Tools for real-time visual collaboration

 

National Science Foundation (NSF) Discoveries


Indiana and Purdue University professors design cyberlearning system to make sharing ideas easier

Concept sketches of a toy helicopter in skWiki, which supports multiple histories for an object.

June 10, 2015

Increasingly, professionals in all sorts of fields need to work together to rapidly design new and complex solutions to pressing problems. Often, these teams are geographically or temporally distributed.
"We've had great tools in the past to collaborate on texts, but we haven't had great tools for us to collaborate visually," said Kylie Peppler, an assistant professor of learning sciences at Indiana University. "As a result, much of today's innovative design work still occurs through pen-and-paper sketching, which is challenging to share and systematically build upon."

On Thursday, June 11, Peppler takes part in the Capitol Hill Maker Faire, which immediately precedes the June 12-18 Week of Making announced by the White House. Peppler will be on a panel at 2:00 p.m. titled "Making in Community."

In an earlier presentation at the National Science Foundation (NSF), Peppler and Karthik Ramani, a professor of mechanical engineering at Purdue University, described two new interactive collaboration tools they've designed--skWiki (or Sketch Wiki, pronounced 'squeaky') and Juxtapoze--that significantly improve creative collaboration in the visual design process.

For the last 18 years, Ramani has taught a class on toy design at Purdue. The university purchased its first 3-D printers in 1995 and has consistently been on the technological cutting edge. But digital collaboration among students had remained a challenge.

"Computers have been individualistic devices," said Ramani. "But a new class of technology is emerging that allows us to engage in our individual learning process and also in a collaborative process."

SkWiki is an example of such technology. It allows many participants to collaboratively create digital multimedia projects on the web using many kinds of media types, including text, hand-drawn sketches and photographs.

Whereas collaborative writing tools like Wikipedia and Google Docs don't allow for divergent thinking, skWiki does, the researchers say. Individuals can contribute elements to others' designs, or clone components from peers' work and integrate them into their own.

Ramani and Peppler released the tool to the public in March 2014, and presented the project at the 2014 Association for Computing Machinery (ACM) conference on Human Factors in Computing Systems last June.

A second tool they developed, Juxtapoze, lets students draw shapes that the program matches to existing web images, leading to serendipitous discoveries and open-ended creativity.

"Juxtapoze lowers the barriers to entry for people to make creative things in a very fluid manner," said Peppler.

A plug-in for skWiki--the first of many, according to the researchers--Juxtapoze quickly matches scribbles generated by the designer with images found on the Internet, leading to serendipitous connections: for instance, a circle may become a wheel, or a squiggly line, a snake.

With Juxtapoze, "technology starts to be a partner in the design process and gets students to think in more divergent ways," she said.

Ramani and Peppler believe the two technologies have the potential to transform the design process into a more collaborative, creative and efficient endeavor.

"We've created a platform that can change things in a strategic way to maximize learning," said Ramani. "This new class of tools allows us to have better fluidity between our ideas and our design and lets students accomplish complicated engineering concepts in a short time scale."

Beyond enabling collaboration, SkWiki and Juxtapoze have the capacity to generate useful data about the creative design process itself, a potentially rich source for research into the impact of technology and collaboration on learning processes.

"Process is more important than final product, but it's often not treated that way," Peppler said. "These tools capture and privilege the process."

 
Aaron Dubrow, NSF (703) 292-4489 adubrow@nsf.gov

Investigators
Kylie Peppler
Karthik Ramani

Related Institutions/Organizations
Purdue University
Indiana University

Related Programs
Cyber-Human Systems
Cyberlearning and Future Learning Technologies

Related Awards
#1249229 EAGER: skWiki - A Sketch-based Wiki
#0535156 3D Sketch-Based System for Conceptual Design
#1227639 DIP: V-ICED Visually-Integrated Cyber Exploratorium for Design
#1312167 AIR Option 1: Technology Translation Gesture-based free form shape modeling

Years Research Conducted
2009 - 2014

Total Grants
$1,479,490

Study shows how the US could achieve 100 percent renewable energy by 2050

 

 

A study points the way to a renewable energy-reliant United States in just 35 years (Credit: Shutterstock)

A team of researchers led by Stanford University's professor Mark Z. Jacobson has produced an ambitious roadmap for converting the energy infrastructure of the US to run entirely on renewable energy in just 35 years. The study focuses on the wide-scale implementation of existing technologies such as wind, solar and geothermal solutions, claiming that the transition is both economically and technically possible within the given timeframe.

As a starting point, the researchers looked at current energy demands on a state-by-state basis, before calculating how those demands are likely to evolve over the next three and a half decades. Splitting the energy use into residential, commercial, industrial and transportation categories, the team then calculated fuel demands if current generation methods – oil, gas, coal, nuclear and renewables – were replaced with electricity.

That already sounds like a mammoth task, but its true complexity comes to light when you consider that for the purposes of the study, absolutely everything has to run on electricity. That means everything from homes and factories to every vehicle on the road.

As it turns out, while the calculations might be complex, the results are extremely promising.

"When we did this across all 50 states, we saw a 39 percent reduction in total end-use power demand by the year 2050," said Jacobson. "About 6 percentage points of that is gained through efficiency improvements to infrastructure, but the bulk is the result of replacing current sources and uses of combustion energy with electricity."
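The quoted figures can be sanity-checked with simple arithmetic; the 33-point electrification contribution below is inferred from the two numbers in the quote, not stated separately in the article.

```python
total_reduction = 39.0   # % reduction in total end-use power demand by 2050
efficiency_part = 6.0    # percentage points from infrastructure efficiency

# The remainder comes from replacing combustion energy with electricity.
electrification_part = total_reduction - efficiency_part
print(electrification_part)  # 33.0
```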

To make the transition, each state would focus on its most readily available renewable sources. For example, some states get a lot more sunlight than others, some have a greater number of south-facing rooftops, coastal states can make use of offshore wind farms, and for others geothermal energy is a good option.

Interestingly, the plan doesn't involve the construction of new hydroelectric dams, but does call for improved efficiency of existing facilities. It would also only require a maximum of 0.5 percent of any one state's land to be covered in wind turbines or solar panels.

The team looked at all of the above before laying out a roadmap for each state to become 80 percent reliant on clean, renewable energy by 2030, with a full transition achieved by 2050.

Some states are more prepared to make the change than others. For example, Washington state already draws some 70 percent of its current electricity from hydroelectric sources, and both Iowa and South Dakota use wind power for around 30 percent of their electricity needs.

So what would all of this cost? Well, according to the research, the initial bill would be fairly hefty, but thanks to the sunlight and wind being free, things would level out in the long run, roughly equaling the cost of the current infrastructure.

"When you account for the health and climate costs – as well as the rising price of fossil fuels – wind, water and solar are half the cost of conventional systems," said Jacobson. "A conversion of this scale would also create jobs, stabilize fuel prices, reduce pollution-related health problems and eliminate emissions from the United States. There is very little downside to a conversion, at least based on this science."

Not only would it be economically viable to make the switch, but it would also have significant knock-on health benefits, as approximately 63,000 people currently die from air pollution-related causes in the US every year.

The researchers published the results of their study in the journal Energy & Environmental Science. There's also an interactive map available, detailing how each state would make use of available renewables.

Source: Stanford University

The Ultimate Guide To Solar Panels

 

 

Clouds are not a problem


Contrary to common belief, solar panels work in cloudy weather too. Although output is highest when more sunlight hits the panels, solar panels have proved to perform well even in grey weather such as the UK's.

Efficiency


Efficiency is one of the most looked-at features. A solar panel, in simple terms, consists of a group of solar cells placed next to each other on top of a backsheet and covered by glass. Solar cell efficiency refers to the amount of light a single cell can convert into electricity, and solar panel efficiency is the amount of light an entire module can convert into electricity. Solar panels currently on the market have an efficiency between 15% and 22%. You might think this is a low percentage, but the truth is, it is enough to power your home and sell some of the remaining electricity back to the grid.
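As a rough illustration of what those percentages mean in watts, the sketch below assumes a typical panel area of about 1.6 m² and the standard test irradiance of 1000 W/m² – both assumed typical values, not figures from this guide.

```python
# Rough sketch: a panel's rated power from its efficiency.
STC_IRRADIANCE = 1000.0  # W/m^2 under standard test conditions (assumed)
PANEL_AREA = 1.6         # m^2, a typical residential panel (assumed)

def rated_power_watts(efficiency):
    """Power a panel of the given efficiency delivers under full test sun."""
    return STC_IRRADIANCE * PANEL_AREA * efficiency

for eff in (0.15, 0.22):
    print(f"{eff:.0%} efficient panel ≈ {rated_power_watts(eff):.0f} W")
```

So the market's 15%–22% range corresponds to roughly 240 W to 350 W per panel under these assumptions.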

Grid-connected systems


When you are choosing a solar panel system, you will see there are lots of options. One important thing to know is that there are grid-connected and off-grid solar systems. The first option is good if you want to benefit from the Feed-in Tariff and to have access to electricity during the night. Off-grid systems are more suitable for cottages with no access to the national grid, and are most often paired with batteries to power the house when there is no sun.

Heating or electricity?


Although “solar panels” is the most common term for solar power equipment, solar panels (or solar photovoltaics) are different from solar water heating systems (or solar thermal systems). While solar panels generate electricity for household use, solar thermal systems heat water, which is then available for heating purposes in the house.

Inverter


A solar inverter is also called “the brain” of the solar system. This device converts the energy captured during the day from DC to AC, so it can be used for consumption. The inverter combines digital control technology with an efficient power conversion architecture to make the most of the energy the panels collect. You can choose between grid-tie and off-grid inverters. “Smart inverters” are being developed with forecasting algorithms, load/demand planning and so on. In addition, smart inverters can be “told” whether to use photovoltaic energy or not, to send power to batteries for storage, or to keep your home powered in case of a blackout.

Ja, Panasonic, Samsung, REC, Sunpower


Are just a few of the brands that manufacture solar panels. The major players in the market are competing to bring consumers great features such as double-sided panels or anti-reflective coatings. A large amount of resources is invested every year in research and development. It is good to have an idea of the different brands on the market, and perhaps read a short description of the differences and advantages of each. You can find a lot of detailed information here: Buyers-guide-to-solar-panels

Kilowatts


Kilowatt-hours (kWh) appear on your monthly electricity bill, indicating the amount of electricity you have used. It is always a good starting point to gather your energy bills from the last year, so you and the solar supplier have a good understanding of your energy use and costs. From this step you can start planning what kind of panels you need, and how many. Check with the manufacturer to find out how much power (kW) you can expect the panels to generate.
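As a hypothetical sizing sketch, the snippet below assumes each installed kW yields roughly 900 kWh per year – a ballpark figure, not one from this guide; an installer would use local irradiance data instead.

```python
# Hypothetical sizing sketch: estimate system size from annual usage.
ANNUAL_YIELD_PER_KW = 900.0  # kWh generated per installed kW per year (assumed)

def system_size_kw(annual_use_kwh, fraction_to_cover=1.0):
    """System size (kW) needed to cover a fraction of annual usage."""
    return annual_use_kwh * fraction_to_cover / ANNUAL_YIELD_PER_KW

# A household using 3600 kWh/year that wants full coverage:
print(round(system_size_kw(3600), 1))  # 4.0 (kW)
```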

Long term savings


Solar panels are said to be expensive. And it’s true that the initial costs are considerable (although technological development has led to a sharp drop in the cost of solar devices). However, as with any other big investment, one should assess the costs and benefits over the long term. The cost of a solar panel installation can range from £2,500 to £8,000 depending on the type of system chosen. If you are eligible for the Feed-in Tariff, the system will pay for itself after approximately 8 years (remember that during all this time you will enjoy free electricity).
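The payback arithmetic itself is simple; the £6,000 cost and £750 annual benefit below are assumed round numbers chosen to sit inside the guide's £2,500–£8,000 range and match the roughly 8-year figure.

```python
def payback_years(system_cost, annual_benefit):
    """Years until bill savings plus Feed-in Tariff income repay the cost."""
    return system_cost / annual_benefit

# e.g. a £6,000 system returning £750/year (assumed figures):
print(payback_years(6000, 750))  # 8.0
```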

Monocrystalline solar cells


Made from the purest form of silicon, these cells deliver the highest efficiencies. One crystal of silicon is used to make each cell. You can recognize them by the uncovered gaps at the four corners of the cells. Apart from monocrystalline cells, you can also choose polycrystalline or thin-film panels.

Normal Operating Test Conditions


(NOCT) are used to measure a solar panel's output under “real world” conditions, indicating how much power you can expect the panels to generate on a given day. Initially, panels are tested under Standard Test Conditions (STC), a lab test in which conditions are optimal. However, a solar system will produce less energy when faced with changes in weather, shading and irradiance. NOCT assumes the following: irradiance 800 W/㎡, ambient temperature 20°C, wind speed 1 m/s, nominal cell temperature 45.7°C (±2°C). So when you read the data sheet of a solar panel, look for the output under NOCT.
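A common way to estimate NOCT output from the STC rating is to derate for the lower irradiance and the higher cell temperature. The -0.4%/°C temperature coefficient and the 25°C STC cell temperature below are typical assumed values, not figures from this guide.

```python
# Rough derating from STC to NOCT output.
def noct_power(stc_power_w,
               noct_irradiance=800.0,   # W/m^2, per NOCT
               stc_irradiance=1000.0,   # W/m^2, per STC
               cell_temp_noct=45.7,     # °C, per NOCT
               cell_temp_stc=25.0,      # °C, per STC (assumed)
               temp_coeff=-0.004):      # output change per °C (assumed typical)
    irradiance_factor = noct_irradiance / stc_irradiance
    temp_factor = 1 + temp_coeff * (cell_temp_noct - cell_temp_stc)
    return stc_power_w * irradiance_factor * temp_factor

# A panel rated 300 W at STC under NOCT conditions:
print(round(noct_power(300)))  # 220 (W)
```

This is why the NOCT number on a data sheet is noticeably lower than the headline STC rating.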

Photovoltaic


"Photo" comes from the Greek for "light", and "voltaic" means electric, from the name of the Italian physicist Volta. Solar panels are often also called photovoltaics, or solar PV.

Rent a roof scheme (or better yet: free solar panels)

Some installers run a business model that consists of installing a free solar panel system on your roof. In this scenario, you rent out your roof in exchange for the free (and clean) electricity generated by the panels. The catch is that the excess electricity the panels produce will be sold to the grid and the money cashed in by the installer. If you decide to get the free panels, you should know that you and the installer will enter a 25-year contract. Also, you need to own your home, and your roof must usually be 24 square meters facing due south, or within 35 degrees of south.

Technical information


The solar cell is the main component of photovoltaic technology, and solar PV systems use these cells to convert solar radiation into electricity. Solar cells consist of one or two layers of a semiconductor; the most common material used is silicon, an abundant element most commonly found in sand. Solar cells can be wired together to form a module (a solar panel), modules can be connected together to form an array, and more than one array connected together forms a solar system. A solar panel can weigh from 15 kg to 26 kg, with dimensions ranging from 99x140 cm to 105x166 cm. You should also know that the maximum output (expressed in watts) is the amount of power a solar panel will generate.

Unlimited energy


The big advantage of solar energy over other sources is that the energy of the sun is unlimited. Solar energy is not only a clean and free source of energy, but also a way to make money. Installing solar panels on your roof will therefore reduce your carbon footprint as well as your electricity bills, since you will use less energy from your supplier. In addition, if you are eligible for the Feed-in Tariff, you can even get paid for the green electricity you produce.

V is for very easy!


Switching to solar energy is very easy once you find the right installer, and finding the right installer is not complicated either. As previously mentioned, you can get free quotes and compare offers from different installers. You can also ask for references if you have neighbours who have already taken the lead. Once you have the installer, it’s just a matter of making the investment and enjoying the free, green electricity from your panels for decades to come.

Warranty of solar panels


Typically, most manufacturers offer 2 kinds of warranty:

  • one for the product
  • one for the output

It is common to get a 10-year materials and workmanship warranty. When it comes to output, most manufacturers will cover 25 years. You should check with the manufacturer how much of the output they will actually guarantee. For example, you could expect 97% of the maximum output in the first year of operation, gradually decreasing by 0.7 percentage points every year to reach around 80% in year 25.
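The output schedule above works out as a simple linear decline:

```python
# Warranted output per the figures above: 97% of rated output in year 1,
# declining by 0.7 percentage points per year thereafter.
def warranted_output_pct(year):
    return 97.0 - 0.7 * (year - 1)

print(warranted_output_pct(1))             # 97.0
print(round(warranted_output_pct(25), 1))  # 80.2, i.e. around 80% in year 25
```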

X-ample


Xavier has a 4 kW solar PV installation at his bungalow in Bournemouth. The 16-panel system generates 3,500 kWh of green energy per year. After the first month, he started to save £50 on his electricity bill. After 3 months, the first cheque came in with payments from the Feed-in Tariff – this time £200. Xavier now covers 80% of the electricity he consumes with the help of solar panels. The payback period for this project is 6.5 years, but payments from the FIT will continue for the next 20 years.
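Xavier's figures can be cross-checked with quick arithmetic; note that the implied system cost is inferred from the stated payback period and is not given in the text.

```python
monthly_bill_saving = 50    # £ saved per month on the electricity bill
quarterly_fit_cheque = 200  # £ per Feed-in Tariff payment

annual_benefit = monthly_bill_saving * 12 + quarterly_fit_cheque * 4
implied_cost = annual_benefit * 6.5  # from the stated 6.5-year payback

print(annual_benefit)  # 1400 (£ per year)
print(implied_cost)    # 9100.0 (£, implied installation cost, inferred)
```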

You should get solar panels!


There’s not much to explain here. It is easy, it is good for the environment, and it is also good from a financial point of view.