Monday, May 26, 2014

Call Us Scientific Home Journal

 

By Ricki Rusting | August 28, 2013

The views expressed are those of the author and are not necessarily those of Scientific American.

 


Notes and Queries logo

Credit: Scientific American

I love Scientific American’s archive not only for its record of scientific discovery but also for the surprises I invariably find there. Who knew that beyond covering cutting-edge research, Scientific American of the 1800s offered household hints and even recipes?

A department called “Notes and Queries” offered tips and answers to readers’ questions. Among the useful hints: tin roofs need to be kept well painted; “Ladies may render their gauzy dresses somewhat incombustible by mixing a little pulverized alum in the starch when they are ‘done up’”; artesian well water is healthy for bathing.

One of my favorite finds, though, is “One Hundred Choice Household Receipts,” published February 22, 1879, in Supplement No. 164. The article presents “practical and economical” recipes, several “published for the first time.” The recipes run the gamut from bread to side dishes to condiments and sauces, but the lion’s share give directions for making sweets. Naturally, I had to try one.

For modern-day cooks, the recipes pose multiple challenges. They leave out the preferred size and shape of baking pans and the length of time to keep the concoction in the oven. They also sometimes measure sugar and milk and such in terms of “teacups,” and mete out butter “the size of an egg,” terms apparently not uncommon at the time.

Directions for “Sally Lunn,” for instance, read:

“One quart of flour, 3 tablespoons yeast, 3 eggs, 1 saltspoon salt, butter the size of an egg. Make up with new milk into a tolerably stiff batter; set it to rise, and when light pour into a mould, and set to rise again as light bread. Bake quickly.”

And for “Plum Pudding”:

“One teacup molasses, one cup of sweet milk, one teaspoon soda, one tablespoon butter, one pint raisins, chopped; flour enough to make as thick as soft gingerbread, one teaspoon of all kinds of spices. Sauce.–One cup powdered sugar, half cup butter, two eggs well beaten; just before served, one tumbler [1/2 pint] boiling currant wine.”

A quick look at Google reveals conflicting measures for a teacup, but it seems to refer to about four to six ounces. Butter-the-size-of-an-egg equals ¼ cup, some sources say. A saltspoon is ¼ teaspoon.
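For anyone cooking along, here is a minimal sketch in Python of those conversions, assuming the midpoint of the conflicting teacup figures; none of these constants are authoritative, they simply encode the approximations above:

    # Rough modern equivalents for the archaic measures discussed above.
    # These constants encode the article's approximations, nothing more.
    OUNCES_PER_TEACUP = 5.0        # midpoint of the conflicting 4-6 oz figures
    CUPS_PER_EGG_OF_BUTTER = 0.25  # "butter the size of an egg" ~ 1/4 cup
    TSP_PER_SALTSPOON = 0.25       # a saltspoon ~ 1/4 teaspoon

    def teacups_to_cups(teacups: float) -> float:
        """Convert teacups to modern US cups (8 fluid ounces each)."""
        return teacups * OUNCES_PER_TEACUP / 8.0

    # The plum pudding's "one teacup molasses" is therefore roughly:
    print(f"1 teacup ≈ {teacups_to_cups(1):.2f} US cups")  # ~0.62 cups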

Being a fan of both coffee and cake, I tried my hand at “Coffee Cake”:

“One cup sugar, one cup molasses, one cup butter, one cup strong coffee, one teaspoon cinnamon, a grated nutmeg, one cup seeded raisins, two small teaspoons soda; stir in flour until the mixture will drop from the spoon. This receipt will make two cakes.”

I hit some bumps pretty quickly. I didn’t know exactly how much nutmeg was called for (I guessed two or three teaspoons). Other unknowns: whether to use sweet or salted butter and what it meant to say “the mixture will drop from the spoon.” The goop seemed to “drop from the spoon” even before I added any flour. Also, what kind of pan was I to use, and how long was I to bake the cake, and at what temperature?

Meanwhile, I did not want to use a pound of butter, so I halved the recipe. And my tin of nutmeg was probably as old as the article, so I substituted fresher “cake spice” (cinnamon, anise, nutmeg, allspice, ginger and cloves) in place of the cinnamon and nutmeg (1½ teaspoons). I also chose salted butter over sweet and mixed in 1½ cups of flour. I stirred the batter until it resembled whipped cake frosting. I poured it into a greased 8- by 8-inch square pan and baked it at 350 degrees for 45 minutes.

cake sunken in the middle

The cake sank in the middle. Credit: Ricki Rusting

The square pan was a bad idea; the cake sank in the middle. A tube pan would probably have been better.

The cake tasted good to me, but my husband declared it too strongly spiced. The top had a pleasant sweet crunch to it, but the rest of the cake, outside of the sunken part, was dry and crumbly; I must have used too much flour, or maybe I baked the cake for too long, or both. A tester came out clean at 30 minutes, but I was hoping that longer baking would induce the center to rise. It didn’t.

I would love to hear from anyone else who tries the recipe. What works, what doesn’t? Please comment.

Perhaps the most delightful recipe I’ve come across is for lobster salad; the instructions, published in the August 23, 1862 issue, take the form of a poem (below). The editors credit the Rev. Mr. Barham, author of “The Ingoldsby Legends,” for the recipe, which they claim works better without the onion and with a little bit of sugar. They also suggest starting with a lobster weighing two pounds in its shell and a large head of lettuce.

Lobster Salad

colored photo of lobster

Credit: Wikimedia Commons

“Two large potatoes, passed through kitchen sieve,
Unwonted softness to the salad give;
Of ardent mustard add a single spoon,
Distrust the condiment which bites too soon;
But deem it not, though man of herbs, a fault
To add a double quantity of salt;
Three times the spoon with oil of Lucca crown;
And once with vinegar procured from town;
True flavor needs it, and your poet begs
The powdered yellow of two well-boiled eggs;
Let onion atoms lurk within the bowl,
And scarce suspected, animate the whole;
And lastly, on the flavored compound toss
A magic teaspoon of anchovy sauce;
Then, though green turtle fail, though venison’s tough,
And ham and turkey are not boiled enough,
Serenely full the epicure may say,
‘Fate cannot harm me, I have dined to-day.’”

Is All the Universe from Nothing?

 

By Richard Yonck | May 22, 2014

The views expressed are those of the author and are not necessarily those of Scientific American.

 


In March, a team of researchers based in Antarctica announced they’d detected gravitational waves, faint echoes from the first moments of the Big Bang. This discovery has enormous implications for cosmology, the world of physics and even our understanding of the future of our universe. My recent blog post about the BICEP2 project explored some of these, as does my upcoming article about cosmic inflation in the July-August issue of The Futurist.

The expansion of the universe. (Source: NASA)

These writings gave me a lot to think about regarding the origins of our universe. Invariably, when explaining the early evolution of the cosmos, one particular question comes up: where did the singularity that started the Big Bang come from? For some time, many physicists and cosmologists have said it could be possible for our universe to have actually started from nothing – as wild and counterintuitive as that sounds. But without proof this seems like a statement of faith, impossible to prove or disprove and therefore outside the purview of true scientific discussion. Ever since Popper, we’ve said that falsifiability is the demarcation between what is scientific and what is not. It felt like this might be the point where the scientific method would have to give way to the origin stories of myth.

Or perhaps not.

Last month saw the publication of a paper that may be as important to our understanding of the Big Bang as the detection of gravitational waves. A team from the Wuhan Institute of Physics and Mathematics in China has produced the first rigorous mathematical proof that the Big Bang could have been spontaneously generated from nothing. The Wuhan team, led by Qing-yu Cai, developed new solutions to the Wheeler-DeWitt equation, a mid-20th-century attempt to combine quantum mechanics and general relativity.
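For context, the Wheeler-DeWitt equation is a constraint equation: the total Hamiltonian operator annihilates the wavefunction of the universe. In schematic textbook form (this is the standard general form, not a reproduction of the Wuhan team's specific solutions):

$$\hat{\mathcal{H}}\,\Psi[h_{ij}] = 0$$

Here $\Psi$ is the wavefunction of the universe, defined over possible spatial geometries $h_{ij}$, and $\hat{\mathcal{H}}$ is the Hamiltonian constraint operator. Notably, no external time variable appears in the equation.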

A map of cosmic microwave background radiation. (Source: NASA)

According to Heisenberg’s uncertainty principle, quantum fluctuations in the metastable false vacuum – a state devoid of space, time or matter – can give rise to virtual particle pairs. Ordinarily these pairs self-annihilate almost instantly, but if these virtual particles separate immediately, they can avoid annihilation, creating a true vacuum bubble. The Wuhan team’s equations show that such a bubble has the potential to expand exponentially, causing a new universe to appear. All of this begins from quantum behavior and leads to the creation of a tremendous amount of matter and energy during the inflation stage. (Note that as stated in this paper, the metastable false vacuum has “neither matter nor space or time,” but is a form of wavefunction referred to as “quantum potential.” While most of us wouldn’t be inclined to call this “nothing,” physicists do refer to it as such.)
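The fluctuation argument leans on the energy-time form of the uncertainty principle, which in its usual heuristic statement reads:

$$\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}$$

A virtual pair that borrows energy $\Delta E$ from the vacuum can ordinarily persist only for a time of order $\hbar/\Delta E$ before annihilating; the paper's scenario is the exceptional case in which the pair separates fast enough to survive as a true vacuum bubble.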

This description of exponential growth of a true vacuum bubble corresponds directly to the period of cosmic inflation resulting from the Big Bang. According to this proof, the bubble even stops expanding – or else it may continue to expand at a constant velocity – once it reaches a certain size. Nevertheless, this is a very different version of inflation than those proposed by Guth, Linde and others, in that it doesn’t rely on scalar fields, only quantum effects. Still, this work dovetails well with that of the BICEP2 team, both discoveries having significant implications for our understanding of the universe and our future should they stand up to further inquiry.

A map of cosmic microwave background radiation. (Source: European Space Agency)

Given the quantum behavior of virtual particles in a vacuum as put forth in this paper, it’s reasonable to assume this hasn’t happened only this once, but rather many or potentially even an infinite number of times. The idea of a multitude of universes being generated by processes similar to those that gave rise to our own is not new. But this is the first time we’ve actually identified the mechanisms that may have been involved. When I discussed this with one of the authors, Qing-yu Cai, he said he thinks their work “supports the multiverse concept.” Whether this process would result in the exact same physical laws that we see in our own universe remains to be determined, since according to these equations only limited conditions could result in an exponentially expanding true vacuum bubble.

Another idea that’s been discussed in the past is whether or not we could ever create new universes ourselves, perhaps using something like the Large Hadron Collider (LHC). However, as Qing-yu Cai observed, “space-time of our universe is a whole, it cannot be divided into small parts arbitrarily, even at LHC.” Therefore, “it seems impossible to create new universes ourselves.”

Ultimately, this mathematical proof needs to be checked out by others and ideally put to some yet-to-be-determined tests. In the end, the work may or may not be accepted. That is, after all, how the scientific method operates. But if this proof should stand up to scrutiny, it will most certainly give us considerable new insights into the mechanisms that gave birth to our cosmos. The news of this past month demonstrates that the field of cosmology remains vibrant, with new ideas and discoveries regularly being made. Our universe and the physics at its foundation are incredibly complex and will continue to yield new knowledge about our past, present and future for a long time to come. Perhaps until the end of time.

Sources and Further Reading:

He, Dongshan, Dongfeng Gao and Qing-yu Cai. “Spontaneous Creation of the Universe from Nothing.” April 4, 2014.

Yonck, Richard. “A Window on the Universe’s Distant Past and Future.” March 17, 2014.

Hawking, Stephen. “The Origin of the Universe” (text). Berkeley lecture, March 13, 2007.

“Inflation in Cosmology.” Wikipedia.

1915 Warning: Beware of Used-Car Salesmen

 

By Ricki Rusting | December 30, 2013   

The views expressed are those of the author and are not necessarily those of Scientific American.

 


old car with doors open

Credit: Scientific American

Suspicion of used-car dealers has a long history in the U.S. if an article in a 1915 supplement to Scientific American is any guide. The story, “Buying a Second-Hand Automobile,” by Victor W. Pagé, runs for more than 3,000 words, recalling one horror story in detail and giving loads of advice on how to avoid being swindled.

“Unfortunately,” Pagé notes, “there are a number of dealers who do not hesitate to palm off any car they may have on hand without making any repairs of a permanent nature. These gentry also are prone to making misleading statements regarding the date of manufacture, power and condition. As they sell on a commission they cannot afford to make repairs, but as false testaments are cheap, plenty of claims are made that will not be supported by the performance of the machine.”

In his star example, Pagé recalls a car whose engine began malfunctioning within a week of the vehicle’s purchase. When he removed the cylinders, he found “that two of these had been badly scored by running the pistons at some time or other without adequate lubrication. In order to compensate for the lost compression, due to the scratches, a metal plate about ¼ inch thick had been riveted on each piston working in the defective cylinders.”

diagrams of motors

Credit: Scientific American

And that was just part of the car’s woes and the tricks the dealer had used to make it salable.

“The transmission system,” Pagé recounts, “was but little better. After removing a thick grease, impregnated with what appeared to be wood fibers, from the gearset, it was seen that the intermediate and slow speed gears were so badly worn and burned that new ones had to be obtained. In addition to this, the badly worn cone clutch facing had been made to hold by driving in rubber bands between the cone and friction material at all points between the rivets where the leather could be pried up for their insertion.”

When taking the car out for a spin, moreover, “the driver had avoided using the gears as much as possible, doing all the driving on the direct drive or high speed which did not call for rotation of any gears except the constant mesh members, which seldom wear enough to cause noise because they are not clashed into engagement as the shifting members are.”

Among the tips Pagé offers: Examine the engine for superficial defects, and, after inspecting many other things, have the seller take you for a drive. “Before going out, insist on having the full number of passengers the car is supposed to carry. Occupy a seat convenient to the driver so you can watch the way the car is controlled.” Also, “pick your own route for a demonstration, taking a variety of roads,” and have a demonstration of at least 50 miles.

Finally, “do not buy a big car because it is cheap and appears to be a lot for the money.” And “do not buy some millionaire’s discarded plaything unless you have enough to maintain it, in which case you will not need to buy a second-hand car.”

Police Methods For Destroying Drug Evidence Vary

 

By The Associated Press

May 25, 2014, 12:43 PM ET

COLUMBUS, Ohio (AP) — Cardboard boxes and paper envelopes packed with marijuana, cocaine and other drugs line warehouse-style shelves at the State Highway Patrol's Ohio crime laboratory, where seizures large and small are stored for safekeeping for sometimes years until they're no longer needed as evidence.

Then comes a tricky task: How do you destroy it all?

Most often by incineration, but where and how varies, according to police spokesmen and officers who oversee evidence. Arranging evidence burns can be complicated because the rules differ everywhere, allowing more leeway in some places than others.

Police have used crematories, foundries, hospital incinerators or specialized businesses — and even torched drugs in 55-gallon drums.

Troopers in Ohio used to destroy thousands of pounds of seized drugs — for free — at factories where they could be vaporized in molten steel. But the companies worried about it potentially affecting the quality of their product and producing emissions: the kind that create environmental concerns and the kind that could skew employee drug tests, said Capt. David Dicken, a director at the crime lab.

"If we're throwing 940 pounds of marijuana into the vat, you know, it flares up," he said.

To maintain a dedicated drug destroyer, the agency switched last year to a paid contract with a federally permitted company that handles hazardous materials.

Federal standards regulate waste incinerators that burn pharmaceuticals, but those used only for contraband are exempt from those rules, said Dina Pierce, a spokeswoman for the Ohio Environmental Protection Agency.

Various local environmental and safety rules can apply, creating a complicated regulatory picture for evidence-management officers sorting out what destruction methods are allowed, said Joseph Latta, an instructor and executive director at the Burbank, California-based nonprofit International Association for Property and Evidence Inc.

"During the class, we say, 'Here are the ways that we've heard of. Here are the legal ways. Here are some maybe unorthodox ways that we've had to do," Latta said.

U.S. Customs and Border Protection, which seizes millions of pounds of illegal narcotics, pays contractors to destroy the drugs or turns them over to other agencies, such as the Drug Enforcement Administration, said Jaime Ruiz, a CBP spokesman.

DEA destroys marijuana at EPA-approved incinerators because those seizures are generally bulkier, and it burns other contraband drugs at its labs, said Special Agent Rich Isaacson, spokesman for the agency's Detroit division.

In California, where environmental regulations tend to be stricter, the legal option is usually limited to EPA-approved energy-plant incinerators that operate under emissions and security standards, Latta said. But reaching those sites could be impractical for smaller, more rural law enforcement agencies that take in lesser amounts of drugs, he said, acknowledging some "have probably taken shortcuts."

It can be a dilemma for officers who must either arrange for destruction or allow drug evidence to accumulate, which risks making the storage area a potential theft target, Latta said.

Other jurisdictions have more choices: State police in the Detroit area use a metal forging plant's high-temperature furnace, but smaller posts use burn barrels. Indiana State Police have similar options. Pennsylvania State Police handle drug destruction internally, such as with a small incinerator. New York State Police use an outside contractor they won't disclose.

In West Virginia, some authorities may use fire pits, state police Capt. Joe White said.

Drug destruction arrangements with steel facilities still work for some agencies, including Columbus, Ohio, police and the FBI's Cleveland division. Cincinnati police are using that option for the first time this year because the university facility they used in the past stopped providing the service, department spokeswoman Sgt. Julian Johnson said.

Especially when drug-destruction work is pro bono, police tend to be tight-lipped about details to protect security, the businesses involved and sometimes the arrangements themselves.

"The word gets out there that this facility does it, then 50 other agencies want to go there ... and that gets to be too much for that place to handle, and then you lose that place. And then you've got to go find another one," said Sgt. Jeff Yaney, who oversees evidence for Dayton, Ohio, police.

Yaney wouldn't divulge where the department destroys drugs. It used to take advantage of the incinerator that burned classified materials at nearby Wright-Patterson Air Force Base, but a few years ago the base replaced the incinerator with a shredder rather than pay for changes to meet environmental rules.

Representatives for two federally permitted hazardous waste incinerators in Ohio, Ross Environmental Services Inc. near Elyria and the patrol's vendor, Heritage Environmental Services in East Liverpool, said they provide a more controlled, secure destruction process with environmental protections and benefits not necessarily found at other types of facilities.

For that, though, agencies generally must pay. Heritage has destroyed about 10,000 pounds of drugs and paraphernalia since July for the patrol, at a cost of roughly $22,000, said Lt. Craig Cvetan, a patrol spokesman. The funding came from seized drug money, which is also used for drug enforcement and drug-abuse prevention programs.

Associated Press writers Marc Levy in Harrisburg, Pennsylvania, Ed White in Detroit and Michael Virtanen in Albany, New York, contributed to this report.

GE’s $1 Billion Software Bet

 

To protect its lucrative business servicing machines, GE turns to the industrial Internet.

To understand why General Electric is plowing $1 billion into the idea of using software to transform industry, put yourself in the shoes of Jeff Immelt, its CEO.

As recently as 2004, GE had reigned as the most valuable company on the planet. But these days, it’s not even the largest in America. Apple, Microsoft, and Google are all bigger. Software is king of the hill. And, as Immelt came to realize, GE is not that great at software.

Internal surveys had discovered that GE sold $4 billion worth of industrial software a year—the kind used to run pumps or monitor wind turbines. That’s as much as the total revenue of Salesforce.com. But these efforts were scattered and not always state-of-the-art. And that gap was turning dangerous. GE had always believed that since it knew the materials and the physics of its jet engines and medical scanners, no one could best it in understanding those machines. But companies that specialize in analytics, like IBM, were increasingly spooking GE by figuring out when big-ticket machines like a gas turbine might fail—just by studying raw feeds from gauges or vibration monitors.

This was no small thing. GE sells $60 billion a year in industrial equipment. But its most lucrative business is servicing the machines. Now software companies were looking to take a part of that pie, to get between GE and its largest source of profits. As Immelt would later say, “We cannot afford to concede how the data gathered in our industry is used by other companies.”

In 2012, GE unveiled its answer to these threats, a campaign it calls the “industrial Internet.” It included a new research lab across the bay from Silicon Valley, where it has hired 800 people, many of them programmers and data scientists.

“People have told companies like GE for years that they can’t be in the software business,” Immelt said last year. “We’re too slow. We’re big and dopey. But you know what? We are extremely dedicated to winning in the markets we’re in. And this is a to-the-death fight to remain relevant to our customers.”

Peter Evans, then a GE executive, was given the job of shaping what he calls the “meta-narrative” around GE’s big launch. Industrial companies, which prize reliability, aren’t nearly as quick to jump for new technology as consumers. So GE’s industrial-Internet pitch was structured around the huge economic gains even a 1 percent improvement in efficiency might bring to a number of industries if they used more analytics software. That number was fairly arbitrary—something safe, “just 1 percent,” recalls Evans. But here Immelt’s marketing skills came into play. “Not ‘just 1 percent’,” he said, flipping it around. GE’s slogan would be “The Power of 1 Percent.”

In a stroke, GE had shifted the discussion about where the Internet was going next. Other companies had been talking about connecting cars and people and toasters. But manufacturing and industry account for a giant slice of global GDP. “All the appliances in your home could be wired up and monitored, but the kind of money you make in airlines or health care dwarfs that,” Immelt remarked.

There is another constituency for the campaign: engineers inside GE. To them, operational software isn’t anything new. Nor are control systems—even a steam locomotive has one. But here Immelt was betting they could reinvent these systems. “You do embedded systems? My God, how boring is that? It’s like, put a bullet in your head,” says Brian Courtney, a GE manager based in Lisle, Illinois. “Now it’s the hottest job around.” At the Lisle center, part of GE’s Intelligent Platforms division, former field engineers sit in cubicles monitoring squiggles of data coming off turbines in Pakistan and oil rigs in onetime Soviet republics. Call this version 1.0 of the industrial Internet. On the walls, staff hang pictures of fish; each represents a problem, like a cracked turbine blade, that was caught early. More and more, GE will be using data to anticipate maintenance needs, says Courtney.

A challenge for GE is that it doesn’t yet have access to most of the data its machines produce. Courtney says about five terabytes of data a day comes into GE. Facebook collects 100 times as much. According to Richard Soley, head of the Industrial Internet Consortium, a trade group GE created this year, industry has been hobbled by a “lack of Internet thinking.” A jet engine has hundreds of sensors. But measurements have been collected only at takeoff, at landing, and once midflight. GE’s aviation division only recently found ways to get all the flight data. “It sounds crazy, but people just didn’t think about it,” says Soley. “It’s like the Internet revolution has just not touched the industrial revolution.”

GE is trying to close that gap. Its software center in San Ramon created an adaptation of Hadoop, big-data software used by the likes of Facebook. GE also invested $100 million in Pivotal, a cloud computing company. On the crowdsourcing site Kaggle, it launched public competitions to optimize algorithms for routing airline flights, which can save fuel.

All this could sound familiar to anyone who works with consumer Internet technology, acknowledges Bernie Anger, general manager of GE’s Intelligent Platforms division. But he says GE is thinking about what to do next to use connectivity, and more computers, to inject “new behavior” into machines. He gives the example of a field of wind turbines that communicate and move together in response to changes in wind. “We are moving into big data, but it’s not because we want to become Google,” he says. “It’s because we are dramatically evolving manufacturing.”

Tiny, cheap, and dangerous: Inside a (fake) iPhone charger

 

Thoughts on the death of Ma Ailun

According to reports, a woman in China was tragically electrocuted using her iPhone while it was charging. This seems technically plausible to me if she were using a cheap or counterfeit charger like the one I describe below. There's 340 volts DC inside the charger, which is enough to kill. In a cheap charger, there can be less than a millimeter separating this voltage from the output, a fraction of the recommended safe distance. These chargers sometimes short out (picture), which could send lethal voltage through the USB cable. If the user closes the circuit by standing on a damp floor or touching a grounded metal surface, electrocution is a possibility. If moisture condenses in the charger (e.g. in a humid bathroom), shorting becomes even more likely. Genuine Apple chargers (and other brand-name chargers) follow strict safety regulations (teardown), so I would be surprised if this electrocution happened with a name-brand charger. Since counterfeits look just like real chargers, I'll wait for an expert to determine if a genuine Apple charger was involved or not. I've read suggestions that the house wiring might have been to blame, but since chargers are typically ungrounded I don't see how faulty house wiring would play a role. I should point out that since there are few details at this point, this is all speculation; it's possible the phone and charger weren't involved at all.

I recently wrote a popular article on the history of computer power supplies, which led to speculation on what's inside those amazingly small one-inch cube USB chargers sold by Apple, Samsung, RIM, and other companies. In the interest of science, I bought a cheap no-name cube charger off eBay for $2.79, and took it apart. It's amazing that manufacturers can build and sell a complex charger for just a few dollars. It looks a lot like a genuine Apple charger and cost a lot less. But looking inside, I found that important safety corners were cut, which could lead to a 340 volt surprise. In addition, the interference from a cheap charger like this can cause touchscreen malfunctions. Thus, I recommend spending a few dollars more to get a brand-name charger.

A one-inch USB charger designed for the iphone4

The no-name charger I bought is just over an inch in length, excluding the European-style plug. The charger is labeled "FOR iphone4. Input 110-240V 50/60Hz Output 5.2V 1000mA, Made in China." There are no other markings (manufacturer, serial number, or safety certifications). I opened up the charger with a bit of Dremel-ing. One surprise is how much empty space is inside for a charger that's so small. Apparently the charger circuit is designed for a smaller US-style plug, and the extra space with a European plug is unused. Since the charger accepts 110 to 240V input, the same circuit can be used worldwide.[1]

Inside a USB phone charger

The power supply itself is slightly smaller than one cubic inch. The picture below shows the main components. On the left is the standard USB connector. Note how much room it takes up - it's not surprising devices are moving to micro-USB connectors. The flyback transformer is the black and yellow component; it converts the high-voltage input to the 5V output. In front of it is the switching transistor. Next to the transistor is a component that looks like a resistor but is an inductor filtering the AC input. On the underside, you can see the capacitors that filter the output and input.

Internals of a USB phone charger

The power supply is a simple flyback switching power supply. The input AC is converted to high-voltage DC by a diode, chopped into pulses by the power transistor and fed into the transformer. The transformer output is converted to low voltage DC by a diode, filtered, and fed out through the USB port. A feedback circuit regulates the output voltage at 5 volts by controlling the chopping frequency.
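As a sanity check on the voltages mentioned here and in note [4], the DC bus sits near the peak of the AC waveform, about √2 times the RMS input. A minimal sketch:

    import math

    def peak_dc_voltage(v_rms: float) -> float:
        """Approximate DC bus voltage: the input capacitor charges to the
        peak of the rectified AC waveform, sqrt(2) times the RMS voltage."""
        return math.sqrt(2) * v_rms

    print(round(peak_dc_voltage(240)))  # ~339 V on 240V mains: the "340 volts DC"
    print(round(peak_dc_voltage(120)))  # ~170 V on 120V US mains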

 

Detailed explanation

In more detail, the power supply is a self-oscillating flyback converter, also known as a ringing choke converter.[2] Unlike most flyback power supplies, which use an IC to control the oscillation, this power supply oscillates on its own through a feedback winding on the transformer. This reduces the component count and minimizes cost. A 75-cent controller IC[3] would be a huge expense for a $2.79 power supply, so they used a minimal circuit instead.

The circuit board inside a tiny USB charger


The above picture shows the circuit components; the red boxes and italics indicate components on the other side. (Click for a larger picture.) Note that most of the components are tiny surface-mounted devices (SMD) and are dwarfed by the capacitors. The green wires supply the input AC, which is filtered through the inductor. The high-voltage 1N4007 (M7) input diode and the 4.7µF input capacitor convert the AC input to 340 volts DC.[4] The MJE13003 power transistor switches the power to the transformer at a variable frequency (probably about 50kHz). The transformer has two primary windings (the power winding and a feedback winding), and a secondary winding. (The transformer and inductor are also known as "the magnetics".)

On the secondary (output) side, the high-speed SS14 Schottky diode rectifies the transformer output to DC, which is filtered by the 470µF output capacitor before providing the desired 5V to the USB port. The two center pins of the USB port (the data pins) are shorted together with a blob of solder, as will be explained below.

A simple feedback circuit regulates the voltage. The output voltage is divided in half by a resistor divider and compared against 2.5V by the common 431 voltage reference device. The feedback is passed to the primary side through the 817B optoisolator. On the primary side, the feedback oscillation from the feedback transformer winding and the voltage feedback from the optoisolator are combined in the 2SC2411 control transistor. This transistor then drives the power transistor, closing the loop. (A very similar power supply circuit is described by Delta.[5])
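To see why dividing the output "in half" yields 5V regulation, note that the 431 reference compares its sense pin against 2.5V. A small sketch of the divider arithmetic (the resistor values are illustrative placeholders, not read off this board):

    # Feedback divider arithmetic for a 431-based voltage regulator.
    V_REF = 2.5         # volts: the 431 regulates its sense input to 2.5 V
    R_TOP = 10_000      # ohms, output to sense pin (illustrative value)
    R_BOTTOM = 10_000   # ohms, sense pin to ground (illustrative value)

    # The loop settles where the divided output equals the reference:
    v_out = V_REF * (R_TOP + R_BOTTOM) / R_BOTTOM
    print(v_out)  # 5.0 -> equal resistors divide 5 V down to exactly 2.5 V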

Isolation and safety

For safety reasons, AC power supplies must maintain strict isolation between the AC input and the output. The circuit is divided into a primary side - connected to AC, and a secondary side - connected to the output. There can be no direct electrical connection between the two sides, or else someone touching the output could get a shock. Any connection between the two sides must go through a transformer or optoisolator. In this power supply, the transformer provides isolation of the main power, and the optoisolator provides isolation of the feedback of the secondary voltage.
If you look at the picture, you can see the isolation boundary indicated as a white line crossing the circuit board roughly horizontally, with the primary side on top and the secondary side below. (This line is printed on the board; I didn't add it to the picture.) The circles on the line that look like holes are, in fact, holes. These provide additional isolation between the two sides.

The UL has complex safety specifications on how much distance (known as "creepage" and "clearance") there must be between the primary and secondary sides to prevent a shock hazard.[6] The rules are complicated and I'm no expert, but I think at least 3 or 4 mm is required. On this power supply, the average distance is about 1 millimeter. The clearance distance below R8 on the right is somewhat less than one millimeter (notice that the white line crosses the PCB trace to the left of R8).
I wondered how this power supply could have met the UL standards with clearance less than 1 mm. Looking at the charger case more closely, I noticed that it didn't list any safety certifications, or even a manufacturer. I suddenly realized that purchasing the cheapest possible charger on eBay from an unknown manufacturer in China could actually be a safety hazard. Note that this sub-millimeter gap is all that's protecting you and your phone from potentially-lethal 340 volts. I also took the transformer apart and found only single layers of insulating tape between the windings, rather than the double layers required by the UL. After looking inside this charger, my recommendation is to spend a bit more on a charger, and get one that has UL approval and a name-brand manufacturer.
Another issue with super-cheap chargers is they produce poor-quality electrical output with a lot of noise that can interfere with the operation of your phone. Low-cost ringing choke adapters are known to cause touchscreen malfunctions because the screen picks up the electrical interference.[7] I noticed several cost-saving design decisions that will increase interference. The charger uses a single diode to rectify the input, rather than a four-diode bridge, which will produce more interference. The input and output filtering are minimal compared to other designs.[8][9] There's also no fuse on the AC input, which is a bit worrying.

USB charging protocols

You might think USB chargers are interchangeable and plugging a USB device into a charger is straightforward, but it turns out that it's a mess of multiple USB charging standards,[10][11][12] devices that break the rules,[13] and proprietary protocols used by Sony and Apple.[14][15][16] The underlying problem is that a standard USB port can provide up to 500mA, so how do chargers provide 1A or more for faster charging? To oversimplify, a charger indicates that it's a charger by shorting together the two middle USB pins (D+ and D-). Proprietary chargers instead connect different resistances to the D+ and D- pins to indicate how much current they can provide. Note that there are a few unused resistor spots (R2, R3, R8, R10) connected to the USB port on the circuit above; the manufacturer can add the appropriate resistors to emulate other types of chargers.
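As a rough illustration of the detection logic just described, here is a hedged Python sketch; the thresholds are simplified placeholders, and real devices implement the USB Battery Charging spec or vendor-specific tables (see notes [10]-[16]):

    def classify_usb_port(d_plus_volts: float, d_minus_volts: float) -> str:
        """Crude guess at the port type from the D+/D- line voltages.

        Real detection is more involved (a device drives D+ and watches D-);
        this just mirrors the simplified description in the text above.
        """
        if d_plus_volts > 0.5 and abs(d_plus_volts - d_minus_volts) < 0.1:
            # D+ and D- shorted together: a dedicated charger, so the
            # device may draw more than the default 500mA.
            return "dedicated charger (D+/D- shorted)"
        if d_plus_volts > 0.5 or d_minus_volts > 0.5:
            # Distinct divider voltages on D+/D-: a proprietary charger
            # advertising its current capability (Apple/Sony style).
            return "proprietary charger (resistor-coded current limit)"
        return "standard data port: stay at or below 500mA until enumerated"

    print(classify_usb_port(2.0, 2.0))  # reads as a dedicated charger
    print(classify_usb_port(2.7, 2.0))  # reads as a proprietary charger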

Advances in AC power adapters

Early power adapters were just an AC transformer producing low-voltage AC, sometimes with diodes added to produce DC. In the mid 1990s, switching power supplies became more popular, because they are more compact and more efficient.[17] However, the growing popularity of AC adapters, along with their tendency to waste a few watts when left plugged in, ended up costing the United States billions of dollars in wasted electricity every year.[3] New Energy Star standards[18] encouraged "green" designs that use milliwatts rather than watts of power when idle. These efficient controllers can stop switching when unloaded, with intermittent bursts to get just enough power to keep running.[19] One power supply design actually achieves zero standby power usage, by running off a "supercapacitor" while idle.[20]
The semiconductor industry continues to improve switching power supplies through advances in controller ICs and switching transistors. For simple power supplies, some manufacturers combine the controller IC and the switching transistor into a single component with only 4 or 5 pins. Another technology for charger control is CC/CV, which provides constant current until the battery is charged and then constant voltage to keep it charged. To minimize electromagnetic interference (EMI), some controllers continuously vary the switching frequency to spread out the interference across a "spread spectrum".[21] Controllers can also include safety features such as overload protection, under-voltage lockout, and thermal shutdown to protect against overheating.
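The CC/CV scheme is simple to express as control logic. Here is a minimal sketch; the 4.2V/1A/50mA setpoints are typical lithium-cell values assumed for illustration, not taken from any specific controller:

    # Minimal CC/CV charge-control decision, one step of a control loop.
    CV_SETPOINT = 4.2    # volts: hand over from constant current to constant voltage
    CC_LIMIT = 1.0       # amps: regulated current during the CC phase
    TERMINATION = 0.05   # amps: stop once the CV-phase current tapers this low

    def charge_action(battery_volts: float, battery_amps: float) -> str:
        """Return the charger action for the present battery state."""
        if battery_volts < CV_SETPOINT:
            return f"CC phase: regulate current at {CC_LIMIT} A"
        if battery_amps > TERMINATION:
            return f"CV phase: hold {CV_SETPOINT} V and let the current taper"
        return "charge complete: terminate"

    print(charge_action(3.7, 1.0))   # mid-charge -> constant current
    print(charge_action(4.2, 0.3))   # topping off -> constant voltage
    print(charge_action(4.2, 0.03))  # current has tapered -> terminate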

Conclusions

Stay away from super-cheap AC adapters built by mystery manufacturers. Spend the extra few dollars to get a brand-name AC adapter. It will be safer, produce less interference, and your device's touchscreen will perform better.

Inside a one-inch-cube cellphone charger

Notes and references

[1] Switching power supplies often take a "universal" input of 110V to 240V at 50/60 Hz, which allows the same supply to conveniently work on worldwide voltages. Because a switching power supply chops up the input into variable slices, the output voltage can be independent of the input voltage over a wide range. (This also makes switching power supplies more resistant to power brownouts.) Of course, designing the circuit to handle a wide voltage range is harder, especially for power supplies that must be very efficient across a wide range of voltages. To simplify the design of early PC power supplies, they often used a switch to select 120V or 240V input. Through a very clever doubler circuit, this switch converted the input bridge into a voltage doubler for 120V input, so the rest of the circuit could be designed for a single voltage. Modern power supplies, however, are usually designed to handle the whole voltage range which both avoids the expense of an extra switch, and ensures that users don't put the switch in the wrong position and destroy something.
[2] A comic-style explanation of flyback converters and ringing choke converters is at TDK Power Electronics World.
[3] The cost of idle AC adapters is given as $3.5 billion to $5.4 billion for 45 TWh of wasted electricity in the US. The article discusses solutions, and mentions that an efficient controller IC costs 75 cents. (Note that this is a huge cost for an adapter that sells for $2.79.) Dry up avoidable leakage, EDN, Feb 1999, p96-99.
[4] The DC voltage is approximately sqrt(2) times the AC voltage, since the diode charges the capacitor to the peak of the AC signal. Thus, a 240V AC input will result in approximately 340V DC inside the power supply. Because of this use of the AC peak, only a small portion of the AC input is used, resulting in inefficiency, known as a bad power factor. For larger power supplies, power factor correction (PFC) is used to improve the power factor.
[5] The schematic of a ringing choke converter similar to the one I examined is in Analysis and Design of Self-Oscillating Flyback Converter, Delta Products Corporation.
[6] Safety Considerations in Power Supply Design, Texas Instruments, provides a detailed discussion of safety requirements for power supplies. Also see Calculating Creepage and Clearance Early Avoids Design Problems Later, Compliance Engineering. An online calculator for the UL 60950-1 clearance and creepage requirements is www.creepage.com.
[7] Cypress Semiconductor compared flyback converters and ringing choke converters; ringing choke converters are significantly cheaper but very noisy electrically. Poor touchscreen performance is blamed on noisy aftermarket low-cost chargers. Noise Wars: Projected Capacitance Strikes Back, Cypress Semiconductor, Sept 2011.
[8] Power Integrations has multiple designs and schematics for Cell Phone Charger and Adapter Applications.
[9] Power Integrations has a detailed design for a 5W cube charger based on the LinkSwitch-II controller. This circuit fits two circuit boards into the inch cube, which is pretty impressive. 5 W Cube Charger Using LinkSwitch-II and PR14 Core.
[10] The official USB charging specification is the Battery Charging v1.2 Spec.
[11] The updated USB standards that allow high-current charging are described in USB battery-charger designs meet new industry standards, EDN, Feb 2008. In summary, a charger shorts D+ and D- to indicate that it can provide 1A, compared to a regular USB port that provides up to 500mA.
[12] An up-to-date discussion of USB charging is given in The Basics of USB Battery Charging: a Survival Guide, Maxim Application Note 4803, Dec 2010. This discusses the USB Battery Charging Specification, and how USB detects different power sources: SDP (standard computer USB ports), CDP (high-current computer USB ports with up to 1.5A), and DCP (power adapters).
[13] A guide to USB power that discusses the difference between what the USB standard says and what is actually done is "What your mom didn't tell you about USB" in Charging Batteries Using USB Power, Maxim Application Note 3241, June 2004. In particular, USB ports do not limit current to 500mA, and might provide up to 2A. Also, USB ports generally provide power even without any enumeration.
[14] Ladyada reverse-engineered Apple chargers to determine how the voltages on the USB D+ and D- pins control the charging current. Minty Boost: The mysteries of Apple device charging. Also of note is the picture of the internals of an official Apple iPhone 3GS charger, which is somewhat more complex than the charger I disassembled, using two circuit boards.
[15] Maxim MAX14578E/MAX14578AE USB Battery Charger Detectors. This datasheet has details on the proprietary D+/D- protocols used by Apple and Sony chargers, as well as standard USB protocols.
[16] Developing cost-effective USB-based battery chargers for automotive applications, EE Times, Feb 2011. This article describes the different types of USB charging ports and how to implement them. It mentions that BlackBerry uses the USB Battery Charging 1.0 spec, Motorola uses the 1.1 spec, phones in China use the YD/T-1591 spec, and Apple uses a proprietary protocol.
[17] Power supply technologies, Journal of Electronic Engineering, 1995, p41, reported that AC adapters and chargers for portable computers, cameras, and video equipment were moving from "dropper" transformers to switching supplies.
[18] Energy Star added star ratings in 2010 for no-load power consumption, ranging from 0 stars for chargers that use more than 0.5W idle power, to 5 stars for chargers that use under 30mW. The article also discusses constant-current/constant-voltage (CC/CV) chargers that provide constant current while charging the battery and then constant voltage to keep the battery charged. Meeting 30 mW standby in mobile phone chargers.
[19] A green power AC adapter design driven by power requirements, EDN Power Technology, Aug 2004, p25-26. This article describes how to build a highly efficient AC adapter using "burst mode" during low load, and minimizing EMI through spread-spectrum techniques.
[20] Watt Saver for a Cell Phone AC Adaptor describes an AC adapter reference design that uses a 1 farad supercapacitor to power the controller without any AC usage when there is no load.
[21] The Fairchild FAN103 PWM controller is designed for charger applications. It uses frequency hopping to spread out the EMI spectrum; the switching frequency varies between 46kHz and 54kHz. When there's no load, the controller switches into "Deep Green" mode, dropping the switching frequency to 370Hz, getting just enough power to keep running.

Google Glass should not be worn for long periods of time

Google has warned users of its Glass headset that the device should not be worn for long periods of time. The company says the device can tire the eyes, with harm to users' health.

I Don't Believe in God, But I Pray

 

Being free of God doesn't mean I have to be free of hope.

Credit: Shutterstock

A few coins lie at the patinated feet of a statue of St. Jude outside a Roman Catholic church in Brooklyn. The patron saint of desperate cases and hopeless causes stands silent as those wrestling with life-dominating addictions enter one of the many Alcoholics Anonymous meetings hosted in the church's basement.

Even though society at large has grown increasingly secular since the 1960s, inside this enclave many will encourage each other to fortify their sobriety by hitting their knees in prayer to God.

Yet some members, like "John," 43 and in his fifteenth year of recovery from addiction to drugs and alcohol, are sharing a new slogan:

"I don't believe in god, but I pray."

Alcoholics Anonymous has a long-standing tradition of using prayer as a means of combating addiction. Bill Wilson, the founder of AA, was greatly influenced by the Oxford Group, a fundamentalist Christian movement that emphasized "God-Controlled" living through prayer. Wilson ascribed his relief from late-stage chronic alcoholism to the power of a divine being he maintained communion with through prayer over the 37 years of his sobriety.

Yet since its early days, AA has had a minority voice advocating a secular understanding of sobriety. Jim Burwell, a self-described "militant agnostic" and one of the first ten members of Alcoholics Anonymous, was instrumental in the formulation of the society's third tradition, a guiding principle which states, "the only requirement for membership is a desire to stop drinking" - not belief in God or prayer.

While some contemporary freethinkers, atheists, and agnostics who don't believe in prayer are choosing to migrate to any of the nearly 150 secular 12-step groups in the United States, some are choosing to forge a middle path within mainstream AA. This middle path embraces a non-theistic understanding of recovery, but also advocates the traditional practice of prayer as a tool in recovery. Is this a case of bad faith, or are John and those like him early adopters of an emerging understanding of prayer, one that is borne out by their own experiences?

In his hooded sweatshirt and black skinny jeans, John looks young for a man in his early middle age. A bohemian from a time when an artist could afford an entire floor of a Brooklyn loft and still have cash to spend on coke- and alcohol-fueled benders, John shrugs his shoulders with indifference when asked if his is a case of bad faith.

"I don't care if they think I'm doing it right... After ten years I realized I was doing just fine without belief in God."

Yet it wasn't always like this for John. In his early days of sobriety he was uncomfortable with the Christian-inspired language of the Big Book, AA's foundational text. John had no problem with other points of emphasis within AA, such as community, accountability and restitution for past harms. Yet the word "god" made him enormously uncomfortable.

"I was fearful that I would have to believe in something I didn't buy into in order to get sober."

Full of doubt, desperate to be free from his addiction to alcohol and drugs, and believing AA was his only option, John threw himself into prayer despite his lack of belief in a divine higher power. Fifteen years of sobriety later, John still doesn't believe in god, but he still prays. Yet asked what he is praying to, John says, "I don't have a sense that there is anything listening."

For John, prayer isn't about communion with the divine. Prayer is a vocalization of his hopes and fears, a means of releasing pent up emotions and affirming a mindset of inner peace and well-being. Although he doesn't believe in the god addressed in traditional AA supplications like the Serenity Prayer, he feels they serve a purpose in helping him stay sober.

"I feel centered by the action of prayer... They have reminders in them that put me in a mindset that typically isn't my default." 

John isn't the only one advocating the benefits of secular prayer. For "Paul," 28, an agnostic who has been sober for nearly two years, prayer is about cultivating solitude. After hitting bottom with his drinking, Paul was attracted to the pluralistic tradition of Jim Burwell within AA.

"The selling point in AA was that I didn't have to believe in God and that I could pray however I wanted to."

The son of a Presbyterian minister, Paul grew up with prayer as a large part of his life. Yet the boisterous prayers his father would offer before dinner and at the pulpit left him feeling like prayer was about pontification. Because of this, "I had a really tough time with prayer for a long time."

Yet wanting the support community that AA had to offer, he was willing to toe the line in his early sobriety, going so far as to pray on his knees next to his bed per the suggestion of his more traditional sponsor.

Yet having wrestled with his own lack of belief over the course of his sobriety, Paul came to the hard-won conclusion: "I don't have a beef with the word god, but I don't feel comfortable with it when it refers to a personal relationship with a divine being."

From this fidelity to self, Paul's understanding of prayer grew. Now Paul prays while walking, while riding the subway and even while in the bathroom - essentially anywhere stress that may make him vulnerable to old addictive behaviors pops up. Paul uses prayer to focus on the rhythm of his breath and clear his mind. As a result of this meditative style of prayer, Paul feels he has a significant tool at his disposal in learning to lead a sober life.

In November, the first-ever convention for agnostics and atheists in recovery will be held in Santa Monica, California. This may be evidence that the prevailing mode of thought within AA is shifting, but will credence be given to the experiences of those who identify as secular but continue to make use of prayer in recovery? Regardless of the direction AA takes in the future, John and Paul feel that in the present they are embodying, one day at a time, the motto imprinted on the anniversary medallions marking their years sober: "To thine own self be true."

Nathan Frank, art editor for the website greenpointers.com and film critic for the Bushwick Film Festival, is a New Englander writing in Brooklyn. He last wrote about picking up the pieces.

Advanced light: Sending entangled beams through fast-light materials

 

Whole beams, not just particles, can be entangled. This, plus anomalous dispersion in 'fast-light' materials, allows signals to be 'advanced' relative to signals traveling in vacuum, at least in a limited sense.

This image depicts the experimental setup for studying fast light. Pump beams (purple) create correlated probe (turquoise) and conjugate (gold) beams. Each of these beams is aimed at a beam splitter (yellow disks). A local oscillator (LO) also sends a laser beam into each of the beam splitters. The resulting interference pattern -- registered in a spectrum analyzer, SA -- for the probe and conjugate arms are compared.

Credit: NIST

Paul Lett and his colleagues at the Joint Quantum Institute specialize in producing modulated beams of light for encoding information. They haven't found a way to move data faster than c, the speed of light in a vacuum, but in a new experiment they have looked at how light traveling through so-called "fast-light" materials does seem to advance faster than c, at least in one limited sense. They report their results (online as of 25 May 2014) in the journal Nature Photonics.

Seeing how light can be manipulated in this way requires a look at several key concepts, such as entanglement, mutual information, and anomalous dispersion. At the end we'll arrive at a forefront result.

Continuous Variable Entanglement

Much research at JQI is devoted to the processing of quantum information, information coded in the form of qubits. Qubits, in turn, are tiny quantum systems -- sometimes electrons trapped in a semiconductor, sometimes atoms or ions held in a trap -- maintained in a superposition of states. The utility of qubits increases when two or more of them can be yoked into a larger quantum arrangement, a process called entanglement. Two entangled photons are not really sovereign particles but parts of a single quantum entity.

The basis of entanglement is often a discrete variable, such as electron spin (whose value can be up or down) or photon polarization (say, horizontal or vertical). The essence of entanglement is this: while the polarization of each photon is indeterminate until a measurement is made, once you measure the polarization of one of the pair of entangled photons, you automatically know the other photon's polarization too.

But the mode of entanglement can also be vested in a continuous variable. In Lett's lab, for instance, two whole light beams can be entangled. Here the operative variable is not polarization but phase (how far along in the cycle of the wave you are) or intensity (how many photons are in the beam). For a light beam, phase and intensity are not discrete (up or down) but continuous in variability.

Quantum Mutual Information

Biologists examining the un-seamed strands of DNA can (courtesy of the correlated nature of nucleic acid constituents) deduce the sequence of bases along one strand by examining the sequence of the other strand. So it is with entangled beams. A slight fluctuation of the instantaneous intensity of one beam (such fluctuations are inevitable because of the Heisenberg uncertainty principle) will be matched by a comparable fluctuation in the other beam.

Lett and his colleagues make entangled beams in a process called four-wave mixing. A laser beam (pump beam) enters a vapor-filled cell. Here two photons from the pump beam are converted into two daughter photons proceeding onwards with different energies and directions. These photons constitute beams in their own right, one called the probe beam, the other called the conjugate beam. Both of these beams are too weak to measure directly. Instead each beam enters a beam splitter (yellow disk in the drawing above) where its light can be combined with light from a local oscillator (which also serves as a phase reference). The ensuing interference patterns provide aggregate phase or intensity information for the two beams.

When the beam entanglement is perfect, the mutual correlation is 1. When studying the intensity fluctuations of one beam tells you nothing about those of the other beam, then the mutual correlation is 0.
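As a toy illustration of that correlation scale (a purely classical simulation for intuition, not a model of the quantum optics), strongly correlated fluctuations score near 1 and independent ones near 0:

    import numpy as np

    rng = np.random.default_rng(0)
    shared = rng.normal(size=100_000)                     # common fluctuation
    probe = shared + 0.1 * rng.normal(size=100_000)       # beam 1: shared + noise
    conjugate = shared + 0.1 * rng.normal(size=100_000)   # beam 2: shared + noise
    independent = rng.normal(size=100_000)                # an unrelated beam

    print(np.corrcoef(probe, conjugate)[0, 1])    # close to 1: correlated
    print(np.corrcoef(probe, independent)[0, 1])  # close to 0: uncorrelated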

Fast-Light Material

In a famous experiment, Isaac Newton showed how incoming sunlight split apart into a spectrum of colors when it passed through a prism. The degree to which a material bends light of each wavelength, and thereby causes this splitting of colors, is set by its index of refraction.

In most materials the index is larger than 1. For plain window glass, it is about 1.4; for water it is 1.33 for visible light, and gradually increases as the frequency of the light goes up. At much higher frequency (equivalent to shorter wavelength), though, the index can change its value abruptly and go down. For glass, that occurs at ultraviolet wavelengths so you don't ordinarily see this "anomalous dispersion" effect. In a warm vapor of rubidium atoms, however, (and especially when modified with laser light) the effect can occur at infrared wavelengths, and here is where the JQI experiment looks.
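The link between dispersion and signal speed can be made explicit. A pulse envelope travels at the group velocity, which standard optics writes as:

$$v_g = \frac{c}{n(\omega) + \omega \dfrac{dn}{d\omega}}$$

With normal dispersion, $dn/d\omega > 0$ and $v_g < c$. In a region of anomalous dispersion, $dn/d\omega < 0$, the denominator can dip below 1, and the peak of a smooth pulse can emerge "advanced" relative to vacuum propagation, which is the regime the JQI experiment exploits.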

In the figure above notice that the conjugate beam is sent through a second cell, filled with rubidium vapor. Here the beam is subject to dispersion. The JQI experiment aims to study how the entanglement of this conjugate beam with the probe beam (subject to no dispersion) holds up.

When the refraction is "normal" -- that is, when the index of refraction causes ordinary dispersion -- the light signal is slowed in comparison with a beam that doesn't undergo dispersion. Under these conditions the cell is referred to as a "slow-light" material. When the frequency is just right, however, the conjugate beam undergoes anomalous dispersion: when the different frequency components that constitute a pulse or intensity fluctuation reassemble as they emerge from the cell, they are slightly ahead of a pulse that hadn't gone through the cell. (To make a proper measurement of this advance one needs two entangled beams -- beams whose fluctuations are related.)
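
The slow-versus-fast distinction comes down to the group index n_g = n + f (dn/df): normal dispersion makes n_g greater than 1 and delays a pulse, while anomalous dispersion can push n_g below 1 so the peak emerges early. A sketch with invented numbers (the cell length and group indices are placeholders, not the JQI values):

```python
# Pulse timing through a dispersive cell relative to the same distance in
# vacuum. Transit time is L / v_g with v_g = c / n_g; n_g > 1 delays the
# pulse peak, n_g < 1 advances it.

c = 2.998e8          # speed of light, m/s
cell_length = 0.02   # hypothetical 2 cm vapor cell

def peak_shift_ps(n_group):
    """Arrival-time shift of the pulse peak versus vacuum, in picoseconds."""
    return (cell_length / (c / n_group) - cell_length / c) * 1e12

print(f"slow light (n_g = 1.5): {peak_shift_ps(1.5):+.1f} ps")  # delayed
print(f"fast light (n_g = 0.5): {peak_shift_ps(0.5):+.1f} ps")  # advanced
```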

Causality

No, the JQI researchers are not saying that any information is traveling faster than c. The measurements show that the peak in the mutual information for the fast-light material is indeed ahead of the comparable peaks for an unscattered beam or for a beam emerging from a slow-light material. But the cost of achieving anomalous dispersion at all is that additional gain (amplification) is needed, and this amplification imposes noise onto the signal.

This inherent limitation on extracting useful information from an incoming light beam is even more pronounced for beams containing (on average) one photon or fewer. Such dilute beams are desirable in many quantum experiments, where measurement control or the storage or delay of quantum information is important.

"We did these experiments not to try to violate causality, said Paul Lett, "but because we wanted to see the fundamental way that quantum noise "enforces" causality, and working near the limits of quantum noise also lets us examine the somewhat surprising differences between slow and fast light materials when it comes to the transport of information."

A new way to make sheets of graphene

 


Illustrated here is a new process for making graphene directly on a nonmetal substrate. First, a nickel layer is applied to the substrate material, in this case silicon dioxide (SiO2). Then carbon is deposited on the surface, where it forms layers of graphene above and beneath the nickel. The top layer of graphene, attached to the nickel, easily peels away using tape (or, for industrial processes, a layer of adhesive material), leaving behind just the lower layer of graphene stuck to the substrate.

Graphene's promise as a material for new kinds of electronic devices, among other uses, has led researchers around the world to study the material in search of new applications. But one of the biggest limitations to wider use of the strong, lightweight, highly conductive material has been the hurdle of fabrication on an industrial scale.

Initial work with the carbon material, which forms an atomic-scale mesh and is just a single atom thick, has relied on the use of tiny flakes, typically obtained by quickly removing a piece of sticky tape from a block of graphite -- a low-tech system that does not lend itself to manufacturing. Since then, focus has shifted to making graphene films on metal foil, but researchers have faced difficulties in transferring the graphene from the foil to useful substrates.

Now researchers at MIT and the University of Michigan have come up with a way of producing graphene, in a process that lends itself to scaling up, by making graphene directly on materials such as large sheets of glass. The process is described, in a paper published this week in the journal Scientific Reports, by a team of nine researchers led by A. John Hart of MIT. Lead authors of the paper are Dan McNerny, a former Michigan postdoc, and Viswanath Balakrishnan, a former MIT postdoc who is now at the Indian Institute of Technology.

Currently, most methods of making graphene first grow the material on a film of metal, such as nickel or copper, says Hart, the Mitsui Career Development Associate Professor of Mechanical Engineering. "To make it useful, you have to get it off the metal and onto a substrate, such as a silicon wafer or a polymer sheet, or something larger like a sheet of glass," he says. "But the process of transferring it has become much more frustrating than the process of growing the graphene itself, and can damage and contaminate the graphene."

The new work, Hart says, still uses a metal film as the template -- but instead of making graphene only on top of the metal film, it makes graphene on both the film's top and bottom. The substrate in this case is silicon dioxide, a form of glass, with a film of nickel on top of it.

Using chemical vapor deposition (CVD) to deposit a graphene layer on top of the nickel film, Hart says, yields "not only graphene on top [of the nickel layer], but also on the bottom." The nickel film can then be peeled away, leaving just the graphene on top of the nonmetallic substrate.

This way, there's no need for a separate process to attach the graphene to the intended substrate -- whether it's a large plate of glass for a display screen, or a thin, flexible material that could be used as the basis for a lightweight, portable solar cell, for example. "You do the CVD on the substrate, and, using our method, the graphene stays behind on the substrate," Hart says.

In addition to the researchers at Michigan, where Hart previously taught, the work was done in collaboration with a large glass manufacturer, Guardian Industries. "To meet their manufacturing needs, it must be very scalable," Hart says. The company currently uses a float process, where glass moves along at a speed of several meters per minute in facilities that produce hundreds of tons of glass every day. "We were inspired by the need to develop a scalable manufacturing process that could produce graphene directly on a glass substrate," Hart says.

The work is still in an early stage; Hart cautions that "we still need to improve the uniformity and the quality of the graphene to make it useful." But the potential is great, he suggests: "The ability to produce graphene directly on nonmetal substrates could be used for large-format displays and touch screens, and for 'smart' windows that have integrated devices like heaters and sensors."

Hart adds that the approach could also be used for small-scale applications, such as integrated circuits on silicon wafers, if graphene can be synthesized at lower temperatures than were used in the present study.

"This new process is based on an understanding of graphene growth in concert with the mechanics of the nickel film," he says. "We've shown this mechanism can work. Now it's a matter of improving the attributes needed to produce a high-performance graphene coating."

Christos Dimitrakopoulos, a professor of chemical engineering at the University of Massachusetts at Amherst who was not involved in this work, says, "This is a very significant piece of work for very large-area applications of graphene on insulating substrates." Compared to other methods, such as the use of a silicon carbide (SiC) substrate to grow graphene, he says, "The fact that the lateral size of graphene in the Hart group's approach is limited only by the size of the [CVD] reactor, instead of the size of the SiC wafer, is a major advantage."

"This is a high-quality and carefully executed work," Dimitrakopoulos adds.

The work was supported by Guardian Industries, the National Science Foundation, and the Air Force Office of Scientific Research.


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by David L. Chandler. Note: Materials may be edited for content and length.


Journal Reference:

  1. Daniel Q. McNerny, B. Viswanath, Davor Copic, Fabrice R. Laye, Christophor Prohoda, Anna C. Brieland-Shoultz, Erik S. Polsen, Nicholas T. Dee, Vijayen S. Veerasamy, A. John Hart. Direct fabrication of graphene on SiO2 enabled by thin film stress engineering. Scientific Reports, 2014; 4. DOI: 10.1038/srep05049

Social marketing at the movies

 

May 23, 2014

Inderscience Publishers

Word-of-mouth marketing is recognized as a powerful route from long-tail sales to blockbuster, whether one is talking about the latest fishy ice cream flavor or a Hollywood romantic comedy. In the age of social media and online networking sites, such as Twitter and Facebook, the potential for spreading the word could mean the difference between consumers seeing a product as the best thing since sliced bread or the most rotten of tomatoes.

Chong Oh, Assistant Professor of Computer Information Systems at Eastern Michigan University in Ypsilanti, Michigan, USA, has analyzed social media measures from the well-known microblogging service Twitter alongside movie box-office data from boxofficemojo.com. He found that activity on Twitter -- a surrogate for, or online equivalent of, actual word-of-mouth chatter -- has a direct positive effect on how many people go to see a particular movie, which is not surprising given the service's quarter of a billion global users. He also demonstrated, on the basis of this analysis, that studio-generated content and online engagement with the putative audience have an indirect effect. His research is published in the International Journal of Information Systems and Change Management.

Fundamentally, Oh's research shows that "the more a movie studio is willing to engage with its followers via social media the more likely it is to have a higher WOM volume. This subsequently increases the likelihood of having a higher opening-weekend box office performance."
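
The claim is a simple mediation chain -- studio engagement drives word-of-mouth volume, which in turn drives the opening weekend -- and the kind of correlation check behind it can be sketched in a few lines (all figures below are invented for illustration, not Oh's data):

```python
import numpy as np

# Hypothetical per-movie data: studio tweets in the week before release,
# word-of-mouth (WOM) tweet volume, and opening-weekend gross in dollars.
studio_tweets = np.array([12, 85, 40, 150, 5, 60])
wom_volume = np.array([1_100, 9_400, 3_800, 15_200, 600, 7_000])
opening_gross = np.array([2.1e6, 31e6, 9e6, 52e6, 0.8e6, 24e6])

# The two links in the chain, as plain correlation coefficients.
r_engagement_wom = np.corrcoef(studio_tweets, wom_volume)[0, 1]
r_wom_box_office = np.corrcoef(wom_volume, opening_gross)[0, 1]

print(f"studio engagement vs WOM volume: r = {r_engagement_wom:.2f}")
print(f"WOM volume vs opening gross:     r = {r_wom_box_office:.2f}")
```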

Oh cites two very different outcomes for two well-known movies. The first, John Carter, a science fiction film released in 2012, lost the studio $200 million and led to the resignation of its president. By contrast, Paranormal Activity, a low-budget movie from 2009 shot in a week on a $15,000 budget, grossed $107 million at the box office. These, of course, are stark outliers; most movies lie somewhere between the two extremes. For the marketing department, ensuring that the next movie is a Paranormal rather than a Carter is now, according to Oh, partly down to online word-of-mouth.

Simply having a presence (or profile) on social media is not sufficient. "The key activity of sending outgoing tweets in the seven days leading up to the release weekend was a good indicator that correlated to word-of-mouth volume buzz about the movie," Oh reports. He has some advice for movie marketers based on these findings. "Social media represent an opportunity to reach an audience and establish relationships at a personal level that traditional advertising is not capable of achieving," he explains. "Incentives to encourage more interactions, such as competition or tweets from the movie's cast members, should go hand-in-hand with other advertisements to pump up word-of-mouth." He also suggests the same approach to social marketing might have a similar impact in other areas, such as music sales.


Story Source:

The above story is based on materials provided by Inderscience Publishers. Note: Materials may be edited for content and length.


Journal Reference:

  1. Chong Oh. Customer engagement, word-of-mouth and box office: the case of movie tweets. International Journal of Information Systems and Change Management, 2013; 6 (4): 338 DOI: 10.1504/IJISCM.2013.060976

Flatland optics with graphene: Smaller and faster photonic devices and circuits

 

May 23, 2014

Basque Research




Researchers from CIC nanoGUNE, in collaboration with ICFO and Graphenea, introduce a platform technology based on optical antennas for trapping and controlling light with the one-atom-thick material graphene. The experiments show that the dramatically squeezed graphene-guided light can be focused and bent, following the fundamental principles of conventional optics. The work, published yesterday in Science, opens new opportunities for smaller and faster photonic devices and circuits.

Optical circuits and devices could make signal processing and computing much faster. "However, although light is very fast, it needs too much space," explains Rainer Hillenbrand, Ikerbasque Professor at nanoGUNE and UPV/EHU. In fact, propagating light needs at least the space of half its wavelength, which is much larger than state-of-the-art electronic building blocks in our computers. For that reason, researchers are searching for ways to squeeze light into nanoscale materials.

The wonder material graphene, a single layer of carbon atoms with extraordinary properties, has been proposed as one solution. The wavelength of light captured by a graphene layer can be strongly shortened, by a factor of 10 to 100, compared with light propagating in free space. As a consequence, this light propagating along the graphene layer -- called a graphene plasmon -- requires much less space.
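
The space saving follows directly from the compression factor: the guided wavelength is the free-space wavelength divided by that factor. A quick sketch (the 10-micrometer wavelength is a typical mid-infrared value chosen for illustration):

```python
# Graphene-plasmon wavelength from the 10-100x compression quoted above.

free_space_wavelength_um = 10.0  # illustrative mid-infrared wavelength

for factor in (10, 100):
    plasmon_wavelength_nm = free_space_wavelength_um * 1_000 / factor
    print(f"compression x{factor:>3}: guided wavelength ~ {plasmon_wavelength_nm:.0f} nm")
```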

However, transforming light efficiently into graphene plasmons and manipulating them with a compact device has been a major challenge. A team of researchers from nanoGUNE, ICFO and Graphenea -- members of the EU Graphene Flagship -- now demonstrates that the antenna concept of radio wave technology could be a promising solution. The team shows that a nanoscale metal rod on graphene (acting as an antenna for light) can capture infrared light and transform it into graphene plasmons, analogous to a radio antenna converting radio waves into electromagnetic waves in a metal cable.

"We introduce a versatile platform technology based on resonant optical antennas for launching and controlling of propagating graphene plasmons, which represents an essential step for the development of graphene plasmonic circuits," says team leader Rainer Hillenbrand. Pablo Alonso-González, who performed the experiments at nanoGUNE, highlights some of the advantages offered by the antenna device: "the excitation of graphene plasmons is purely optical, the device is compact and the phase and wavefronts of the graphene plasmons can be directly controlled by geometrically tailoring the antennas. This is essential to develop applications based on focusing and guiding of light."

The research team also performed theoretical studies. Alexey Nikitin, Ikerbasque Research Fellow at nanoGUNE, performed the calculations and explains that "according to theory, the operation of our device is very efficient, and all the future technological applications will essentially depend upon fabrication limitations and quality of graphene."

Based on Nikitin's calculations, nanoGUNE's Nanodevices group fabricated gold nanoantennas on graphene provided by Graphenea. The Nanooptics group then used the Neaspec near-field microscope to image how infrared graphene plasmons are launched and propagate along the graphene layer. In the images, the researchers saw that, indeed, waves on graphene propagate away from the antenna, like waves on a water surface when a stone is thrown in.

In order to test whether the two-dimensional propagation of light waves along a one-atom-thick carbon layer follows the laws of conventional optics, the researchers tried to focus and refract the waves. For the focusing experiment, they curved the antenna. The images then showed that the graphene plasmons come to a focus away from the antenna, much as a light beam is concentrated by a lens or concave mirror.

The team also observed that graphene plasmons refract (bend) when they pass through a prism-shaped graphene bilayer, analogous to the bending of a light beam passing through a glass prism. "The big difference is that the graphene prism is only two atoms thick. It is the thinnest refracting optical prism ever," says Rainer Hillenbrand. Intriguingly, the graphene plasmons are bent because the conductivity in the two-atom-thick prism is larger than in the surrounding one-atom-thick layer. In the future, such conductivity changes in graphene could also be generated by simple electronic means, allowing highly efficient electrical control of refraction -- for beam steering, among other applications.
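
The bending can be read as a Snell's-law analogue, with each region assigned an effective index set by its plasmon wavelength; because the bilayer's higher conductivity lengthens the plasmon wavelength there, its effective index is lower. A sketch with invented index values, chosen only to show the bending:

```python
import math

# Snell's-law analogue at the edge of the bilayer prism:
#   n1 * sin(theta1) = n2 * sin(theta2).
# Both effective indices below are hypothetical illustrations.

n_monolayer = 50.0  # effective index on the one-atom-thick side
n_bilayer = 25.0    # lower effective index inside the two-atom-thick prism

theta_in = math.radians(20)  # incidence angle at the prism edge
theta_out = math.asin(n_monolayer * math.sin(theta_in) / n_bilayer)
print(f"refracted angle: {math.degrees(theta_out):.1f} degrees")  # bends away from the normal
```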

Altogether, the experiments show that the fundamental and most important principles of conventional optics also apply to graphene plasmons -- in other words, to squeezed light propagating along a one-atom-thick layer of carbon atoms. Future developments based on these results could lead to extremely miniaturized optical circuits and devices that could be useful for sensing and computing, among other applications.


Story Source:

The above story is based on materials provided by Basque Research. Note: Materials may be edited for content and length.


Journal Reference:

  1. P. Alonso-Gonzalez, A. Y. Nikitin, F. Golmar, A. Centeno, A. Pesquera, S. Velez, J. Chen, G. Navickaite, F. Koppens, A. Zurutuza, F. Casanova, L. E. Hueso, R. Hillenbrand. Controlling graphene plasmons with resonant metal antennas and spatial conductivity patterns. Science, 2014; DOI: 10.1126/science.1253202