Friday, October 10, 2014

For you

 


Tesla unveils AWD 691 hp Model S P85D with autopilot and new driver-assist systems

 

The new Tesla Model S P85D will run an AWD setup and put out a whopping 691 hp (515 kW) and 687 lb. ft. (931 Nm) of torque


Tesla finally took the wraps off its much-rumored Model D at Hawthorne Airport outside of Los Angeles tonight, and the results look impressive to say the least. The new S P85D, as it's badged, will run an AWD setup as suspected, and put out a whopping 691 hp (515 kW) and 687 lb. ft. (931 Nm) of torque. It will also feature new driver-assist systems, including an autopilot solution for self-parking.

The car, identical on the outside to the Model S, now features dual motors, one for each axle. The Model S felt fast enough before, as some readers may recall from my test drive last year. Now the house of Elon has upped the already impressive 416 hp (310 kW) to a supercar-fighting 691 hp. That figure is the same as you'll find in your Lamborghini Aventador owner's manual. But where the Aventador can't compete, and where the all-electric Tesla excels, is in the important category of torque.

Running a dual-motor setup, the rear-axle motor in the top-of-the-line P85D now produces 470 hp (350 kW), while the front motor brings an additional 221 hp (165 kW) to the table. So there's our 691 hp, but what about that torque? The new twin-motor Model S now generates an island-pulling, tree-hauling, yacht-parking 687 lb. ft. of torque. That figure by itself is outrageous, but even more impressive is the fact that all 687 lb. ft. (up from 443 lb. ft./600 Nm) is available to the driver from 0 rpm. Wow. No waiting for pistons to get their pants on or turbos to do their hair, the torque is right there as soon as the launch pedal is depressed.
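As a quick sanity check on those numbers, here is a rough sketch using standard unit conversions (no figures from Tesla beyond the ones quoted above):

```python
# Rough arithmetic check of the quoted P85D figures (illustrative only).
HP_TO_KW = 0.7457      # mechanical horsepower -> kilowatts
LBFT_TO_NM = 1.3558    # pound-feet -> newton-metres

rear_hp, front_hp = 470, 221
total_hp = rear_hp + front_hp             # 691 hp combined
total_kw = total_hp * HP_TO_KW            # ~515 kW, matching the quoted figure

torque_nm = 687 * LBFT_TO_NM              # ~931 Nm, matching the quoted figure
print(total_hp, round(total_kw), round(torque_nm))   # 691 515 931
```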

These figures now give the electric saloon the ability to hit 100 km/h (62 mph) from a standing start in only 3.2 seconds, which is a full 1.2 seconds faster than the Model S P85 and on par with the likes of McLaren, Ferrari and Lamborghini in terms of raw acceleration. For straight-line fans, that equates to a quarter-mile run of 11.8 seconds, down from the already quick 12.6 of the P85.

For the lesser-powered models like the 60D and 85D, acceleration times aren't quite as quick. The 60D can make the 0-60 mph (96 km/h) run in a respectable 5.7 seconds, while the 85D is slightly quicker at 5.2. Both cars develop 376 hp, with 188 hp coming from each of the front and rear motors. Top speed for the 60D is said to be 125 mph (201 km/h), while the 85D and P85D are rated at 155 mph (250 km/h).

Tesla's new Model D is identical in appearance to the Model S (pictured)

But with the new motor up front, weight becomes a consideration. It turns out the new system adds only another 291 lb (132 kg) to the car, bringing the D's total weight to a not-so-dainty 4,936 lb (2,239 kg). Tesla spins it to the positive by pointing out that front-to-rear weight balance is now a perfect 50/50 thanks to the new front motor. The addition of AWD will also help increase the car's already sticky road-holding manners, plus appeal to folks in more northern climes, like this Canadian writer, where snow and ice are a concern eight months of the year.

Mileage for the Model D also doesn't suffer significantly, taking only a 10 mile hit on range – down from 285 mi (458 km) to 275 mi (442 km). For the lower-powered 85D and 60D models, power drops quite a bit to 376 hp (280 kW) and only 362 lb. ft. (491 Nm) of torque, but then again, for these models pricing drops while range rises. Tesla is also claiming that the car can pull off 1 g of acceleration in its new format, which means bring your neck brace and Velcro driving mitts to the test drive.

The Model S P85D cutaway showing the new front-mounted motor

The other component of the Model D's unveiling was the "other thing" Tesla teased when it first announced the car a few weeks back, which turns out to be a new automated parking and driver-assist system.

The new driver-assist system allows the car to read speed signs and adjust its speed accordingly. It also lets the car not only keep its lane, but change lanes by itself once the turn signal is activated, thanks to 12 new sensors mounted at critical points around the car. These systems already exist in some form on similar high-end saloons, but Tesla is set on making its offering the most intuitive and advanced on the market.

While not exactly autonomous, the new Model D will also have the ability to park itself in the garage or even come to the owner thanks to the new sensor system, though there are no details as yet on how far the car will travel or how the system will be activated.

Finally, a new electromechanical braking system has been put in place to help stop the car when required.

Musk noted that every car built in the last two weeks features the new sensor array, while the auto-pilot program will be updated remotely over the next few weeks.

Mileage for the Model D doesn’t suffer significantly, taking only a 10 mile hit on range – down from 285 miles (458 km) to 275 miles (442 km) (Model S pictured)

The cost of the new Model S P85D is US$120,000, which as it turns out isn't that much more than the rear-wheel-drive S. As of tonight, Tesla has already started taking orders on the Model D, with delivery promised toward year end or in the new year.

Source: Tesla

 

What 20 years of research on cannabis use has taught us

 


A new article summarizes what scientists have learned over twenty years of research about marijuana use, and its health and brain consequences.

In the past 20 years recreational cannabis use has grown tremendously, becoming almost as common as tobacco use among adolescents and young adults, and so has the research evidence. A major new review in the scientific journal Addiction sets out the latest information on the effects of cannabis use on mental and physical health.

The key conclusions are:

Adverse Effects of Acute Cannabis Use

  • Cannabis does not produce fatal overdoses.
  • Driving while cannabis-intoxicated doubles the risk of a car crash; this risk increases substantially if users are also alcohol-intoxicated.
  • Cannabis use during pregnancy slightly reduces birth weight of the baby.

Adverse Effects of Chronic Cannabis Use

  • Regular cannabis users can develop a dependence syndrome, the risk of which is around 1 in 10 among all cannabis users and 1 in 6 among those who start in adolescence.
  • Regular cannabis users double their risks of experiencing psychotic symptoms and disorders, especially if they have a personal or family history of psychotic disorders, and if they start using cannabis in their mid-teens.
  • Regular adolescent cannabis users have lower educational attainment than non-using peers but we don't know whether the link is causal.
  • Regular adolescent cannabis users are more likely to use other illicit drugs, but we don't know whether the link is causal.
  • Regular cannabis use that begins in adolescence and continues throughout young adulthood appears to produce intellectual impairment, but the mechanism and reversibility of the impairment is unclear.
  • Regular cannabis use in adolescence approximately doubles the risk of being diagnosed with schizophrenia or reporting psychotic symptoms in adulthood.
  • Regular cannabis smokers have a higher risk of developing chronic bronchitis.
  • Cannabis smoking by middle-aged adults probably increases the risk of myocardial infarction.

Story Source:

The above story is based on materials provided by Wiley. Note: Materials may be edited for content and length.


Journal Reference:

  1. Wayne Hall. What has research over the past two decades revealed about the adverse health effects of recreational cannabis use? Addiction, 2014; DOI: 10.1111/add.12703

 

Investigation into GI scope-related infections changes national guidelines

 


National guidelines for the cleaning of certain gastrointestinal (GI) scopes are likely to be updated due to findings from UPMC's infection prevention team.

The research and updated disinfection technique will be shared Saturday in Philadelphia at ID Week 2014, an annual meeting of health professionals in infectious disease fields.

"Patient safety is our top priority," said senior author Carlene Muto, M.D., M.S., director of infection prevention at UPMC Presbyterian Hospital. "We are confident that the change from disinfection to sterilization of GI scopes is necessary in preventing serious infections and we are glad to share our findings with hospitals nationwide."

After tracking and monitoring an uptick in antibiotic-resistant infections in 2012 in patients who had undergone an Endoscopic Retrograde Cholangiopancreatography (ERCP) procedure with flexible endoscopy scopes, UPMC began investigating the devices, which are equipped with an "elevator channel" used to deflect accessories passed through the biopsy channel and assist clinicians in examining a patient's gastrointestinal tract. The elevator channel is most commonly found on ERCP and endoscopic ultrasound scopes.

UPMC took the scopes out of service, notified the manufacturer and began an investigation into the disinfecting process that takes place between each use. When it was ultimately determined that the normal process failed to eliminate all bacteria, UPMC switched to gas sterilization using ethylene oxide to ensure proper disinfection of the scopes.

"Throughout UPMC, no additional health care-associated infections have been linked to scopes since switching to sterilization," said Dr. Muto.

The move from high-level disinfection of endoscopes to sterilization of them was foreshadowed earlier this year at the Association for Professionals in Infection Control and Epidemiology annual conference in Anaheim, Calif., by Bill Rutala, Ph.D., M.P.H., author of the Centers for Disease Control and Prevention Guideline for Disinfection and Sterilization in Healthcare Facilities. He said he believed the transition would take place in the next five years.

Approximately 11 million gastrointestinal endoscopies are performed annually in the U.S. and contaminated scopes have been linked to more hospital-acquired infections than any other type of medical device.


Story Source:

The above story is based on materials provided by University of Pittsburgh Schools of the Health Sciences. Note: Materials may be edited for content and length.


 

Drinking decaf or regular coffee may be good for the liver, study suggests

 


Coffee (regular or decaf) may be good for your liver. Higher coffee consumption, regardless of caffeine content, was linked to lower levels of abnormal liver enzymes.

Researchers from the National Cancer Institute report that decaffeinated coffee drinking may benefit liver health. Results of the study published in Hepatology, a journal of the American Association for the Study of Liver Diseases, show that higher coffee consumption, regardless of caffeine content, was linked to lower levels of abnormal liver enzymes. This suggests that chemical compounds in coffee other than caffeine may help protect the liver.

Coffee consumption is highly prevalent, with more than half of all Americans over 18 drinking on average three cups each day, according to a 2010 report from the National Coffee Association. Moreover, the International Coffee Association reports that coffee consumption has increased by one percent each year since the 1980s, rising to two percent in recent years. Previous studies found that coffee consumption may help lower the risk of developing diabetes, cardiovascular disease, non-alcoholic fatty liver disease, cirrhosis, and liver cancer.

"Prior research found that drinking coffee may have a possible protective effect on the liver. However, the evidence is not clear if that benefit may extend to decaffeinated coffee," explains lead researcher Dr. Qian Xiao from the National Cancer Institute in Bethesda, Maryland.

For the present study researchers used data from the U.S. National Health and Nutrition Examination Survey (NHANES, 1999-2010). The study population included 27,793 participants, 20 years of age or older, who reported their coffee intake over a 24-hour period. The team measured blood levels of several markers of liver function, including alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT), to determine liver health.

Participants who reported drinking three or more cups of coffee per day had lower levels of ALT, AST, ALP and GGT compared to those not consuming any coffee. Researchers also found lower levels of these liver enzymes in participants drinking only decaffeinated coffee.
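In spirit, that comparison is a grouping exercise over the survey data; a minimal sketch of the idea is below, with hypothetical column names and illustrative values, and none of the covariate adjustment the published analysis applies:

```python
import pandas as pd

# Toy NHANES-style table, one row per participant (illustrative values only).
df = pd.DataFrame({
    "cups_per_day": [0, 1, 3, 4, 0, 2, 5],
    "ALT": [31, 28, 22, 21, 33, 26, 20],
    "AST": [29, 27, 23, 22, 30, 25, 21],
})

# Group participants by coffee intake and compare median enzyme levels.
df["coffee_group"] = pd.cut(df["cups_per_day"], bins=[-1, 0, 2, 100],
                            labels=["none", "1-2 cups", "3+ cups"])
print(df.groupby("coffee_group", observed=True)[["ALT", "AST"]].median())
```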

Dr. Xiao concludes, "Our findings link total and decaffeinated coffee intake to lower liver enzyme levels. These data suggest that ingredients in coffee, other than caffeine, may promote liver health. Further studies are needed to identify these components."


Story Source:

The above story is based on materials provided by Wiley. Note: Materials may be edited for content and length.


Journal Reference:

  1. Qian Xiao, Rashmi Sinha, Barry I. Graubard, Neal D. Freedman. Inverse associations of total and decaffeinated coffee with liver enzyme levels in NHANES 1999-2010. Hepatology, 2014; DOI: 10.1002/hep.27367

 

Dissolvable silicon circuits and sensors

 


A new generation of transient electronic devices functions in water but dissolves when no longer needed.

Electronic devices that dissolve completely in water, leaving behind only harmless end products, are part of a rapidly emerging class of technology pioneered by researchers at the University of Illinois at Urbana-Champaign. Early results demonstrate the entire complement of building blocks for integrated circuits, along with various sensors and actuators with relevance to clinical medicine, including most recently intracranial monitors for patients with traumatic brain injury. The advances suggest a new era of devices that range from green consumer electronics to 'electroceutical' therapies, to biomedical sensor systems that do their work and then disappear.

John A. Rogers' research group at the Department of Materials Science and Engineering and the Frederick Seitz Materials Research Laboratory is leading the development of such concepts, along with all of the required materials, device designs and fabrication techniques for applications that lie beyond the scope of semiconductor technologies available today.

"Our most recent combined developments in devices that address real challenges in clinical medicine and in advanced, high volume manufacturing strategies suggest a promising future for this new class of technology," said Rogers. He will present these and other results at the AVS 61st International Symposium & Exhibition, being held November 9-14, 2014 in Baltimore, Md.

Practical applications might include bioresorbable devices that reduce infection at a surgical site. Other examples are temporary implantable systems, such as electrical brain monitors to aid rehabilitation from traumatic injuries or electrical stimulators to accelerate bone growth. Additional classes of devices can even be used for programmed drug delivery, Rogers said.

Such envisioned uses are all best satisfied by devices that provide robust, reliable, high performance operation, but only for a finite period of time dictated, for example, by the healing process -- they are not only biologically compatible, but they are biologically punctual, performing when and as the body needs them.

After their function has been fulfilled, they disappear through resorption into the body, thereby eliminating unnecessary device load, without the need for additional surgical operations. In terms of consumer electronics, the technology holds promise for reducing the environmental footprint of the next generation of "green" devices.


Story Source:

The above story is based on materials provided by AVS: Science & Technology of Materials, Interfaces, and Processing. Note: Materials may be edited for content and length.


 

New technique yields fast results in drug, biomedical testing

 


A new technique makes it possible to quickly detect the presence of drugs or to monitor certain medical conditions using only a single drop of blood or urine, representing a potential tool for clinicians and law enforcement.

The technique works by extracting minute quantities of target molecules contained in specimens of blood, urine or other biological fluids, and then testing the sample with a mass spectrometer.

Testing carried out with the technology takes minutes, whereas conventional laboratory methods take hours or days to yield results and require a complex sequence of steps, said Zheng Ouyang (pronounced Jung O-Yong), an associate professor in Purdue University's Weldon School of Biomedical Engineering.

"We've converted a series of operations into a single extraction process requiring only a pinprick's worth of blood," he said.

The method, called "slug flow microextraction," could be used to detect steroids in urine for drug screening in professional sports and might be combined with a miniature mass spectrometer also being commercialized. The combined technologies could bring a new class of compact instruments for medicine and research, Ouyang said.

Findings are detailed in a paper that appeared online Oct. 5 in the research journal Angewandte Chemie International Edition. The paper was authored by graduate student Yue Ren, undergraduate student Morgan N. McLuckey, former postdoctoral research associate Jiangjiang Liu and Ouyang.

The researchers demonstrated the technique, using it to perform therapeutic-drug monitoring, which has potential applications in drug development and personalized therapy; to monitor enzyme function, as demonstrated for acetylcholinesterase, which is directly related to the symptoms and therapy for Alzheimer's disease; to detect steroids, yielding results in one minute; and to test for illicit drugs.

"In the future, for example, parents might be able to test their children's urine for drugs with a simple cartridge they would take to the corner drug store, where a desktop mass spectrometer would provide results in a few minutes," Ouyang said.

The technique involves drawing a specimen into a glass capillary that also contains the organic solvent ethyl acetate.

Like oil and water, the two fluids are immiscible, and an interface is formed between the specimen and the solvent. Gently rocking the capillary back and forth several times causes small amounts of target molecules in the biological sample to cross this interface into the solvent side without mixing the two fluids.

"You don't want to mix these two, you want to extract only the biomarkers you are looking for and leave the junk behind because mass spectrometry is very sensitive to impurities," Ouyang said.

Then the solvent containing the biomarkers is subjected to a high voltage, ionizing the sample so that it can be analyzed with mass spectrometry.

Researchers have used microextraction for other applications.

"I think this is the first time it has been applied to a biological sample for mass spectrometry," Ouyang said. "You just use a pinprick of blood, and the analysis is completed in minutes."

When combined with the miniature mass spectrometer also developed at Purdue, the method represents a mobile system for medical professionals, researchers and law enforcement.

Mass spectrometry works by turning molecules into ions, or electrically charged versions of themselves, inside the instrument's vacuum chamber. Once ionized, the molecules can be more easily manipulated, detected and analyzed based on their masses. The new approach uses a method called nanoESI -- or nano electrospray ionization -- in which the ionization step is performed in the air or directly on surfaces and does not require a vacuum chamber.

Although the research was conducted using a conventional laboratory mass spectrometer, the same nanoESI operation could be carried out with the new miniature mass spectrometer. Whereas conventional mass spectrometers are bulky instruments that weigh more than 300 pounds, Purdue researchers have recently completed their latest version of the miniature mass spectrometer, the Mini 12, which weighs 40 pounds, is 12.5 inches wide and 16 inches high.

"The sampling ionization technologies like slug flow microextraction could make the miniature mass spectrometers perform the actual testing without requiring other equipment for sample treatment," Ouyang said. "This will bring a new class of compact medical instruments."

The research has been funded by the National Institutes of Health.

The work to develop the miniature mass spectrometer has been supported by the NIH and National Science Foundation and is led by Ouyang and R. Graham Cooks, the Henry Bohn Hass Distinguished Professor of Chemistry in Purdue's College of Science.

U.S. patent applications have been filed for the microextraction and miniature mass spectrometry. The technologies may be commercialized through a new company formed after partnership agreements were signed in 2013 by Purdue and Tsinghua University.

"The overall goal is to use this technology for developing disposable sample cartridges to work with our mini mass spectrometry system in clinical and especially the point-of-care applications in a doctor's office," Ouyang said.


Story Source:

The above story is based on materials provided by Purdue University. The original article was written by Emil Venere. Note: Materials may be edited for content and length.


Journal Reference:

  1. Yue Ren, Morgan N. McLuckey, Jiangjiang Liu, Zheng Ouyang. Direct Mass Spectrometry Analysis of Biofluid Samples Using Slug-Flow Microextraction Nano-Electrospray Ionization. Angewandte Chemie International Edition, 2014; DOI: 10.1002/anie.201408338

 

12 great free online courses

 

Much ado has been made in recent years over the quickly rising cost of healthcare in the United States. But the cost of college tuition and fees has skyrocketed at nearly twice that rate. Going to college today will cost a student 559% more than it did in 1985, on average.

In an exciting talk given at TEDGlobal 2012, Stanford professor Daphne Koller explains why she was inspired — alongside fellow professor Andrew Ng — to create Coursera, which brings great classes from top universities online for free. Coursera classes have specific start dates, require students to take quizzes and turn in assignments, and allow professors to organize their courses into online chunks rather than simply recording their lectures.

When she spoke at TEDGlobal, Coursera offered classes from four top colleges — Princeton University, the University of Michigan, Stanford University and the University of Pennsylvania — but in July, Coursera announced that it had increased to 16 participating colleges, including five of the schools ranked in the top 10 in the country by U.S. News & World Report. The site now offers 116 classes.

Even outside of Coursera, the number of college classes available on a computer screen rather than in a brick-and-mortar lecture hall is staggering. At TEDxEastside Prep, Scott Young gave the intriguing talk — “Can you get an MIT education for $2,000?” — in which he shared his effort to get an MIT education in computer science by taking the school's Open Courseware free online courses. The result? He has so far taken — as well as passed exams and completed programming assignments for — 20 of the 33 courses in the school's curriculum.

Inspired by Young, below, find 12 courses you could take for a completely free TED degree in Big Ideas.

The Course: Introduction to Artificial Intelligence
The School: Stanford, via YouTube
Taught By: Peter Norvig, Sebastian Thrun
Course Description: Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field.
Notes: When Thrun and Norvig first put this course online in the fall of 2011, 160,000 students from 209 countries enrolled. While the course is closed, you can still watch the lectures on YouTube. And see Norvig discuss what he learned teaching the course in the TEDTalk, “The 100,000 student classroom.”

The Course: The Structure of English Words
The School: Stanford, via iTunes
Taught By: Will Leben
Course Description: Thanks to historical, cultural, and linguistic factors, English has by far the world’s largest vocabulary—leading many of us to have greater than average difficulty with words, and some of us to have greater than average curiosity about words. Our historical and linguistic study will cover both erudite and everyday English, with special attention to word meaning and word use, to both rules and exceptions. Most words originated with an image. “Reveal” = “pull back the veil,” “depend” = “hang down from.” Change is constant. “Girl” once meant “a young child of either sex;” an early synonym for “stupid” was “nice.” Are there good changes and bad ones? And who gets to decide?

The Course: Physics for Future Presidents
The School: University of California Berkeley, via YouTube
Taught By: Richard A. Muller and Bob Jacobsen
Course Description: Contains the essential physics that students need in order to understand today’s core science and technology issues, and to become the next generation of world leaders. From the physics of energy to climate change, and from spy technology to quantum computers, this is a look at the modern physics affecting the decisions of political leaders and CEOs and, consequently, the lives of every citizen. How practical are alternative energy sources? Can satellites really read license plates from space? What is the quantum physics behind iPods and supermarket scanners? And how much should we fear a terrorist nuke?
Note: A complete guide is available to anyone who wants to teach the class at their university.

The Course: Dilemmas in Bio-Medical Ethics: Playing God or Doing Good?
The School: MIT, via Open Courseware
Taught By: Erica James
Course Description: This course is an introduction to the cross-cultural study of bio-medical ethics. It examines moral foundations of the science and practice of western bio-medicine through case studies of abortion, contraception, cloning, organ transplantation, and other issues. It also evaluates challenges that new medical technologies pose to the practice and availability of medical services around the globe, and to cross-cultural ideas of kinship and personhood. It discusses critiques of the bio-medical tradition from anthropological, feminist, legal, religious, and cross-cultural theorists.

The Course: Videogame Theory and Analysis
The School: MIT, via Open Courseware
Taught By: Alice Robison
Course Description: This course will serve as an introduction to the interdisciplinary academic study of videogames, examining their cultural, educational, and social functions in contemporary settings. By playing, analyzing, and reading and writing about videogames, we will examine debates surrounding how they function within socially situated contexts in order to better understand games’ influence on and reflections of society.

The Course: Sets, Counting and Probability
The School: Harvard, via the Open Learning Initiative
Taught By: Paul G. Bamberg
Course Description: This online math course develops the mathematics needed to formulate and analyze probability models for idealized situations drawn from everyday life. Topics include elementary set theory, techniques for systematic counting, axioms for probability, conditional probability, discrete random variables, infinite geometric series, and random walks. Applications to card games like bridge and poker, to gambling, to sports, to election results, and to inference in fields like history and genealogy, national security, and theology.

The Course: Introduction to Aerospace Engineering and Design
The School: MIT, via Open Courseware
Taught By: Dava Newman
Course Description: The fundamental concepts and approaches of aerospace engineering are highlighted through lectures on aeronautics, astronautics, and design. Active learning aerospace modules make use of information technology. Student teams are immersed in a hands-on, lighter-than-air (LTA) vehicle design project, where they design, build, and fly radio-controlled LTA vehicles. The connections between theory and practice are realized in the design exercises.

The Course: Shakespeare After All: The Later Plays
The School: Harvard
Taught By: Marjorie Garber
Course Description: This free online Shakespeare course focuses on Shakespeare’s later plays beginning with Measure for Measure and ending with The Tempest. This course takes note of key themes, issues, and interpretations of the plays, focusing on questions of genre, gender, politics, family relations, silence and speech, and cultural power from both above and below (royalty, nobility, and the court; clowns and fools).

The Course: Securing Digital Democracy
The School: University of Michigan, via Coursera
Taught By: J. Alex Halderman
Course Description: Computer technology has transformed how we participate in democracy. The way we cast our votes, the way our votes are counted, and the way we choose who will lead are increasingly controlled by invisible computer software. Most U.S. states have adopted electronic voting, and countries around the world are starting to collect votes over the Internet. However, computerized voting raises startling security risks that are only beginning to be understood outside the research lab, from voting machine viruses that can silently change votes to the possibility that hackers in foreign countries could steal an election. This course will provide the technical background and public policy foundation that 21st century citizens need to understand the electronic voting debate. You’ll come away from this course understanding why you can be confident your own vote will count — or why you should reasonably be skeptical.

The Course: Galaxies and Cosmology
The School: California Institute of Technology, via Coursera
Taught By: S. George Djorgovski
Course Description: This class is an introduction to modern extragalactic astronomy and cosmology, i.e., the part of astrophysics that deals with the structure and evolution of the universe as a whole. It will cover subjects including: relativistic cosmological models and their parameters, extragalactic distance scale, cosmological tests, composition of the universe, dark matter, and dark energy; the hot big bang, cosmic nucleosynthesis, recombination, and cosmic microwave background; formation and evolution of structure in the universe; galaxy clusters, large-scale structure and its evolution; galaxies, their properties and fundamental correlations; formation and evolution of galaxies; star formation history of the universe; quasars and other active galactic nuclei, and their evolution; structure and evolution of the intergalactic medium; diffuse extragalactic backgrounds; the first stars, galaxies, and the reionization era.

The Course: Fantasy and Science Fiction: The Human Mind, Our Modern World
The School: University of Michigan, via Coursera
Taught By: Eric Rabkin
Course Description: Fantasy is a key term both in psychology and in the art and artifice of humanity. The things we make, including our stories, reflect, serve, and often shape our needs and desires. We see this everywhere from fairy tale to kiddie lit to myth; from “Cinderella” to Alice in Wonderland to Superman; from building a fort as a child to building ideal, planned cities as whole societies. Fantasy in ways both entertaining and practical serves our persistent needs and desires and illuminates the human mind. Fantasy expresses itself in many ways, from the comfort we feel in the godlike powers of a fairy godmother to the seductive unease we feel confronting Dracula. This course will explore Fantasy in general and Science Fiction in specific both as art and as insights into ourselves and our world.

The Course: Bits: The Computer Science of Digital Information
The School: Harvard, via the Open Learning Initiative
Taught By: Harry R. Lewis
Course Description: This course focuses on information as quantity, resource, and property. We study the application of quantitative methods to understanding how information technologies inform issues of public policy, regulation, and law. How are music, images, and telephone conversations represented digitally, and how are they moved reliably from place to place through wires, glass fibers, and the air? Who owns information, who owns software, what forms of regulation and law restrict the communication and use of information, and does it matter? How can personal privacy be protected at the same time that society benefits from communicated or shared information?


10 Habits of Happy Couples

 

Improve your relationship today with these tried-and-true tips

By Mark Goulston

Photo by: Comstock

What does it take to be happy in a relationship? If you're working to improve your marriage, here are the 10 habits of happy couples.

1. Go to bed at the same time
Remember the beginning of your relationship, when you couldn't wait to go to bed with each other to make love? Happy couples resist the temptation to go to bed at different times. They go to bed at the same time, even if one partner wakes up later to do things while their partner sleeps.

2. Cultivate common interests
After the passion settles down, it's common to realize that you have few interests in common. But don't minimize the importance of activities you can do together that you both enjoy. If common interests are not present, happy couples develop them. At the same time, be sure to cultivate interests of your own; this will make you more interesting to your mate and prevent you from appearing too dependent.

3. Walk hand in hand or side by side
Rather than one partner lagging or dragging behind the other, happy couples walk comfortably hand in hand or side by side. They know it's more important to be with their partner than to see the sights along the way.

4. Make trust and forgiveness your default mode
If and when they have a disagreement or argument, and if they can't resolve it, happy couples default to trusting and forgiving rather than distrusting and begrudging.

5. Focus more on what your partner does right than what he or she does wrong
If you look for things your partner does wrong, you can always find something. If you look for what he or she does right, you can always find something, too. It all depends on what you want to look for. Happy couples accentuate the positive.

6. Hug each other as soon as you see each other after work
Our skin has a memory of "good touch" (loved), "bad touch" (abused) and "no touch" (neglected). Couples who say hello with a hug keep their skin bathed in the "good touch," which can inoculate your spirit against anonymity in the world.

7. Say "I love you" and "Have a good day" every morning
This is a great way to buy some patience and tolerance as each partner sets out each day to battle traffic jams, long lines and other annoyances.

8. Say "Good night" every night, regardless of how you feel
This tells your partner that, regardless of how upset you are with him or her, you still want to be in the relationship. It says that what you and your partner have is bigger than any single upsetting incident.

9. Do a "weather" check during the day
Call your partner at home or at work to see how his or her day is going. This is a great way to adjust expectations so that you're more in sync when you connect after work. For instance, if your partner is having an awful day, it might be unreasonable to expect him or her to be enthusiastic about something good that happened to you.

10. Be proud to be seen with your partner
Happy couples are pleased to be seen together and are often in some kind of affectionate contact -- hand on hand or hand on shoulder or knee or back of neck. They are not showing off but rather just saying that they belong with each other.

Happy couples have different habits than unhappy couples. A habit is a discrete behavior that you do automatically and that takes little effort to maintain. It takes 21 days of daily repetition for a new behavior to become a habit. So select one of the behaviors in the list above to do for 21 days and voila, it will become a habit...and make you happier as a couple. And if you fall off the wagon, don't despair; just apologize to your partner, ask their forgiveness and recommit yourself to getting back in the habit.


Lenovo's Yoga 3 Pro gets lighter and thinner, adds watchband hinge

 

The Yoga 3 Pro takes Lenovo's familiar formula, and makes it lighter and thinner

To say Microsoft's 2-in-1 approach to Windows 8 (and 8.1) didn't go as planned would be a colossal understatement. But it did spawn a few success stories, including the Lenovo Yoga. The company is back with the 2014 version of its Windows flagship, the Yoga 3 Pro.

If you missed the Yoga train the first two times around the bend, here's a quick recap. The 2-in-1 has a hinged design that lets the device rotate 360 degrees. Imagine the last standard laptop you used – only with its screen stretching all the way around, folding into a tablet (with keyboard tucked on its backside).

One of the downsides to this form factor is that, with the keyboard folded back, it's going to be thicker than most standard tablets. The Yoga 2 Pro helped to cut down on that, but the Yoga 3 Pro moves farther in that direction.

Tent: one of the Yoga 3 Pro's multiple modes

When opened, the new model is 12.8 mm (half an inch) thick. That's going to make for a very thin laptop, and a tablet that's 17 percent less beefy than the last Yoga. Just remember, though, that in tablet mode it's going to be twice as thick – making it roughly 241 percent thicker than an iPad Air.

At 1.19 kg (2.62 lb), it's also 14 percent lighter than the Yoga 2 Pro. That is, however, 154 percent heavier than the iPad Air.
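Both percentages fall out of simple spec-sheet arithmetic, assuming the original iPad Air's published figures of roughly 7.5 mm and 469 g:

```python
# Percent-thicker / percent-heavier comparisons (assumed iPad Air: 7.5 mm, 469 g).
yoga_tablet_mm = 12.8 * 2    # folded into tablet mode, roughly double the laptop thickness
yoga_weight_g = 1190

ipad_mm, ipad_g = 7.5, 469

pct_thicker = (yoga_tablet_mm / ipad_mm - 1) * 100   # ~241 percent thicker
pct_heavier = (yoga_weight_g / ipad_g - 1) * 100     # ~154 percent heavier
print(round(pct_thicker), round(pct_heavier))         # 241 154
```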

Unlike an iPad, though, you get a full desktop operating system (either Windows 8.1 or 8.1 Pro) with an Intel Core M 5Y70 processor (say hello to Broadwell). RAM is set at 8 GB, and you can choose from 256 GB and 512 GB storage options.

Lenovo is pushing the new model's "watchband hinge," which looks about how it sounds. The company says that the hinge is made of over 800 pieces of steel and aluminum, and played a part in shaving so much thickness off of last year's model. It also gives the Yoga 3 Pro six hinge points, instead of the two found in earlier Yogas.

The Yoga 3 Pro will be available soon, starting at US$1,350

The new Yoga retains the same impressive screen specs we saw in last year's model. The 13-in display has QHD+ (3,200 x 1,800) resolution, which comes out to 276 pixels per inch (PPI). By comparison, the 13-in Retina MacBook Pro, which also has a very sharp display, comes in at "just" 227 PPI.
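The PPI figure follows from the panel's pixel count and diagonal size; a quick check, assuming the Yoga's 13.3-in diagonal and the Retina MacBook Pro's 2,560 x 1,600 panel:

```python
import math

# Pixels per inch from resolution and (assumed) diagonal size.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(3200, 1800, 13.3)))   # ~276 for the Yoga 3 Pro
print(round(ppi(2560, 1600, 13.3)))   # ~227 for the 13-in Retina MacBook Pro
```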

Starting at US$1,350, though, the Yoga 3 Pro is a bit more expensive than Apple's professional 13-in laptop. The device is up for pre-order now, with Lenovo listing it as shipping "within three weeks."

Product page: Lenovo

 

Super-resolved fluorescence microscopy pioneers awarded 2014 Nobel Prize in Chemistry

 

The prize-winning techniques have removed the theoretical limits of optical microscopes (Photo: Shutterstock)

Ever since Antonie van Leeuwenhoek turned his simple microscope on a bit of pond water in the 17th century, optical microscopes have been a key tool for biologists. Unfortunately, they're rather limited as to the smallness of what they can see – or at least, they were. This year's Nobel Laureates in Chemistry, Eric Betzig, Stefan W. Hell and William E. Moerner, changed all that. Their discovery of two methods to bypass the physical limits of optical microscopes led to the creation of the field of nanomicroscopy.

In 1873, Ernst Abbe discovered that there was a seemingly insurmountable limit to how powerful an optical microscope can be. To put it very simply, he found that light waves can't bend around features much smaller than their own wavelength. According to his calculations, the wavelengths of visible light mean that the best resolution an optical microscope can achieve is about 0.2 micrometers, roughly half a wavelength. For over a century, this set the lower limit for optical studies, preventing the direct observation of nanoscale structures such as individual molecules.
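Abbe's limit can be written as d = λ / (2 NA), where λ is the wavelength of the light and NA is the numerical aperture of the objective; a quick worked example, assuming green light at 550 nm and a high-end oil-immersion objective with NA around 1.4:

```python
# Abbe diffraction limit: smallest resolvable feature d = wavelength / (2 * NA).
# Values are illustrative (green light, oil-immersion objective).
wavelength_nm = 550.0
numerical_aperture = 1.4

d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"{d_nm:.0f} nm, i.e. about {d_nm / 1000:.2f} micrometers")   # ~196 nm, ~0.2 um
```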

What the new Laureates did wasn’t a violation of Abbe’s limit, but more of a workaround. The basic idea was that if light couldn’t be made to bend around nano-sized objects, then the nano-objects could be made to radiate their own light. It’s like the difference between trying to spot a tiny object by shining a giant searchlight on a field, and leaving the field dark but decorating the object with tiny Christmas lights. This insight led to the development of two new methods of what is now called nanomicroscopy.

Diagram of STED microscopy

STED Microscopy

The first method is called STimulated Emission Depletion (STED) microscopy, which Stefan Hell demonstrated in 2000, though he'd been working on the problem since his student days in the 1990s. The key to the method was to get around the diffraction limit by not just using molecules that fluoresce, but by stimulating them to do so with a laser.

Hell’s technique used fluorescent molecules, such as antibodies that link to specific structures; for example, DNA strands. When these are hit with a pulse of light, they glow in return. Normally, this produces an image like a cluster of glowing wool, but the STED microscope solves this problem by using a pair of lasers. One sets the molecules glowing, and the other eliminates all but the desired molecules from the picture.

As the first laser scans over a specimen and sets the molecules aglow, a second laser follows close behind. This is tuned so that larger molecules absorb the laser light, causing them to discharge their energy and cease glowing. The nano-sized molecules are too small to be affected and keep giving off light. The result is an image of extremely fine detail.

Diagram of single molecule microscopy

Single Molecule Microscopy

The second method is called Single-Molecule Microscopy. It was developed by Eric Betzig and William Moerner, working independently of one another, and was used by Betzig for the first time in 2006. It's based on the ability to see a single fluorescent molecule, and on using this to build up an image of extremely fine detail.

In 1989, W. E. Moerner was the first to measure the light absorption of a single molecule. This was followed in 1997, when Moerner and Roger Tsien at the University of California, San Diego were studying green fluorescent protein (GFP). Moerner discovered that it was possible to turn the fluorescence of GFP on and off by using different wavelengths of light. By shining light with a wavelength of 488 nanometers, he could get a molecule to glow, but the glow would eventually die and not return when the light was shone on it again. However, by hitting the molecule with light at 405 nanometers, he could revive it so that it glowed once more.

This basic idea was expanded upon by Moerner and Betzig independently to create a new microscopy technique that exploited the on/off system. In this, a specimen would be treated with a number of different fluorescent molecules that bonded to different molecular structures, but always at least 0.2 micrometers apart. Each of these would glow at a different time when stimulated. By activating each fluorescent molecule type in turn, then switching it off, a series of different images was produced.

Diagram of the physical limits of microscopy

These images were then scanned individually and subjected to a statistical algorithm that sharpened each one. When the images were combined in layers, the result was a single image showing complex, high-resolution structures.
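Conceptually the reconstruction is a localize-and-accumulate loop over many sparse frames; a minimal sketch of that idea (illustrative only, not the Laureates' actual algorithms) might look like this:

```python
import numpy as np

def localize_and_accumulate(frames, threshold, upscale=10):
    """Toy single-molecule localization: in each sparse frame, find the bright
    blob, estimate its centre as an intensity-weighted centroid, and mark the
    centre on a grid finer than the camera pixels."""
    h, w = frames[0].shape
    canvas = np.zeros((h * upscale, w * upscale))
    for frame in frames:
        ys, xs = np.nonzero(frame > threshold)
        if len(xs) == 0:
            continue
        weights = frame[ys, xs]
        # One centroid per frame keeps the toy simple; real pipelines segment
        # and fit every isolated emitter in the frame separately.
        cy = np.average(ys, weights=weights)
        cx = np.average(xs, weights=weights)
        canvas[int(cy * upscale), int(cx * upscale)] += 1
    return canvas
```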

Aside from removing the need for a lot of squinting, these methods have removed the theoretical lower limit to optical microscopy. According to the Nobel Foundation, they are already finding applications, such as the ability to look at individual molecules instead of studying "average" molecules in bulk samples numbering millions. They are currently being used to study synapses in Alzheimer's and Huntington's disease, and to gain a better understanding of protein development in embryos.

Source: The Nobel Foundation

 

Megapixel camera? Try gigapixel

 

The camera's resolution is five times better than 20/20 human vision over a 120 degree horizontal field.

The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual "dots" of data -- the higher the number of pixels, the better the resolution of the image.

The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public.

The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke's Pratt School of Engineering, along with scientists from the University of Arizona, the University of California -- San Diego, and Distant Focus Corp.

"Each one of the microcameras captures information from a specific area of the field of view," Brady said. "A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later."

"The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras," Brady said. "While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics."

The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.

"Traditionally, one way of making better optics has been to add more glass elements, which increases complexity," Gehm said. "This isn't a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive."

"Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements," Gehm said. "A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don't miss anything."

The prototype camera itself is two-and-a-half feet square and 20 inches deep. Interestingly, only about three percent of the camera is made up of optical elements, while the rest is made up of the electronics and processors needed to assemble all the information gathered. Obviously, the researchers said, this is the area where additional work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.

"The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating," Brady said, "As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow."

Details of the new camera were published online in the journal Nature. Co-authors of the Nature report with Brady and Gehm include Steve Feller, Daniel Marks, and David Kittle from Duke; Dathon Golish and Estabon Vera from Arizona; and Ron Stack from Distant Focus. The team's research was supported by the Defense Advanced Research Projects Agency (DARPA).

The skin cancer selfie: Gigapixel camera helps diagnose early

 


Melanoma is the fifth most common cancer type in the United States, and it's also the deadliest form of skin cancer, causing more than 75 percent of skin-cancer deaths. If caught early enough, though, it is almost always curable. Now a camera capable of taking snapshots of the entire human body and rendering high-resolution images of a patient's skin may help doctors spot cancer early and save lives.

Developed by a team of researchers at Duke University in North Carolina, USA, the "gigapixel whole-body photographic camera" is essentially three dozen cameras in one, allowing the researchers to image the entire body down to a freckle. The research will be presented at The Optical Society's (OSA) 98th Annual Meeting, Frontiers in Optics, being held Oct. 19-23 in Tucson, Arizona, USA.

"The camera is designed to find lesions potentially indicating skin cancers on patients at an earlier stage than current skin examination techniques," said Daniel Marks, one of the co-authors on the paper. "Normally a dermatologist examines either a small region of the skin at high resolution or a large region at low resolution, but a gigapixel image doesn't require a compromise between the two."

Although whole-body photography has already been used to identify melanomas and exclude non-dangerous "stable" lesions, the approach is typically limited by the resolution of the cameras used. A commercial camera with a wide-angle lens can easily capture an image of a person's entire body, but it lacks the resolution needed for a dermatologist to zoom in on one tiny spot. So dermatologists typically examine suspicious lesions with digital dermatoscopy, a technique for evaluating the colors and microstructures of suspicious skin that are not visible to the naked eye. The need for two types of images drives up costs and limits the possibilities for telemedicine.

The gigapixel camera developed by the Duke University team solves this problem by essentially combining 34 microcameras into one. With a structure similar to a telescope and its eyepieces, the camera pairs a precise but simple objective lens, which produces an imperfect image with known irregularities, with 34 microcameras arranged in a "dome" that correct these aberrations and form a continuous image of the scene. The exposure time and focus for each microcamera can be adjusted independently, and a computer can do a preliminary examination of the images to determine whether any areas require further attention from specialists.

Marks pointed out that although the resolution of the gigapixel camera is not as high as the best dermatoscope, it is significantly better than normal photography, allows for a larger imaging area than a dermatoscope and could be used for telemedicine, which could make the routine screening available to a larger number of people, even in remote locations.

The gigapixel imaging technology is based on the multiscale camera design, which is part of the Defense Advanced Research Projects Agency program "Advanced Wide Field-of-View Architectures for Image Reconstruction and Exploitation."

Though the camera will still have to prove effective in clinical trials before becoming routinely available to patients, the researchers have gathered enough preliminary data on a healthy volunteer to demonstrate that it has adequate resolution and the field of view needed for skin disease screening. The next step, they say, is to test how well it works in the clinic.


Story Source:

The above story is based on materials provided by The Optical Society. Note: Materials may be edited for content and length.


 

Country's economy plays role in Internet file-sharing patterns

 

October 8, 2014

Northwestern University

Peer-to-peer file sharing over the Internet is a popular alternative approach for people worldwide to get the digital content they want. But little is known about these users and systems because data is lacking. Now, in an unprecedented study of BitTorrent users, a research team has discovered two behavior patterns: most users are content specialists -- sharing music but not movies, for example; and users in countries with similar economies tend to download similar types of content.


Peer-to-peer file sharing of movies, television shows, music, books and other files over the Internet has grown rapidly worldwide as an alternative approach for people to get the digital content they want -- often illicitly. But, unlike the users of Amazon, Netflix and other commercial providers, little is known about users of peer-to-peer (P2P) systems because data is lacking.

Now, armed with an unprecedented amount of data on users of BitTorrent, a popular file-sharing system, a Northwestern University research team has discovered two interesting behavior patterns: most BitTorrent users are content specialists -- sharing music but not movies, for example; and users in countries with similar economies tend to download similar types of content -- those living in poorer countries such as Lithuania and Spain, for example, download primarily large files, such as movies.

"Looking into this world of Internet traffic, we see a close interaction between computing systems and our everyday lives," said Luís A. Nunes Amaral, a senior author of the study. "People in a given country display preferences for certain content -- content that might not be readily available because of an authoritarian government or inferior communication infrastructure. This study can provide a great deal of insight into how things are working in a country."

Amaral, a professor of chemical and biological engineering in the McCormick School of Engineering and Applied Science, and Fabián E. Bustamante, professor of electrical engineering and computer science, also at McCormick, co-led the interdisciplinary research team with colleagues from Universitat Rovira i Virgili in Spain.

Their study, published this week by the Proceedings of the National Academy of Sciences (PNAS), reports BitTorrent users in countries with a small gross domestic product (GDP) per capita were more likely to share large files, such as high-definition movies, than users in countries with a large GDP per capita, where small files such as music were shared.

Also, more than 50 percent of users' downloaded content fell into their top two downloaded content types, putting them in the content specialist, not generalist, category.
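A minimal sketch of that "specialist" measure (illustrative only; the paper's actual analysis is more involved):

```python
from collections import Counter

def top_two_share(content_types):
    """Fraction of a user's downloads that fall into their two most
    downloaded content types."""
    counts = Counter(content_types)
    top_two = sum(n for _, n in counts.most_common(2))
    return top_two / len(content_types)

# A user counts as a content specialist if most activity sits in two categories.
print(top_two_share(["music", "music", "music", "movies", "books", "tv"]))  # ~0.67
```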

"Our study serves as a window on society as a whole," Bustamante said. "It was very interesting to see the separations between users based purely on content. Individuals tend to interact only with others who are interested in the same content."

One goal of decentralized peer-to-peer file sharing is to make communication on the Internet more efficient. (In certain parts of the world, BitTorrent users are responsible for up to one-third of the total Internet traffic.) The BitTorrent protocol enables users to share large data files even when they don't have access to broadband connections, which often is the case in rural areas or less developed countries. BitTorrent breaks files into smaller pieces that can be shared quickly and easily from home computers over networks with lower bandwidth.
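The piece mechanism itself is easy to sketch: a file is cut into fixed-size chunks, each identified by a hash so peers can verify what they receive from strangers (a rough illustration, not the actual BitTorrent wire protocol):

```python
import hashlib

def split_into_pieces(path, piece_size=256 * 1024):
    """Split a file into fixed-size pieces and hash each one, roughly as
    BitTorrent does so that peers can verify pieces fetched from other peers."""
    pieces = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(piece_size)
            if not chunk:
                break
            pieces.append((hashlib.sha1(chunk).hexdigest(), chunk))
    return pieces
```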

The researchers analyzed 10,000 anonymous BitTorrent users from around the world during a typical month using data reported by users of the BitTorrent plugin Ono. File content types shared by users included small files, music, TV shows, movies and books. (The type of content was easily determined based on file size.)

The Ono app, developed by Bustamante and his lab, allows users to improve the performance of BitTorrent while reducing the impact of their traffic on Internet network providers. Ono users can give informed consent for research use of their activity, providing a rich source of data on which new studies and projects can be built.


Story Source:

The above story is based on materials provided by Northwestern University. Note: Materials may be edited for content and length.


Journal Reference:

  1. A. Gavalda-Miralles, D. R. Choffnes, J. S. Otto, M. A. Sanchez, F. E. Bustamante, L. A. N. Amaral, J. Duch, R. Guimera. Impact of heterogeneity and socioeconomic factors on individual behavior in decentralized sharing ecosystems. Proceedings of the National Academy of Sciences, 2014; DOI: 10.1073/pnas.1309389111

 

Three Trees

 

A 750-year-old sequoia tree

Japanese garden in Portland

Tree tunnel, California

Celebrities in close-up by Martin Schoeller

 

 

Meet Martin Schoeller, a professional photographer born in Germany who moved to New York in 1993 to work as an assistant to the famed photographer Annie Leibovitz, leaving three years later to pursue a career as a freelancer.

He has contributed photographs to magazines such as The New Yorker, Outside, Entertainment Weekly, Rolling Stone, GQ, Esquire and Vogue, among others, and has published three books: Close Up: Portraits 1998-2005 (2005), Female Bodybuilders (2008) and Martin Schoeller (2009).

Schoeller is best known for his brightly lit, plain-backdrop, low-gloss portraits that make heavy use of the close-up, giving real prominence to the details of the person or object he is photographing.

Word has it that some celebrities have vetoed the use of their photographs after seeing the final result. Would you have the nerve? (Not me.)

Martin Schoeller