Machine-Vision Algorithm Learns to Judge People by Their Faces

Social psychologists have long known that humans make snap judgements about each other based on nothing more than the way we look and, in particular, our faces. We use these judgements to determine whether a new acquaintance is trustworthy or clever or dominant or sociable or humorous and so on.

These decisions may or may not be right and are by no means objective, but they are consistent. Given the same face in the same conditions, people tend to judge it in the same way.

And that raises an interesting possibility. Rapid advances in machine vision and facial recognition have made it straightforward for computers to recognize a wide range of human facial expressions and even to rate faces by attractiveness. So is it possible for a machine to look at a face and get the same first impressions that humans make?

Today, we get an answer thanks to the work of Mel McCurrie at the University of Notre Dame and a few buddies. They’ve trained a machine-learning algorithm to decide whether a face is trustworthy or dominant in the same way as humans do.

Their method is straightforward. The first step in any machine-learning process is to create a data set that the algorithm can learn from. That means a set of pictures of faces labeled with the way people judge them—whether trustworthy, dominant, clever, and so on.

McCurrie and co create this using a website called TestMyBrain.org, a kind of citizen science project that measures various psychological attributes of the people who visit. The site is one of the most popular brain testing sites on the Web, with over 1.6 million participants.

The team asked participants to rate 6,300 black-and-white pictures of faces. Each face was rated by 32 different people for trustworthiness and dominance and by 15 people for IQ and age.
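
A minimal sketch of how such crowd ratings might be collapsed into training labels, assuming (as is common for crowd-rated data, though not confirmed by the paper) that each face's per-rater scores are averaged per attribute; the data layout and names here are illustrative, not the authors' code:

    from collections import defaultdict

    def aggregate_ratings(ratings):
        """Average per-rater scores into one label per (image, attribute).

        `ratings` is a list of (image_id, attribute, score) tuples --
        e.g. 32 raters per face for trustworthiness and dominance,
        15 for IQ and age."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for image_id, attribute, score in ratings:
            sums[(image_id, attribute)] += score
            counts[(image_id, attribute)] += 1
        return {key: sums[key] / counts[key] for key in sums}

    # Toy example with made-up scores:
    print(aggregate_ratings([
        ("face_001", "trustworthiness", 6.0),
        ("face_001", "trustworthiness", 4.0),
    ]))  # {('face_001', 'trustworthiness'): 5.0}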

An interesting feature of these ratings is that there is no objective answer—the test simply records the opinion of the evaluator. Of course, it is possible to measure IQ and age and work out how well people are able to guess these values. But McCurrie and co are not interested in this. All they want to measure is the range of people’s impressions and then train a machine to reproduce the same results.

Having gathered this data, the team used 6,000 of the images to train their machine-vision algorithm. They used a further 200 images to fine-tune the machine-vision parameters. All this trains the machine to judge faces in the same way that humans do.

McCurrie and co saved the last 100 images to test the machine-vision algorithm—in other words, to see whether it jumps to the same conclusions that humans do.
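
As a rough sketch of that split (the shuffling and variable names are assumptions for illustration, not the authors' code), the 6,300 rated faces break down as follows:

    import numpy as np

    # Sketch of the 6,000 / 200 / 100 split described above.
    rng = np.random.default_rng(seed=0)
    indices = rng.permutation(6300)

    train_idx = indices[:6000]    # fit the model's weights
    val_idx = indices[6000:6200]  # tune the machine-vision parameters
    test_idx = indices[6200:]     # held out to compare against humans

    print(len(train_idx), len(val_idx), len(test_idx))  # 6000 200 100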

The results make for interesting reading. Of course, the machine reproduces the same behavior that it has learned from humans. When presented with a face, the machine gives more or less the same values for trustworthiness, dominance, age, and IQ as a human would.

McCurrie and co are able to tease apart how the machine does this. For example, they can tell which parts of the face the machine is using to make its judgements.

The team does this by covering different parts of a face and asking the machine for its judgement. If the outcome differs significantly from the usual value, they conclude that this part of the face must be important. In this way, they can tell which parts of the face the machine relies on most.
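
The idea can be sketched as a standard occlusion test: slide a blank patch across the image, re-run the model each time, and record how far the prediction moves from the baseline. The stub predictor below is an assumption so the sketch runs on its own; it stands in for a trained network and is not the paper's model:

    import numpy as np

    def predict(image):
        # Stand-in for a trained impression model so the sketch runs;
        # the real judge would be a neural network.
        return float(image.mean())

    def occlusion_map(image, patch=16, stride=8):
        """Score how much covering each region shifts the prediction."""
        baseline = predict(image)
        h, w = image.shape
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                occluded = image.copy()
                y, x = i * stride, j * stride
                occluded[y:y + patch, x:x + patch] = image.mean()
                heat[i, j] = abs(predict(occluded) - baseline)
        return heat  # large values = regions the model relies on

    face = np.random.rand(64, 64)     # placeholder for a face image
    print(occlusion_map(face).shape)  # (7, 7) with these settings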

Curiously, these turn out to be similar to the parts of the face that humans rely on. Social psychologists know that humans tend to look at the mouth when assessing trustworthiness and that a lowered brow is often associated with dominance.

And these are exactly the areas that the machine-vision algorithm learns to look at from the training data. “These observations indicate that our models have learned to look in the same places that humans do, replicating the way we judge high-level attributes in each other,” say McCurrie and co.

That leads to a number of interesting applications. McCurrie and co first apply it to acting. They use the machine to assess the trustworthiness and dominance of Edward Snowden and Julian Assange from pictures of their faces. They then use the machine to make the same assessment of the actors who play them in two recent movies—Joseph Gordon-Levitt and Benedict Cumberbatch, respectively.

In effect this predicts how a crowd might assess the similarity between an actor and the person he or she portrays.

The results are clear. It turns out that the machine rates both actors in a similar way to the humans they portray—all score poorly in trustworthiness, for example. “Our models output remarkably similar predictions between the subjects and their actors, attesting to the accuracy of the portrayals in the films,” say McCurrie and co.
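
One way to quantify that similarity, sketched below with invented numbers rather than the paper's actual outputs, is simply the mean absolute gap between the two sets of trait predictions:

    def trait_gap(subject_scores, actor_scores):
        """Mean absolute difference between two trait-prediction dicts."""
        shared = subject_scores.keys() & actor_scores.keys()
        return sum(abs(subject_scores[t] - actor_scores[t])
                   for t in shared) / len(shared)

    # Illustrative values only -- not results from the paper.
    snowden = {"trustworthiness": 0.31, "dominance": 0.42}
    gordon_levitt = {"trustworthiness": 0.29, "dominance": 0.45}
    print(trait_gap(snowden, gordon_levitt))  # small gap = similar impressions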

But the team can go further. They apply the machine-vision algorithm to each frame in a movie, which allows them to see how the ratings change over time. This provides a measure of the way people’s perceptions might shift from moment to moment. And that’s something that could be used in research, marketing, political campaigning, and so on.
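
A frame-by-frame pass like that could look roughly like the loop below, which assumes OpenCV for video decoding and a `model` callable standing in for the trained network; the pipeline details are guesses, not the authors' code:

    import cv2  # OpenCV for video decoding

    def score_video(path, model):
        """Apply a face-impression model to every frame of a movie."""
        scores = []
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append(model(gray))  # one prediction per frame
        cap.release()
        return scores  # a time series of perceived traits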

The work also suggests future avenues to pursue. One possibility is to test how first impressions vary across cultural or demographic groups.

All this makes it possible to start teasing apart the factors that contribute to our preconceptions, which often depend on subtle social cues. It may also allow machines to predict and reproduce them.

A fascinating corollary to this is how this kind of research could influence human behavior. If somebody discovered that their face was perceived as untrustworthy, how might that person react? Might it be possible to learn how to change this perception, perhaps by changing facial expressions? Interesting work!

Ref: http://ift.tt/2eHIPQx: Predicting First Impressions with Deep Learning 

 

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2dYEXhh
via IFTTT

How NASA Found The Lost ExoMars Lander So Quickly

Europe’s Schiaparelli Mars lander did not have the smooth landing its team had hoped for on October 19, but at least it didn’t stay missing for long. A high-resolution camera onboard a NASA satellite discovered the errant spacecraft’s parachute and other fragments within days of impact. But not all lost Mars landers have been found so quickly—another European spacecraft, Beagle-2, disappeared for more than a decade, from the time of its failed landing in 2003 all the way until 2015.

When it comes to misplaced landers, “The protocol [is] basically ‘find it as fast as we can,’ but how fast that is depends on what information is available to us,” says Alfred McEwen, a planetary geologist at the University of Arizona and the principal investigator for NASA’s High Resolution Imaging Science Experiment (HiRISE), a camera on the satellite that spotted Schiaparelli.

These days, the Mars Reconnaissance Orbiter, which HiRISE has hitched a ride on, lets scientists sniff out evidence of a missing lander before the trail goes cold. But other pieces of information—such as where a lander winds up, and how well scientists can track its likely landing site—turn out to be important too.

Disappearing acts

The Schiaparelli lander in the European Space Agency’s (ESA) ExoMars mission was intended to test out a new landing strategy and investigate the Red Planet’s atmosphere and surface. It almost made it to the ground unscathed. Schiaparelli detached from its mother ship, the Trace Gas Orbiter, shucked its heat shields as planned and deployed its parachute early but successfully. But its rockets, which were meant to fire for 30 seconds, only ignited for a few moments, meaning it came down way too fast. The European Space Agency lost contact with Schiaparelli shortly before it hit the dirt.

A day later, the low-resolution Context Camera on NASA’s Mars Reconnaissance Orbiter spotted the remains of Schiaparelli’s troubled descent. And on October 25, the HiRISE camera captured a closer look at three sites associated with Schiaparelli’s impact: a shallow crater made when the lander plowed into the Martian surface, the front heat shield, and the parachute and rear heat shield. “In this case, we see the three things we expect to see, they’re distinct, and it’s where we expect to find them,” McEwen says.

Beagle-2 wasn’t so lucky. ESA lost contact with Beagle-2 in 2003 after it detached from its mother ship, Mars Express. It wasn’t until 2015 that Beagle-2 resurfaced, spied just three miles or so from the center of its expected landing zone by the HiRISE camera. Pictures shot from different angles revealed reflections off its tilted solar panels. “We saw bright spots that changed from image to image, and that was a pretty strong clue that we were looking at something unnaturally flat,” McEwen says.
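
Those shifting bright spots suggest a simple change-detection idea, sketched below: difference two co-registered images and flag pixels that are bright in one view but not the other, as a specular glint off a tilted panel would be. The threshold and the assumption of perfect alignment are illustrative simplifications, not HiRISE's actual processing:

    import numpy as np

    def changing_bright_spots(img_a, img_b, threshold=0.5):
        """Flag pixels bright in one co-registered image but not the
        other -- the signature of a glint that moves with viewing
        angle. Both inputs are float arrays scaled to [0, 1]."""
        diff = np.abs(img_a - img_b)
        return np.argwhere(diff > threshold)  # (row, col) candidates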

Why the holdup?

One reason Beagle-2 stayed out of sight for so long is that the Mars Reconnaissance Orbiter didn’t reach the Red Planet until 2006. So it couldn’t start to hunt until several years after Beagle-2 vanished. And the older a landing site is, the harder it is to distinguish. Over time, dust settles on the surfaces of a spacecraft, making it less reflective and harder to see in photos. “In the case of Schiaparelli we see this very bright parachute…because it’s so fresh it’s really bright, brighter than anything else you can see on Mars except for polar frost and ice,” McEwen says.

The other key difference between Beagle-2 and Schiaparelli is that the older spacecraft stopped relaying information long before it touched down, making it impossible to track. “Whereas for Schiaparelli, there was information transmitted from it, including its location, all the way down to a point still above the surface, but much closer to where it ended up,” McEwen says. “We knew pretty much exactly where it should have come down, so we were able to target right to that location.”

Beagle-2’s resting place was eventually discovered. Other Mars landers are still missing, such as NASA’s Mars Polar Lander, which was lost in 1999. Seasonal winds compound the dusty cloak worn by landers that disappear in Mars’s arctic regions. “This is a high latitude, and that’s especially hard because the seasonal polar cap deposits dust and just completely wipes out any albedo [reflective] markings every year,” McEwen says.

Even the landing site from NASA’s Phoenix spacecraft, which successfully touched down in Mars’s northern polar region in 2008, would have become unrecognizable if we didn’t know where to look, McEwen says. “Knowing where it is, we could say, ah that little bump is the lander, but we would never be able to distinguish that from a rock if we didn’t already know where it was.”

The new images from Schiaparelli’s collision may help ESA piece together what went wrong, and how to avoid a repeat on future missions. But Schiaparelli’s well-documented landing might also help us find other missing spacecraft. “Maybe seeing what happened here will provide some clues,” McEwen says. “Maybe we should be looking more for shallow craters.”

from Popular Science – New Technology, Science News, The Future Now http://ift.tt/2f98zpQ
via IFTTT

You can now legally hack your own car or smart TV

Researchers can now probe connected devices, computers and cars for security vulnerabilities without risking a lawsuit. Last Friday, exemptions to the Digital Millennium Copyright Act (DMCA) approved by the Library of Congress took effect, allowing Americans to legally hack their own electronic devices. Researchers can lawfully reverse engineer products and consumers can repair their vehicle’s electronics, but the exemptions are only authorized for a two-year trial run.

Congress enacted similar legislation in 2014 that allows you to unlock your own smartphone. Until now, however, it was illegal to mess with the programs in your car, thermostat or tractor, thanks to strict provisions in the DMCA’s Section 1201. That applied even to researchers probing device security for flaws, a service that helps both the public and manufacturers. For example, researchers commandeered a Jeep on the road to show it could be done, an act that was technically illegal.

You could have also been sued just for trying to repair your own electronics. In a well-publicized example, John Deere told farmers that they have no right to root around in the software that runs their tractor even when they’re just trying to fix the damned thing. That issue alone prompted over 40,000 public comments to the US Copyright Office demanding stronger ownership rights.


The exemptions have certain restrictions — consumers are only allowed to do "good-faith" hacking on "lawfully-acquired" devices. That means, for instance, that you can still get in trouble if you gain unauthorized access to a device you don’t own. Also, researchers can’t probe internet services or public services like airlines, meaning that the jet hack reported last year would still be illegal today.

Groups like the Electronic Frontier Foundation, iFixit and Repair.org fought to have research and repair activities exempted from the DMCA, since they actually have nothing to do with copyright law. "You could be sued or even jailed for trying to understand the software in your devices, or for helping others do the same," the EFF wrote.

The new exemptions are nice, but critics are still fuming over the fact that they took a year to kick in and are only good for two years. Repair and research advocates say that the process for changing copyright law is unnecessarily expensive and onerous, too. "The one year delay … was not only a violation of law, not only pointless, but actively counterproductive," the EFF wrote. "DMCA 1201, and the rulemaking process, create unconstitutional restraints on speech and need to be struck down by a court or fixed by Congress."

Source: Library of Congress

from Engadget http://ift.tt/2evPeNS
via IFTTT

The Clever Pen on a Mission to Finally Kill the Tape Measure

Mladen Barbaric is sick of tape measures. They’re unruly and awkward. Using them on curved surfaces is like trying to gift wrap a golf club. “Tape measurers haven’t changed since 1869,” he says. As founder of design studio Pearl and the industrial designer behind the fitness-tracking Misfit Shine, the seizure-detecting Empatica Embrace, and a host of other products, Barbaric would know. Despite working in a mostly digital environment, most of the tools he uses while designing—rulers, calipers, and, yes, tape measures—are still analog.

“Tools today outside of the computer and phone are not really well thought out,” he says. “They’re just archaic.” But Barbaric has a replacement for old-school measuring tools. He’s calling it the InstruMMent 01. The multi-purpose gadget (now raising funds on Indiegogo) looks like a pen and works like a handheld surveyor’s wheel. At one end is your choice of a pen, pencil, or stylus; at the other is a black rubberized wheel designed to roll over flat and curved surfaces. As the wheel turns, sensors inside the gadget record the distance it travels in 0.1 mm increments. A laser at the tip of the roller helps you gauge, visually, where your measurement begins and ends.
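
The 0.1 mm readout follows from simple wheel geometry: each tick of the internal encoder corresponds to a fixed fraction of the wheel’s circumference. A back-of-the-envelope sketch, with a wheel size and tick count invented for illustration (not InstruMMent’s actual specs):

    import math

    WHEEL_DIAMETER_MM = 10.0    # assumed wheel size, not the real spec
    TICKS_PER_REVOLUTION = 314  # assumed encoder resolution

    def ticks_to_mm(ticks):
        """Convert encoder ticks into distance rolled, in millimetres."""
        circumference = math.pi * WHEEL_DIAMETER_MM
        return ticks * circumference / TICKS_PER_REVOLUTION

    print(round(ticks_to_mm(1), 2))  # ~0.1 mm per tick with these numbers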

The choice to make the object roll wasn’t initially obvious. “It took us a little while to figure out what the optimal form factor was,” Barbaric says. He knew the object had to be small and able to measure curved surfaces as well as flat ones. For Barbaric, the ideal measuring tool could be held in one hand, fit in a laptop bag, and pull double duty (hence the pen). The rolling mechanism ended up satisfying all of these requirements, while also making the tool more adaptable and accurate than a standard tape measure.

It also does more work than a tape measure. For starters, you can capture the dimensions of basically any object, straight or curved, and send them wirelessly to your phone, where they appear as a card with a photo and description. From the accompanying app, you can convert imperial units to metric (or vice versa), and translate the scaled quantities on a map or drawing to real-world units—a handy feature for anyone who works from blueprints. And you can program the tool’s laser to blink at predetermined increments, to help space measurements equally. (Barbaric hints that the device is capable of more, and could acquire more features in the future, but was tight-lipped about what those features might be.)
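
The app-side arithmetic those features imply is straightforward; here is a minimal sketch (the scale factors and function names are assumptions for illustration, not the app’s API):

    MM_PER_INCH = 25.4

    def mm_to_inches(mm):
        """Imperial/metric conversion of a measured length."""
        return mm / MM_PER_INCH

    def to_real_world(measured_mm, drawing_scale):
        """Translate a length rolled on a scaled drawing into
        real-world units; a 1:50 blueprint has drawing_scale = 50."""
        return measured_mm * drawing_scale

    print(mm_to_inches(254.0))      # 10.0 inches
    print(to_real_world(40.0, 50))  # 2000 mm in the real world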

Barbaric designed the 01 to make designers’ lives easier, and it does. The tool packs a lot of functionality into a small, attractive package that its intended audience will certainly find useful. But Barbaric, perhaps unsurprisingly, is finding that the device has applications outside the studio. “I’ve seen guys have bicep competitions,” he says. “And the first thing my wife did was measure our kid.”


from Wired Top Stories http://ift.tt/2eQeH7M
via IFTTT