Facebook wants machines to see the world through our eyes

https://www.technologyreview.com/2021/10/14/1037043/facebook-machine-learning-ai-vision-see-world-human-eyes/

We take it for granted that machines can recognize what they see in photos and videos. That ability rests on large data sets like ImageNet, a hand-curated collection of millions of photos used to train most of the best image-recognition models of the last decade. 

But the images in these data sets portray a world of curated objects—a picture gallery that doesn’t capture the mess of everyday life as humans experience it. Getting machines to see things as we do will take a wholly new approach. And Facebook’s AI lab wants to take the lead.

It is kick-starting a project, called Ego4D, to build AIs that can understand scenes and activities viewed from a first-person perspective—how things look to the people involved, rather than to an onlooker. Think motion-blurred GoPro footage taken in the thick of the action, instead of well-framed scenes taken by someone on the sidelines. Facebook wants Ego4D to do for first-person video what ImageNet did for photos.  

For the last two years, Facebook AI Research (FAIR) has worked with 13 universities around the world to assemble the largest ever data set of first-person video—specifically to train deep-learning image-recognition models. AIs trained on the data set will be better at controlling robots that interact with people, or interpreting images from smart glasses. “Machines will be able to help us in our daily lives only if they really understand the world through our eyes,” says Kristen Grauman at FAIR, who leads the project.

Such tech could support people who need assistance around the home, or guide people in tasks they are learning to complete. “The video in this data set is much closer to how humans observe the world,” says Michael Ryoo, a computer vision researcher at Google Brain and Stony Brook University in New York, who is not involved in Ego4D.

But the potential misuses are clear and worrying. The research is funded by Facebook, a social media giant that has recently been accused in the US Senate of putting profits over people’s well-being—as corroborated by MIT Technology Review’s own investigations.

The business model of Facebook, and other Big Tech companies, is to wring as much data as possible from people’s online behavior and sell it to advertisers. The AI outlined in the project could extend that reach to people’s everyday offline behavior, revealing what objects are around your home, what activities you enjoy, whom you spend time with, and even where your gaze lingers—an unprecedented degree of personal information.

“There’s work on privacy that needs to be done as you take this out of the world of exploratory research and into something that’s a product,” says Grauman. “That work could even be inspired by this project.”


The biggest previous data set of first-person video consists of 100 hours of footage of people in the kitchen. The Ego4D data set consists of 3,025 hours of video recorded by 855 people in 73 different locations across nine countries (US, UK, India, Japan, Italy, Singapore, Saudi Arabia, Colombia, and Rwanda).

The participants had different ages and backgrounds; some were recruited for their visually interesting occupations, such as bakers, mechanics, carpenters, and landscapers.

Previous data sets typically consisted of semi-scripted video clips only a few seconds long. For Ego4D, participants wore head-mounted cameras for up to 10 hours at a time and captured first-person video of unscripted daily activities, including walking along a street, reading, doing laundry, shopping, playing with pets, playing board games, and interacting with other people. Some of the footage also includes audio, data about where the participants’ gaze was focused, and multiple perspectives on the same scene. It’s the first data set of its kind, says Ryoo.

FAIR has also launched a set of challenges that it hopes will focus other researchers’ efforts on developing this kind of AI. The team anticipates algorithms built into smart glasses, like Facebook’s recently announced Ray-Bans, that record and log the wearers’ day-to-day lives. It means that augmented- or virtual-reality “metaverse” apps could, in theory, answer questions like “Where are my car keys?” or “What did I eat and who did I sit next to on my first flight to France?” Augmented-reality assistants could understand what you’re trying to do and offer instructions or useful social cues.

It’s sci-fi stuff, but closer than you think, says Grauman. Large data sets accelerate the research. “ImageNet drove some big advances in a short time,” she says. “We can expect the same for Ego4D, but for first-person views of the world instead of internet images.”

Once the footage was collected, crowdsourced workers in Rwanda spent a total of 250,000 hours watching the thousands of video clips and writing millions of sentences that describe the scenes and activities filmed. These annotations will be used to train AIs to understand what they are watching.

Where this tech ends up and how quickly it develops remain to be seen. FAIR is planning a competition based on its challenges in June 2022. It is also important to note that FAIR, the research lab, is not the same as Facebook, the media megalodon. In fact, insiders say that Facebook has ignored technical fixes that FAIR has come up with for its toxic algorithms. But Facebook is paying for the research, and it is disingenuous to pretend the company is not very interested in its application.

Sam Gregory at Witness, a human rights organization that specializes in video technology, says this technology could be useful for bystanders documenting protests or police abuse. But he thinks those benefits are outweighed by concerns around commercial applications. He notes that it is possible to identify individuals from how they hold a video camera. Gaze data would be even more revealing: “It’s a very strong indicator of interest,” he says. “How will gaze data be stored? Who will it be accessible to? How might it be processed and used?” 

“Facebook’s reputation and core business model ring a lot of alarm bells,” says Rory Mir at the Electronic Frontier Foundation. “At this point many are aware of Facebook’s poor track record on privacy, and their use of surveillance to influence users—both to keep users hooked and to sell that influence to their paying customers, the advertisers.” When it comes to augmented and virtual reality, Facebook is seeking a competitive advantage, he says: “Expanding the amount and types of data it collects is essential.”

When asked about its plans, Facebook was unsurprisingly tight-lipped: “Ego4D is purely research to promote advances in the broader scientific community,” says a spokesperson. “We don’t have anything to share today about product applications or commercial use.”

via Technology Review Feed – Tech Review Top Stories https://ift.tt/1XdUwhl

October 14, 2021 at 07:05AM

Driver-assistance technology can fail in rain, leading to crashes in AAA study

https://www.autoblog.com/2021/10/15/driver-assistance-systems-rain-crashes-aaa-study/


Human drivers don’t necessarily see the road ahead as well when it rains, and it turns out that driver-assistance technology doesn’t either. The systems that help your car automatically brake and stay within its lane are significantly impaired by rain, according to a study by AAA released Thursday.

American Automobile Association researchers found that automatic emergency braking, in several instances during testing conducted in simulated moderate-to-heavy rainfall, failed to detect stopped vehicles ahead, resulting in crashes. Lane-keeping technology also fared badly.

AAA cautioned drivers not to rely on these systems in the rain—and to remain vigilant of them even in ideal conditions.

“Vehicle safety systems rely on sensors and cameras to see road markings, other cars, pedestrians and roadway obstacles. So naturally, they are more vulnerable to environmental factors like rain,” said Greg Brannon, AAA’s director of automotive engineering and industry relations. “The reality is people aren’t always driving around in perfect, sunny weather so we must expand testing and take into consideration things people actually contend with in their day-to-day driving.”

Advanced driver-assistance systems, or ADAS, are common in newer vehicles. They do not perform autonomous driving, but they can automate some limited driving tasks such as adaptive cruise control and staying centered in one’s lane. Auto emergency braking has been shown to significantly reduce rear-end crashes in tests by insurance groups.

For its tests, AAA employed a 2020 Buick Enclave Avenir, a 2020 Hyundai Santa Fe, a 2020 Toyota RAV4 and a 2020 Volkswagen Tiguan.

No test car crashed into a stopped vehicle under dry, ideal conditions. But then researchers turned on the simulated rainfall—and 17% of the test runs resulted in crashes at speeds of 25 mph (40 km/h). At 35 mph, the crash rate rose to 33%.

You can extrapolate from there as to the dangers at highway speeds.

Researchers simulated rainfall in the vehicles’ field of vision with a spray nozzle that obscured the sensors in the windshield. That way, they were able to keep the roadway dry. AAA noted that wet roads in real driving conditions could result in even higher crash rates.

As for lane-keeping technology, vehicles crossed lane markers 37% of the time during ideal conditions in the AAA test — and that rate jumped to 69% once rain was added.

This is not the first AAA study to note shortcomings in driver-assistance systems. On the bright side, this study noted that merely having a dirty or bug-spattered windshield had little effect on the systems’ sensors.

AAA emphasizes that while these systems have potential, they are no match for an attentive driver. For driving in rain, AAA offers these tips:

  • Keep the windshield clean and ensure that the wipers are not streaking it.
  • Slow down and avoid hard braking and sharp turning. If possible, follow in the tracks of other vehicles.
  • Increase following distance to 5-6 seconds behind the vehicle ahead.
  • Do not use cruise control; this helps you stay alert and respond quickly if the car’s tires lose traction with the road.
  • If the car begins to hydroplane, ease off the accelerator to gradually decrease speed until the tires regain traction, and continue to look and steer where you want to go. Don’t jam on the brakes—this can cause further traction loss.

Reuters was used in this report.

via Autoblog https://ift.tt/1afPJWx

October 15, 2021 at 09:57AM