FBI tells router users to reboot now to kill malware infecting 500k devices

https://ift.tt/2ILS198

The FBI is advising users of consumer-grade routers and network-attached storage devices to reboot them as soon as possible to counter Russian-engineered malware that has infected hundreds of thousands of devices.

Tech

via Ars Technica https://arstechnica.com

May 25, 2018 at 01:26PM

Alexa’s recording snafu was improbable, but inevitable

https://ift.tt/2LuFuV6


Amazon’s Alexa recently made headlines for one of the strangest consumer AI mistakes we’ve ever heard of: A family in Portland, Oregon, claims that the company’s virtual assistant recorded a conversation and sent it to a seemingly random person in the husband’s contact list. Alexa didn’t just make one slip-up — it made several that, when combined, led to a pretty remarkable breach of privacy. The company’s explanation, provided to news outlets yesterday, makes clear just how unlikely this whole situation was:

“Echo woke up due to a word in background conversation sounding like ‘Alexa,’” the statement reads. “Then, the subsequent conversation was heard as a ‘send message’ request. At which point, Alexa said out loud ‘To whom?’ At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, ‘[contact name], right?’ Alexa then interpreted background conversation as ‘right.’”
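To make that chain of mishearings concrete, here is a toy sketch of a send-message flow in which every step accepts a loose match. The scorer, the 0.5 threshold, and the flow itself are invented for illustration; this is not Amazon's actual pipeline:

```python
# Toy sketch of the confirmation chain Amazon describes. The scorer,
# threshold, and flow are invented; this is not Amazon's pipeline.
def matches(utterance: str, expected: str) -> float:
    """Toy scorer: fraction of the expected phrase's words heard."""
    words = expected.lower().split()
    return sum(w in utterance.lower() for w in words) / len(words)

def send_message_flow(snippets: list[str], contacts: list[str]):
    # Gate 1: background speech scores as a "send message" request.
    if matches(snippets[0], "send message") < 0.5:
        return None
    # Gate 2: the next snippet is matched against the contact list.
    contact = next((c for c in contacts if c.lower() in snippets[1].lower()), None)
    if contact is None:
        return None
    # Gate 3: anything resembling "right" counts as confirmation.
    if matches(snippets[2], "right") < 0.5:
        return None
    return contact  # the recording would now be sent to this contact

chatter = ["we should send the message about the floors",
           "Bob said the hardwood was fine",
           "right over there by the window"]
print(send_message_flow(chatter, ["Alice", "Bob"]))  # -> Bob
```

Each gate is individually plausible, but chained together they let unrelated chatter carry the whole flow to completion.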

That chain of events is, without question, absolutely wild. Given a handful of factors at play here, though, it was likely inevitable that Alexa would’ve goofed spectacularly at some point. I’m not a betting man, but let’s look at the numbers: Right after Christmas, Amazon confirmed that it has sold “tens of millions” of Alexa-enabled devices around the world. New research indicates that Google has for the first time overtaken Amazon as the world’s premier purveyor of smart speakers, but no matter — people are or were talking to at least 20 million Alexa devices around the world. That amounts to a huge number of interactions for Alexa to interpret every day, and it was only a matter of time before the right set of circumstances produced a situation that Alexa just couldn’t handle.
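A back-of-the-envelope calculation makes the point. The per-interaction failure rate and usage figures below are invented (Amazon has published no such numbers), but at this scale even a one-in-a-billion fluke becomes a near-certainty:

```python
# Odds that at least one "perfect storm" occurs, assuming independent
# interactions. The failure rate and usage figures are invented for
# illustration, not Amazon's numbers.
p_failure = 1e-9          # assumed chance any single interaction goes this wrong
devices = 20_000_000      # "at least 20 million" Alexa devices
interactions_per_day = 5  # assumed average interactions per device
days = 365

trials = devices * interactions_per_day * days
p_at_least_one = 1 - (1 - p_failure) ** trials
print(f"{p_at_least_one:.2%}")  # ~100% within a year, even at one in a billion
```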

Alexa’s cascading failure here isn’t simply due to a numbers game, either. It’s also because Alexa can be lousy at its job. Looking back through my own Alexa history — which contains recordings of every interaction I’ve ever had with it — reveals a handful of false positives that shouldn’t have triggered the assistant in the first place. In some cases, a droning voice on TV said a word that kinda-sorta sounded like “Alexa,” which prompted the assistant to try and interpret what else the person was saying. In others, the recording stored by Amazon didn’t include the Alexa wake word at all, leaving me perplexed as to why Alexa was trying to listen in the first place. It probably won’t come as a surprise that most of the recordings that lacked an audible “Alexa” were snippets from a television show or a conversation that was never meant for Amazon to hear.
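A toy model shows why those false wakes happen: detectors of this kind fire whenever an acoustic similarity score clears a fixed threshold, so phrases that merely sound like "Alexa" can slip through. The scores and cutoff below are invented, not Amazon's:

```python
# Toy model of wake-word false positives: trigger whenever a similarity
# score clears a fixed threshold. Scores and cutoff are invented.
WAKE_THRESHOLD = 0.70  # assumed confidence cutoff

def wakes_up(similarity_score: float) -> bool:
    """Trigger whenever the score clears the bar, right or wrong."""
    return similarity_score >= WAKE_THRESHOLD

overheard = {"Alexa": 0.98, "Alexandra": 0.81, "a Lexus": 0.74,
             "I'll text her": 0.72, "elections": 0.55}
for phrase, score in overheard.items():
    if wakes_up(score):
        print(f"woke up on {phrase!r} (score {score:.2f})")
# Raising the threshold misses real commands; lowering it adds false wakes.
```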

Even now, Alexa is still a more mysterious figure in my life than I’d like. It once laughed at me out of nowhere in the middle of the night, a profoundly creepy feat that very nearly made me hurl my Echo out a window. My stored history also doesn’t include the handful of times when I’ve seen my Echo light up blue out of the corner of my eye. Alexa’s virtual ears clearly perked up, but the assistant never bothered to respond. Since Amazon’s Alexa history only seems to keep records of interactions where Alexa offers a verbal response, I can’t fully explain what’s going on in those moments when Alexa is triggered but remains silent. (Maybe it was one of those silent, Alexa-triggering signals we’ve known about for months.)

Considering the number of accidental triggers and responses in my history, it’s not hard to imagine how the right kind of conversation could have prompted Alexa to send a recording to a random contact. As Amazon says, this was incredibly unlikely, but as long as Alexa remains aggressive in attempting to pull signals from noise, these situations will never be completely impossible.

Amazon has said that it’s working on ways to make these kinds of situations even less likely, a tacit admission that Alexa still needs work. Even that may be an understatement. Through the process of recording a family and sending that recording to someone else, Alexa was doing exactly what it was designed to do: It listened for signals regardless of their origin and took action based on those signals. Had Alexa been able to more fully understand what was being said in that conversation, it’s likely this whole thing would never have happened. While Alexa has become one of the dominant voice assistants out there, it is in some ways surprisingly unsophisticated, and the only way to prevent these situations from happening again is to make Alexa smarter. Amazon is clearly keen to take on the task, but until the company’s engineers push some new boundaries, don’t be surprised if Alexa continues to surprise with its occasional incompetence.

Tech

via Engadget http://www.engadget.com

May 25, 2018 at 01:06PM

Uber’s Self-Driving Car Saw the Woman It Killed, Report Says

https://ift.tt/2ILEwX7

The federal investigators examining Uber’s fatal self-driving crash in March released a preliminary report this morning. It lays out the facts of the collision that killed a pedestrian in Tempe, Arizona, and explains what the car actually saw that night.

The National Transportation Safety Board won’t determine the cause of the crash or issue safety recommendations to stop others from happening until it releases its final report, but this first look makes two things clear: Engineering a car that drives itself is very hard. And any self-driving car developer that is currently relying on a human operator to monitor its testing systems—to keep everyone on the road safe—should be extraordinarily careful about the design of that system.

The report says that the Uber car, a modified Volvo XC90 SUV, had been in autonomous mode for 19 minutes and was driving at about 40 mph when it hit 49-year-old Elaine Herzberg as she was walking her bike across the street. The car’s radar and lidar sensors detected Herzberg about six seconds before the crash, first identifying her as an unknown object, then as a vehicle, and then as a bicycle, each time adjusting its expectations for her path of travel.
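In effect, every reclassification threw away the system's previous guess about where she was headed. A schematic sketch of how that plays out (the labels come from the report, but the intermediate timings and motion rules are invented, not Uber's software):

```python
# Each time the tracker changes its mind about what an object is, it also
# revises its prediction of where the object is going. Motion rules and
# intermediate timings here are illustrative only.
PREDICTED_MOTION = {
    "unknown": "static obstacle, no travel path predicted",
    "vehicle": "moving with traffic, parallel to the lane",
    "bicycle": "riding along the roadway",
}

classifications = [(6.0, "unknown"), (4.0, "vehicle"), (2.0, "bicycle")]
for seconds_to_impact, label in classifications:
    print(f"T-{seconds_to_impact:.0f}s: {label!r}; expects: {PREDICTED_MOTION[label]}")
# Every switch discards the prior trajectory estimate, so the system never
# settles on "pedestrian crossing our path" in time to plan around her.
```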

About a second before impact, the report says, “the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.” Uber, however, does not allow its system to make emergency braking maneuvers on its own. Rather than risk “erratic vehicle behavior”—like slamming on the brakes or swerving to avoid a plastic bag—Uber relies on its human operator to watch the road and take control when trouble arises.
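The report thus describes a planner that could decide to brake but was gated off from acting. A minimal sketch of that policy, with hypothetical names and structure:

```python
# Minimal sketch of the actuation policy the report describes: the planner
# can decide an emergency brake is needed, but acting on it is gated off
# and left to the human operator. Names and structure are hypothetical.
def act_on_plan(emergency_brake_needed: bool,
                autonomous_braking_enabled: bool = False,  # disabled by Uber
                operator_alert_available: bool = False):   # also absent
    if not emergency_brake_needed:
        return "continue driving"
    if autonomous_braking_enabled:
        return "brake hard now"
    if operator_alert_available:
        return "sound an alarm and wait for the operator"
    return "do nothing and hope the operator is watching"

print(act_on_plan(emergency_brake_needed=True))
# -> "do nothing and hope the operator is watching"
```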

Furthermore, Uber had turned off the Volvo’s built-in automatic emergency braking system to avoid clashes with its own tech. This is standard practice, experts say. “The vehicle needs one master,” says Raj Rajkumar, an electrical engineer who studies autonomous systems at Carnegie Mellon University. “Having two masters could end up triggering conflicting commands.” But that works a lot better when the master of the moment works the way it’s meant to.

1.3 seconds before hitting Elaine Herzberg, Uber’s car decided emergency braking was necessary—but didn’t have the ability to do that on its own. The yellow bands show distance in meters, and the purple indicates the car’s path.

NTSB

The Robot and the Human

These details of the fatal crash point to at least two serious flaws in Uber’s self-driving system: software that’s not yet ready to replace humans, and humans who were ill-equipped to keep their would-be replacements from doing harm.

Today’s autonomous systems rely on machine learning: They “learn” to classify and respond to situations based on datasets of images and behaviors. The software is shown thousands of images of a cyclist, or a skateboarder, or an ambulance, until it learns to identify those things on its own. The problem is that it’s hard to find images of every sort of situation that could happen in the wild. Can the system distinguish a tumbleweed from a toddler? A unicyclist from a cardboard box? In some of these situations, it should be able to predict the object’s movements, and respond accordingly. In others, the vehicle should ignore the tumbleweed, refrain from a sudden, dangerous braking action, and keep on rolling.
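Concretely, the response hinges entirely on the predicted class, so anything the training data never covered falls through to a default. A toy policy, with classes and actions invented for illustration:

```python
# Toy response policy for the "tumbleweed vs. toddler" problem: the action
# depends entirely on the predicted class. Classes and actions invented.
RESPONSE = {
    "pedestrian": "brake and yield",
    "cyclist": "slow down, predict a path along the roadway",
    "plastic bag": "ignore and keep rolling",
    "tumbleweed": "ignore and keep rolling",
}

def respond(predicted_class: str) -> str:
    return RESPONSE.get(predicted_class, "unfamiliar object: track cautiously")

print(respond("plastic bag"))  # -> ignore and keep rolling
print(respond("unicyclist"))   # -> unfamiliar object: track cautiously
```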

Herzberg, walking a bike loaded with plastic bags and moving perpendicular to the car, outside the crosswalk and in a poorly lit spot, challenged Uber’s system. “This points out that, a) classification is not always accurate, which all of us need to be aware of,” says Rajkumar. “And b) Uber’s testing likely did not have any, or at least not many, images of pedestrians with this profile.”

Solving this problem is a matter of capturing all the strange, unpredictable edge cases on public roads, and figuring out how to train systems to deal with them. It’s the engineering problem at the heart of this industry. It’s supposed to be hard. The car won’t get it right every time, especially not in these early days.

That’s why Uber relied on human safety drivers. And it’s what makes the way the company structured its testing program troubling. At the time, the company’s human operators were paid about $24 an hour (and given plenty of energy drinks and snacks) to work eight-hour shifts behind the wheel. They were told what routes to drive, and what to expect from the software. Above all, they were instructed to keep their eyes on the road at all times, to remain ready to grab the wheel or stomp the brakes. Uber has caught (and fired) drivers who were looking at their phones while on the job—and that shouldn’t surprise anybody.

“We know that drivers, that humans in general, are terrible overseers of highly automated systems,” says Bryan Reimer, an engineer who studies human-machine interaction at MIT. “We’re terrible supervisors. The aviation industry, the nuclear power industry, the rail industry have shown this for decades.”

Yet Uber placed the burden for preventing crashes on the preoccupied shoulders of humans. That’s the tremendous irony here: In its quest to eliminate the humans who cause more than 90 percent of American crashes, which kill about 40,000 people every year, Uber hung its safety system on the ability of a particular human to be perfect.

There are other ways to test potentially life-saving tech. Some autonomous developers require two people in every testing vehicle, one to sit behind the wheel and another to take notes on specific events and system failures during the drive. (Uber originally had two operators in each car, but switched to solo drivers late last year.) The safety driver behind the wheel of the crashed Uber told NTSB investigators she wasn’t watching the road in the moments leading up to the collision because she was looking at the car’s interface—which is built into the center console, outside a driver’s natural line of sight. If another human were handling that job, which includes noting observations about the car’s behavior, the person behind the wheel might have spotted Herzberg—and saved her life.

Or, Uber could have given its system the ability to monitor a driver’s attentiveness to the road, and emit a beep or a buzz if it discovers the person behind the wheel isn’t staying on task. Cadillac’s semi-autonomous Super Cruise system uses an infrared camera on the steering column to watch a driver’s head position and issue warnings when they look away from the road for too long.
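A minimal sketch of such a monitor, assuming an invented grace period and sampling rate rather than Cadillac's actual parameters:

```python
# Sketch of a camera-based attention monitor like the one described above:
# warn once the driver's gaze has been off the road longer than a grace
# period. The grace period and sample rate are assumed values.
OFF_ROAD_GRACE_S = 2.0  # assumed allowance before warning
SAMPLE_DT = 0.5         # assumed seconds between gaze samples

def monitor(gaze_on_road: list[bool]) -> list[str]:
    """gaze_on_road[i] is True when the driver is watching the road."""
    warnings, off_road = [], 0.0
    for i, eyes_on_road in enumerate(gaze_on_road):
        off_road = 0.0 if eyes_on_road else off_road + SAMPLE_DT
        if off_road > OFF_ROAD_GRACE_S:
            warnings.append(f"t={i * SAMPLE_DT:.1f}s: beep, eyes on the road!")
    return warnings

# Driver glances at the center console for roughly three seconds:
print(monitor([True, True, False, False, False, False, False, True]))
```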

Uber’s system didn’t even have a way to alert the driver when it determined emergency braking was necessary, the report says. Many cars on the market today can detect imminent collisions, and alert the driver with flashing red lights or loud beeping. That sort of feature could have helped here. “That is kind of mind-boggling, that the vehicle system did nothing and they had to depend entirely on the driver,” says Steven Shladover, a UC Berkeley research engineer who has spent decades studying automated systems.
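Production forward-collision warnings typically key off time-to-collision. Plugging in the report's own numbers, about 40 mph with braking flagged 1.3 seconds before impact, and an assumed 2.5-second warning threshold (a generic figure, not from any particular car) shows a stock system would have been blaring:

```python
# Rough numbers from the report: ~40 mph, braking flagged 1.3 s out.
# The 2.5 s time-to-collision warning threshold is an assumed value.
MPH_TO_MS = 0.44704
speed_ms = 40 * MPH_TO_MS              # ~17.9 m/s
dist_at_decision = speed_ms * 1.3      # ~23 m left when braking was flagged

def should_warn(distance_m: float, closing_speed_ms: float,
                ttc_threshold_s: float = 2.5) -> bool:
    """Warn the driver when time-to-collision drops below the threshold."""
    return distance_m / closing_speed_ms < ttc_threshold_s

print(f"{dist_at_decision:.0f} m to impact at the moment of decision")
print(should_warn(dist_at_decision, speed_ms))  # True: warning would fire
```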

Uber says it’s working on its “safety culture,” and has not yet resumed testing, which it paused after the crash. “Over the course of the last two months, we’ve worked closely with the NTSB,” a spokesperson said in a statement. “As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program.” The company hired former NTSB chair and aviation expert Christopher Hart earlier this month to advise it on safety systems.

Whatever changes Uber makes, they won’t appear in Tempe anytime soon. The company plans to resume testing in Pittsburgh this summer, home to its R&D center. But it’s shutting down its Arizona operation altogether. The move had very human consequences. Uber laid off about 300 workers in the state—many of them safety drivers.

Tech

via Wired Top Stories https://ift.tt/2uc60ci

May 24, 2018 at 02:48PM