Russian unit, GRU officer linked to 2014 shoot-down of airliner over Ukraine

Eliot Higgins (center), founder of online investigation group Bellingcat, addresses a press conference on findings of research into Malaysia Airlines flight MH17 in Scheveningen, the Netherlands, on May 25, 2018. The Netherlands and Australia on May 25 accused Moscow of being behind the 2014 shooting down of flight MH17 over war-torn eastern Ukraine with the loss of 298 lives, a move which may trigger legal action.

REMKO DE WAAL / Getty Images

Officials from the Netherlands and Australia today formally stated that they are convinced Russia was responsible for deploying the “Buk” anti-aircraft missile system that shot down Malaysia Airlines Flight 17 (MH17) in 2014. The announcement came a day after a Dutch-led joint investigation team released a report concluding that the missile belonged to the Russian Army’s 53rd anti-aircraft brigade, based outside the city of Kursk, north of the Ukrainian border.

Physical evidence collected by investigators, along with radar track and flight recorder data, pointed to the use of a specific warhead type associated with Buk surface-to-air missiles. Paint transferred from missile fragments to the aircraft’s fuselage was matched to recovered parts of the missile.

Russia has long denied that any of its military equipment ever crossed the border into eastern Ukraine, and it has presented several alternative scenarios—including blaming the downing of the airliner on a Ukrainian Air Force pilot. Russia at first claimed to have radar evidence proving that allegation, then said the evidence had been lost—only to claim to have found it again just two days before the Joint Investigation Team’s 2016 press conference. The separate target that Russia claimed to have identified on radar was actually part of MH17’s fuselage breaking away after the missile detonated.


via Ars Technica

May 25, 2018 at 04:49PM


FBI tells router users to reboot now to kill malware infecting 500k devices


The FBI is advising users of consumer-grade routers and network-attached storage devices to reboot them as soon as possible to counter Russian-engineered malware that has infected hundreds of thousands of devices.


via Ars Technica

May 25, 2018 at 01:26PM


Alexa’s recording snafu was improbable, but inevitable


Amazon’s Alexa recently made headlines for one of the strangest consumer AI mistakes we’ve ever heard of: A family in Portland, Oregon claims that the company’s virtual assistant recorded a conversation and sent it to a seemingly random person in the husband’s contact list. Alexa didn’t just make one slip-up — it made several that, when combined, led to a pretty remarkable breach of privacy. The company’s explanation, provided to news outlets yesterday, makes clear just how unlikely this whole situation was:

“Echo woke up due to a word in background conversation sounding like ‘Alexa,’” the statement reads. “Then, the subsequent conversation was heard as a ‘send message’ request. At which point, Alexa said out loud, ‘To whom?’ At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, ‘[contact name], right?’ Alexa then interpreted background conversation as ‘right.’”
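
Laid out as code, the statement describes a four-stage pipeline in which every stage accepted a low-confidence match. The sketch below is a toy model, not Amazon’s actual system — the function names, similarity heuristic, and threshold are all invented for illustration:

```python
# Toy model of the cascade Amazon describes: four stages, each of which
# tolerates a sloppy match against background speech. All names and the
# similarity heuristic here are invented for illustration.

def fuzzy_match(heard: str, expected: str, threshold: float = 0.5) -> bool:
    """Crude stand-in for speech recognition: fraction of the expected
    phrase's characters that also occur in what was heard."""
    expected_chars = set(expected.lower())
    shared = len(set(heard.lower()) & expected_chars)
    return shared / max(len(expected_chars), 1) >= threshold

def alexa_cascade(background_speech, contacts):
    """Walk the four stages; return the action taken, or None."""
    if not fuzzy_match(background_speech[0], "alexa"):
        return None                                  # 1: wake word
    if not fuzzy_match(background_speech[1], "send message"):
        return None                                  # 2: intent
    recipient = next((c for c in contacts
                      if fuzzy_match(background_speech[2], c)), None)
    if recipient is None:
        return None                                  # 3: contact lookup
    if not fuzzy_match(background_speech[3], "right"):
        return None                                  # 4: confirmation
    return f"send recording to {recipient}"

# Four innocuous fragments of conversation are enough to fire every stage:
print(alexa_cascade(["a lex saw", "send a message", "bob", "bright"],
                    ["Bob", "Alice"]))               # send recording to Bob
```

Each stage alone fails rarely; the point is that sending a recording requires all four stages to misfire on the same conversation — improbable per interaction, but not impossible across tens of millions of devices.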

That is, without question, absolutely wild. Given a handful of factors at play here, though, it was likely inevitable that Alexa would’ve goofed spectacularly at some point. I’m not a betting man, but let’s look at the numbers: Right after Christmas, Amazon confirmed that it has sold “tens of millions” of Alexa-enabled devices around the world. New research indicates that Google has for the first time overtaken Amazon as the world’s premier purveyor of smart speakers, but no matter — people are or were talking to at least 20 million Alexa devices around the world. That amounts to a huge number of interactions for Alexa to interpret every day, and it was only a matter of time before the right set of circumstances produced a situation that Alexa just couldn’t handle.

Alexa’s cascading failure here isn’t simply due to a numbers game, either. It’s also because Alexa can be lousy at its job. Looking back through my own Alexa history — which contains recordings of every interaction I’ve ever had with it — reveals a handful of false positives that shouldn’t have triggered the assistant in the first place. In some cases, a droning voice on TV said a word that kinda-sorta sounded like “Alexa,” which prompted the assistant to try and interpret what else the person was saying. In others, the recording stored by Amazon didn’t include the Alexa wake word at all, leaving me perplexed as to why Alexa was trying to listen in the first place. It probably won’t come as a surprise that most of the recordings that lacked an audible “Alexa” were snippets from a television show or a conversation that was never meant for Amazon to hear.

Even now, Alexa is still a more mysterious figure in my life than I’d like. It once laughed at me out of nowhere in the middle of the night, a profoundly creepy feat that very nearly made me hurl my Echo out a window. My stored history also doesn’t include the handful of times when I’ve seen my Echo light up blue out of the corner of my eye. Alexa’s virtual ears clearly perked up, but the assistant never bothered to respond. Since Amazon’s Alexa history only seems to keep records of interactions where Alexa offers a verbal response, I can’t fully explain what’s going on in those moments when Alexa is triggered but remains silent. (Maybe it was one of those silent, Alexa-triggering signals we’ve known about for months.)

Considering the number of accidental triggers and responses in my history, it’s not hard to imagine how the right kind of conversation could have prompted Alexa to send a recording to a random contact. As Amazon says, this was incredibly unlikely, but as long as Alexa remains aggressive in attempting to pull signals from noise, these situations will never be completely impossible.

Amazon has said that it’s working on ways to make these kinds of situations even less likely, a tacit admission that Alexa still needs work. Even that may be an understatement. Through the process of recording a family and sending that recording to someone else, Alexa was doing exactly what it was designed to do: It listened for signals regardless of their origin and took action based on those signals. Had Alexa been able to more fully understand what was being said in that conversation, it’s likely this whole thing would never have happened. While Alexa has become one of the dominant voice assistants out there, it is in some ways surprisingly unsophisticated, and the only way to prevent these situations from happening again is to make Alexa smarter. Amazon is clearly keen to take on the task, but until the company’s engineers push some new boundaries, don’t be surprised if Alexa continues to surprise with its occasional incompetence.


via Engadget

May 25, 2018 at 01:06PM


Uber’s Self-Driving Car Saw the Woman It Killed, Report Says

The federal investigators examining Uber’s fatal self-driving crash in March released a preliminary report this morning. It lays out the facts of the collision that killed a pedestrian in Tempe, Arizona, and explains what the car actually saw that night.

The National Transportation Safety Board won’t determine the cause of the crash or issue safety recommendations to stop others from happening until it releases its final report, but this first look makes two things clear: Engineering a car that drives itself is very hard. And any self-driving car developer that is currently relying on a human operator to monitor its testing systems—to keep everyone on the road safe—should be extraordinarily careful about the design of that system.

The report says that the Uber car, a modified Volvo XC90 SUV, had been in autonomous mode for 19 minutes and was driving at about 40 mph when it hit 49-year-old Elaine Herzberg as she was walking her bike across the street. The car’s radar and lidar sensors detected Herzberg about six seconds before the crash, first identifying her as an unknown object, then as a vehicle, and then as a bicycle, each time adjusting its expectations for her path of travel.

About a second before impact, the report says, “the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.” Uber, however, does not allow its system to make emergency braking maneuvers on its own. Rather than risk “erratic vehicle behavior”—like slamming on the brakes or swerving to avoid a plastic bag—Uber relies on its human operator to watch the road and take control when trouble arises.

Furthermore, Uber had turned off the Volvo’s built-in automatic emergency braking system to avoid clashes with its own tech. This is standard practice, experts say. “The vehicle needs one master,” says Raj Rajkumar, an electrical engineer who studies autonomous systems at Carnegie Mellon University. “Having two masters could end up triggering conflicting commands.” But that works a lot better when the master of the moment works the way it’s meant to.

1.3 seconds before hitting Elaine Herzberg, Uber’s car decided emergency braking was necessary—but didn’t have the ability to do that on its own. The yellow bands show distance in meters, and the purple indicates the car’s path.


The Robot and the Human

These details of the fatal crash point to at least two serious flaws in Uber’s self-driving system: software that’s not yet ready to replace humans, and humans who were ill-equipped to keep their would-be replacements from doing harm.

Today’s autonomous systems rely on machine learning: They “learn” to classify and respond to situations based on datasets of images and behaviors. The software is shown thousands of images of a cyclist, or a skateboarder, or an ambulance, until it learns to identify those things on its own. The problem is that it’s hard to find images of every sort of situation that could happen in the wild. Can the system distinguish a tumbleweed from a toddler? A unicyclist from a cardboard box? In some of these situations, it should be able to predict the object’s movements, and respond accordingly. In others, the vehicle should ignore the tumbleweed, refrain from a sudden, dangerous braking action, and keep on rolling.
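
To make that edge-case problem concrete, here is a deliberately tiny sketch — invented features and numbers, nothing like a real perception stack — of why a classifier with no “unknown, be cautious” option can flip-flop on inputs unlike anything in its training data:

```python
# Toy nearest-centroid classifier over two invented features
# (width in meters, speed in mph). A stand-in for the idea in the text:
# a model can only assign the labels it was trained on, so an unfamiliar
# profile gets forced into whichever known class happens to be nearest.

TRAINING = {
    "pedestrian": [(0.5, 1.2), (0.6, 1.0), (0.4, 1.4)],
    "cyclist":    [(0.7, 5.0), (0.8, 6.1), (0.6, 5.5)],
    "vehicle":    [(2.0, 12.0), (2.2, 15.0), (1.8, 10.0)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Assign the label with the closest centroid -- no 'unknown' option."""
    fx, fy = features
    return min(CENTROIDS,
               key=lambda lbl: (CENTROIDS[lbl][0] - fx) ** 2
                             + (CENTROIDS[lbl][1] - fy) ** 2)

# A profile between the known classes: two slightly different (noisy)
# readings of the same object land on different sides of the boundary.
print(classify((0.9, 3.0)))  # pedestrian
print(classify((0.9, 3.5)))  # cyclist
```

A small change in one noisy feature flips the label entirely, which echoes the report’s account of the system reclassifying Herzberg from unknown object to vehicle to bicycle, revising its path prediction each time.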

Herzberg, walking a bike loaded with plastic bags and moving perpendicular to the car, outside the crosswalk and in a poorly lit spot, challenged Uber’s system. “This points out that, a) classification is not always accurate, which all of us need to be aware of,” says Rajkumar. “And b) Uber’s testing likely did not have any, or at least not many, images of pedestrians with this profile.”

Solving this problem is a matter of capturing all the strange, unpredictable edge cases on public roads, and figuring out how to train systems to deal with them. It’s the engineering problem at the heart of this industry. It’s supposed to be hard. The car won’t get it right every time, especially not in these early days.

That’s why Uber relied on human safety drivers. And it’s what makes the way they structured their program troubling. At the time, the company’s human operators were paid about $24 an hour (and given plenty of energy drinks and snacks) to work eight-hour shifts behind the wheel. They were told what routes to drive, and what to expect from the software. Above all, they were instructed to keep their eyes on the road at all times, to remain ready to grab the wheel or stomp the brakes. Uber has caught (and fired) drivers who were looking at their phones while on the job—and that shouldn’t surprise anybody.

“We know that drivers, that humans in general, are terrible overseers of highly automated systems,” says Bryan Reimer, an engineer who studies human-machine interaction at MIT. “We’re terrible supervisors. The aviation industry, the nuclear power industry, the rail industry have shown this for decades.”

Yet Uber placed the burden for preventing crashes on the preoccupied shoulders of humans. That’s the tremendous irony here: In its quest to eliminate the humans who cause more than 90 percent of American crashes, which kill about 40,000 people every year, Uber hung its safety system on the ability of a particular human to be perfect.

There are other ways to test potentially life-saving tech. Some autonomous developers require two people in every testing vehicle, one to sit behind the wheel and another to take notes on specific events and system failures during the drive. (Uber originally had two operators in each car, but switched to solo drivers late last year.) The safety driver behind the wheel of the crashed Uber told NTSB investigators she wasn’t watching the road in the moments leading up to the collision because she was looking at the car’s interface—which is built into the center console, outside a driver’s natural line of sight. If another human were handling that job, which includes noting observations about the car’s behavior, the person behind the wheel might have spotted Herzberg—and saved her life.

Or, Uber could have given its system the ability to monitor a driver’s attentiveness to the road, and emit a beep or a buzz if it discovers the person behind the wheel isn’t staying on task. Cadillac’s semi-autonomous SuperCruise system uses an infrared camera on the steering column to watch a driver’s head position, and issue warnings when they look away from the road for too long.
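
The core of such a driver-monitoring feature reduces to a timeout on continuous off-road gaze. As a minimal sketch (the sampling rate and two-second threshold are illustrative assumptions, not Cadillac’s published parameters):

```python
def attention_warnings(gaze_on_road, fps=10, max_away_seconds=2.0):
    """Given per-frame booleans from a driver-facing camera (True when the
    driver's gaze is on the road), return the frame indices at which a
    warning should fire: gaze continuously off-road past the limit."""
    limit = int(max_away_seconds * fps)   # frames of tolerated look-away
    warnings, away = [], 0
    for i, on_road in enumerate(gaze_on_road):
        away = 0 if on_road else away + 1
        if away == limit:                 # fire once per look-away episode
            warnings.append(i)
    return warnings

# 0.5 s on the road, then 2.5 s looking away: one warning, two seconds in.
print(attention_warnings([True] * 5 + [False] * 25))  # [24]
```

Brief glances reset the counter, so only sustained inattention — like minutes spent watching a center-console interface — would trigger the alert.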

Uber’s system didn’t even have a way to alert the driver when it determined emergency braking was necessary, the report says. Many cars on the market today can detect imminent collisions, and alert the driver with flashing red lights or loud beeping. That sort of feature could have helped here. “That is kind of mind-boggling, that the vehicle system did nothing and they had to depend entirely on the driver,” says Steven Shladover, a UC Berkeley research engineer who has spent decades studying automated systems.

Uber says it’s working on its “safety culture,” and has not yet resumed testing, which it paused after the crash. “Over the course of the last two months, we’ve worked closely with the NTSB,” a spokesperson said in a statement. “As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program.” The company hired former NTSB chair and aviation expert Christopher Hart earlier this month to advise it on safety systems.

Whatever changes Uber makes, they won’t appear in Tempe anytime soon. The company plans to resume testing in Pittsburgh this summer, home to its R&D center. But it’s shutting down its Arizona operation altogether. The move had very human consequences. Uber laid off about 300 workers in the state—many of them safety drivers.


via Wired Top Stories

May 24, 2018 at 02:48PM


Apple Blocks Valve App That Lets You Play Steam Games On Your Phone

Earlier this month, Valve announced an official Steam Link app that lets users play Steam games on their phones. Last week, the app came out on Android, with an iOS version nowhere to be found. Now Valve has explained why.

Read more…


via Kotaku

May 24, 2018 at 07:40PM


Google will always do evil


One day in late April or early May, Google removed the phrase “don’t be evil” from its code of conduct. After 18 years as the company’s motto, those three words and chunks of their accompanying corporate clauses were unceremoniously deleted from the record, save for a solitary, uncontextualized mention in the document’s final sentence.

Google didn’t advertise this change. In fact, the code of conduct states it was last updated on April 5th. The “don’t be evil” exorcism clearly took place well after that date.

Google has chosen to actively distance itself from the uncontroversial, totally accepted tenet of not being evil, and it’s doing so in a shady (and therefore completely fitting) way. After nearly two decades of trying to live up to its motto, it looks like Google is ready to face reality.

In order for Google to be Google, it has to do evil.

Exterior view of Google office with Android Marshmallow

This is true for every major technology company. Apple, Facebook, Amazon, Tesla, Microsoft, Sony, Twitter, Samsung, Nintendo, Dell, HP, Toshiba — not one of these organizations can compete in the market without engaging in unethical, inhumane and invasive practices. It’s a sliding scale: The larger the company and the more integrated it is in our everyday lives, the more evil it can be.

Take Facebook for example. CEO Mark Zuckerberg will stand onstage at F8 and wax poetic about the beauty of connecting billions of people across the globe, while at the same time patenting technologies to determine users’ social classes and enable discrimination in the lending process, and allowing housing advertisers to exclude racial and ethnic groups, or families with women and children, from their listings.

That’s not even mentioning the Cambridge Analytica scandal and the 87 million Facebook users whose personal information ended up, without permission, in the hands of an overseas political group during the contentious 2016 presidential election.

Mark Zuckerberg on stage at Facebook's F8 Developers Conference 2015

And then there’s Apple, the largest public company in the world. It’s also one of the most secretive, but even so, it’s been caught engaging in evil. Apple is one of the most notorious tech names when it comes to child labor and inhumane working conditions. It’s been tied to child labor in Africa, and the Chinese factories where its phones are assembled are frequently cited over illegal and lethal practices. At least nine workers at Apple’s key factory partner, Foxconn Technology Group, committed suicide in 2010, prompting international outrage. Yet just this year, Bloomberg found iPhone assembly workers in the Catcher Technology Co. factory were required to stand for up to 10 hours a day in heinous conditions, handling chemicals, dealing with loud machines and being exposed to minuscule metal particles without proper masks, gloves, goggles or earplugs. After their shifts, employees lived in dirty dorms without showers or hot water.

Apple isn’t the only tech company to work with Foxconn or Catcher, and it isn’t the only one accused of encouraging inhumane assembly lines. In 2016, the AP reported more than 200 workers from a single Samsung production line had died or fallen seriously ill, many being diagnosed with leukemia, lymphoma and MS, despite being relatively young — in their 20s and early 30s. Samsung has denied any involvement in the lethal trend.

There’s a simple reason major tech companies often look the other way after these scandals, brushing concerns aside as they continue to work with factories known for employing children and operating in barbaric ways. It’s necessity. In order to remain competitive, Apple needs 200 million new iPhones with each updated model, and the most profitable way to make that happen is to partner with Foxconn or Catcher. In Apple’s math, the bottom line outweighs the well-being of workers on the assembly line.


The people who actually work at Apple or any major tech company are not monsters. Ask any Apple employee about child labor in iPhone factories and they’ll assuredly express disgust and outrage — but the company itself is far more powerful than its individualized workforce.

Which brings us back to Google. Earlier this month, roughly a dozen employees quit over the company’s involvement in Project Maven, a military program that aims to use AI systems to analyze drone footage. Though Google insists the technology will be applied to “non-offensive uses only,” some employees are concerned about its potential use in drone strikes. On top of those who quit, nearly 4,000 Google employees have signed a petition demanding the company pull out of Project Maven and refuse to work with the military in the future.

The chances of Google actually cutting ties with the US military are minuscule. Besides, quitting wouldn’t stop Project Maven from moving forward; it would only cut Google out of the process, passing the future of AI drone technology to another company. At least with Google, there’s the underlying promise that these systems won’t be evil.

Well. That was true until just a few weeks ago.

The reason major technology companies have so much power to be evil is because many of them have found ways to do good in our lives. These organizations are big for a reason — Google is the backbone of the internet; Apple is a leader in gadget design and ecosystems; Samsung produces a vast range of devices for a wide swath of people; Facebook truly does connect the world. But as a tech company’s propensity to do good grows, so too does its ability to do terrible things. That’s why Google’s motto — “don’t be evil” — was such a poignant reminder of the humanity necessary to keep these companies in check. Emphasis on the was.

Images: Getty (Google building); pestoverde / Flickr (Mark Zuckerberg); Bobby Yip / Reuters (Foxconn factory)


via Engadget

May 24, 2018 at 01:36PM


Researchers identify a protein that viruses use as gateway into cells

An electron micrograph of multiple copies of the chikungunya virus.

The word “chikungunya” (chik-en-gun-ye) comes from Kimakonde, the language spoken by the Makonde people in southeast Tanzania and northern Mozambique. It means “to become contorted,” because that’s what happens to people who get infected. The contortion is a result of severe and debilitating joint pain. Chikungunya was first identified in Tanzania in 1952, but by now cases have been reported around the globe. There is no cure; the CDC recommends that “travelers can protect themselves by preventing mosquito bites.”

Chikungunya is only one of a family of viruses transmitted by mosquitoes for which we have no targeted treatment. That may be partly because we didn’t know how these viruses get into our cells. But for chikungunya, we’ve just found one of the proteins responsible.

Identification via deletion

Researchers used the CRISPR-Cas9 DNA editing system to delete more than 20,000 mouse genes—a different one in each cell in a dish. Then they added chikungunya to the dish, isolated the cells that didn’t get infected, and looked to see which gene they lacked. That gene would encode a protein required for viral infection, since infection didn’t happen in its absence.
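
The screen’s inference reduces to a set operation: every cell lacks exactly one gene, and the genes missing from the cells that stayed uninfected are the candidate entry factors. A toy sketch of that logic (the gene names and infection model below are hypothetical, not the study’s data):

```python
def knockout_screen(all_genes, infected_without):
    """infected_without(gene) -> True if a cell lacking that gene still
    gets infected. Return the genes whose loss blocks infection."""
    return {g for g in all_genes if not infected_without(g)}

# Toy model: infection requires a single entry receptor, Mxra8.
genes = {"Mxra8", "GeneA", "GeneB"}                    # hypothetical pool
infected_without = lambda knocked_out: knocked_out != "Mxra8"

print(knockout_screen(genes, infected_without))        # {'Mxra8'}
```

In the real screen the “pool” is one guide RNA per gene spread across thousands of cells, and the readout is sequencing the uninfected survivors rather than a direct lookup, but the inference is the same: absence of infection implicates the missing gene.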

In this way they found a gene encoding an adhesion molecule that was required for chikungunya to infect these cells. Similar genes are found in other mammals, birds, and amphibians, and they are homologous to an adhesion molecule used as an entry receptor for another class of viruses. This particular gene goes by the catchy name of Mxra8. Interestingly, no similar protein is found in mosquitoes.

Since the scientists were using a special “cell-culture-adapted vaccine strain” of chikungunya, they repeated their experiment with an Asian strain and a West African strain of the virus. Neither could infect cells lacking Mxra8. Nor could some other viruses in the same family (called arthritogenic alphaviruses): Ross River virus, Mayaro virus, Barmah Forest virus, and O’nyong nyong virus. However, an East/Central/South African strain of chikungunya and a few others in the same family did not seem to be quite as dependent on Mxra8.

In human cells, too

Results were not limited to mouse cells in petri dishes. They also held true in human cells of the various types infected by chikungunya, like fibroblasts, osteoblasts, chondrocytes, and skeletal muscle cells. Humans have four versions of Mxra8, and knocking out each of them diminished the ability of chikungunya to infect the cells. Mice treated with antibodies to Mxra8 had reduced levels of infection—the antibodies bind to the Mxra8 molecules on the surface of the mouse cells, so the virus can’t access it to get in.

Mxra8 doesn’t seem to be required for viral replication, only for viral entry into cells. Further experiments that home in on exactly where the virus binds to it could hopefully lead to the development or identification of small molecules that block the interaction, barring the virus from getting into the cells and preventing infection and disease.

Nature, 2018. DOI: 10.1038/s41586-018-0121-3 (About DOIs).


via Ars Technica

May 24, 2018 at 12:03PM
