Suspect arrested for cyber bank heists that amassed $1.2 billion
https://ift.tt/2DW4I9Z
Europol announced today that the suspected leader of an international bank heist scheme has been arrested. The arrest was the result of an investigation involving a number of cooperating law enforcement agencies, including the Spanish National Police, Europol, the FBI and the Romanian, Belarusian and Taiwanese authorities. The suspect was arrested in Alicante, Spain.
Since the crime group began its cyberattacks in 2013, they’ve hit more than 100 financial institutions in 40 countries around the world and are said to have stolen over $1.2 billion. The group started with a malware campaign called Anunak, which evolved into more sophisticated versions known as Carbanak and, later, Cobalt. The attackers would send phishing emails with malicious attachments to bank employees, and once the malware was downloaded, it gave the hackers control over the banks’ machines and access to the servers that controlled ATMs.
They used three main methods to fraudulently obtain cash. In some cases, they would instruct ATMs to dispense cash at certain times, and members of the crime group would wait nearby to grab the money once it was released. They also took advantage of money transfer systems, and in other instances would inflate bank account balances and have money mules withdraw the inflated amounts from ATMs. The stolen cash was ultimately laundered through cryptocurrencies.
"This global operation is a significant success for international police cooperation against a top level cybercriminal organisation," Steven Wilson, head of Europol’s European Cybercrime Centre, said in a statement. "The arrest of the key figure in this crime group illustrates that cybercriminals can no longer hide behind perceived international anonymity. This is another example where the close cooperation between law enforcement agencies on a worldwide scale and trusted private sector partners is having a major impact on top level cybercriminality."
Flat Earth advocate finally launches his homemade rocket
https://ift.tt/2pFafxn
For years, "Mad" Mike Hughes has not only insisted that the Earth is flat, but has maintained he could prove it by launching himself into space aboard his own rocket. He even claimed to have launched a homebrew rocket in 2014, though he had no evidence of it beyond his recovery from the landing. However, he has finally done it — not that he’s about to change scientists’ minds. Hughes’ steam-powered vessel launched near Amboy, California, climbing to about 1,875 feet before coming down in the Mojave Desert. Despite the rocket’s clear lack of safety features, paramedics determined that Hughes should be fine.
He had originally pegged the launch for November but had to postpone it multiple times due to a mix of legal requirements (the Bureau of Land Management wasn’t fond of him firing a crewed rocket over public land) and engineering troubles. He eventually launched from private property provided by Amboy’s owner, using a mobile home converted into a vertical launch ramp to make sure he stayed over private land.
Hughes hopes to fly much, much higher the next time around. His aim is to build a rocket that will launch from a balloon and take him to an altitude of 68 miles — roughly where space begins. If all goes according to plan, that would take place in August.
The irony, as you might guess, is that this launch wouldn’t even be possible on a flat Earth. A disc-shaped planet would have gravity that pulls straight down at only one point, the center; the farther you move from that point, the more horizontal the pull becomes. Unless Hughes had perfect placement, his rocket would likely veer far sideways. And that’s assuming the atmosphere stayed put (it would likely drift off into space) and that Earth maintained a steady distance from the Sun (its orbit is what keeps it from crashing into the star). Hughes depends on the very science that disproves his beliefs just to stay alive, let alone to climb high enough to discover that he’s wrong.
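That claim about disc gravity is easy to check numerically. The short Python sketch below is ours, not anything from Hughes or the article; it approximates the disc as a grid of equal point masses (the radius, observer height and units are arbitrary assumptions, since only the direction of the pull matters) and prints how far the net gravity vector tilts away from straight down as you move off-center.

# A rough numerical sketch of gravity above a disc-shaped "flat Earth".
# The disc is approximated as a grid of equal point masses; units are
# arbitrary because only the *direction* of the net pull matters here.
import math

def gravity_tilt_deg(obs_x, disc_radius=1.0, height=0.05, step=0.02):
    """Tilt of the net pull (degrees from vertical) at (obs_x, 0, height)."""
    gx = gz = 0.0
    r = disc_radius
    x = -r
    while x <= r:
        y = -r
        while y <= r:
            if x * x + y * y <= r * r:           # mass point lies on the disc
                dx, dy, dz = obs_x - x, -y, height
                d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                gx -= dx / d3                    # inverse-square attraction
                gz -= dz / d3
            y += step
        x += step
    return math.degrees(math.atan2(abs(gx), abs(gz)))

for frac in (0.0, 0.25, 0.5, 0.75, 0.95):
    print(f"{frac:.2f} of the radius from center: "
          f"gravity tilts {gravity_tilt_deg(frac):4.1f} degrees from vertical")

Only at the exact center does the pull point straight down; near the rim it tilts tens of degrees toward the middle of the disc, which is the problem the paragraph above describes.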
Velodyne, which made the lidar sensors on the Uber self-driving car, says its hardware should have been able to pick up pedestrian Elaine Herzberg before the vehicle hit her. Velodyne president Marta Thoma Hall told Bloomberg, “We are as baffled as anyone else. Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn’t make the decision to put on the brakes or get out of her way.”
The company, which supplies lidar units to a number of tech firms testing self-driving vehicles, wants to make sure its equipment isn’t blamed for the crash. The accident took place around 10 p.m., and in fact, lidar works better at night than during the day because the lasers won’t suffer any interference from daylight reflections. Some of the public commentary since the crash has revealed a lack of understanding about the systems that underpin self-driving vehicles.
Thoma Hall’s comments have been about clarifying a lidar array’s role in the driving task; namely, that even when the lasers detect an object, “it is up to the rest of the system to interpret and use the data to make decisions. We do not know how the Uber system of decision-making works.” If Uber’s software doesn’t process the data properly, then it doesn’t matter what the lasers register.
Her statements to Bloomberg and the BBC echo those of outside autonomous-vehicle researchers. One expert in the field of autonomy, Bryant Walker Smith, told Reuters, “Although this video isn’t the full picture, it strongly suggests a failure by Uber’s automated driving system….” Smith said the Uber software probably “classified [Herzberg] as something other than a stationary object.” Another expert told Reuters that the cameras and radar should have taken note of Herzberg, so, “Though no information is available, one would have to conclude based on this video alone, that there are problems in the Uber vehicle software that need to be rectified.”
Waymo, the autonomous driving division of Alphabet (which also owns Google), told the Washington Post, “Our car would have been able to handle it,” and not hit Herzberg. Waymo and Uber have a history, though; when a Waymo engineer defected to Uber, Waymo said he took trade secrets with him, so it sued Uber. The two companies settled the court case earlier this year, with Uber agreeing to give Waymo an equity stake reportedly worth about $245 million.
Uber’s self-driving program is based in Pittsburgh, where more than 700 engineers write the autonomous software and test the company’s products. Velodyne’s Thoma Hall said she hasn’t been in touch with Uber, but her company will soon speak to investigators.
Video suggests huge problems with Uber’s driverless car program
http://ift.tt/2GjcMah
There’s something very wrong with Uber’s driverless car program.
On Wednesday night, police released footage of Sunday night’s deadly crash in Tempe, Arizona, where an Uber self-driving car struck and killed 49-year-old Elaine Herzberg. The details it reveals are damning for Uber.
“The idea that she ‘just stepped out’ or ‘came out in a flash’ into the car path is clearly false,” said Tara Goddard, an urban planning professor at Texas A&M University, after seeing the video. “It seems like the system should have responded.”
The video shows that Herzberg crossed several lanes of traffic before reaching the lane where the Uber car was driving. You can debate whether a human driver should have been able to stop in time. But what’s clear is that the vehicle’s lidar and radar sensors—which don’t depend on ambient light and had an unobstructed view—should have spotted her in time to stop.
On top of that, the video shows that Uber’s “safety driver” was looking down at her lap for nearly five seconds just before the crash. This suggests that Uber was not doing a good job of supervising its safety drivers to make sure they actually do their jobs. The combination of these failures—and Herzberg’s decision to jaywalk in the first place—led to her death.
But zooming out from the specifics of Herzberg’s crash, the more fundamental point is this: conventional car crashes killed 37,461 people in the United States in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber announced that it had driven 2 million autonomous miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means Uber’s cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States.
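For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope calculation. The 37,461 deaths figure is NHTSA’s; the roughly 3.2 trillion total US vehicle miles for 2016 and the 3 million miles for Uber are approximations, so the result is an order-of-magnitude ratio (it comes out in the high 20s, which rounds down to the "roughly 25 times" above) rather than a precise multiplier.

# Back-of-the-envelope check of the fatality-rate comparison above.
# us_miles_2016 (~3.2 trillion, per FHWA estimates) and uber_miles
# (the article's ~3 million guess) are approximations, not exact figures.
us_deaths_2016 = 37_461
us_miles_2016 = 3.2e12
us_rate = us_deaths_2016 / (us_miles_2016 / 1e8)   # ~1.2 per 100M miles

uber_deaths = 1            # the Tempe crash
uber_miles = 3e6           # rough estimate of Uber's autonomous miles so far
uber_rate = uber_deaths / (uber_miles / 1e8)       # ~33 per 100M miles

print(f"US average: {us_rate:.2f} deaths per 100 million miles")
print(f"Uber:       {uber_rate:.1f} deaths per 100 million miles")
print(f"Ratio:      roughly {uber_rate / us_rate:.0f}x")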
Of course, it’s possible that Uber just got exceptionally unlucky. But it seems more likely that, even with the safety driver, Uber’s self-driving cars are way more dangerous than a car driven by the average human driver.
This shouldn’t surprise us. Uber executives know they’re behind Waymo in developing a self-driving car, and they’ve been pulling out all the stops to catch up. Uber inherited a culture of rule-breaking and corner-cutting from its founder and former CEO Travis Kalanick. That combination made a tragedy like this almost inevitable.
Uber probably wasn’t at fault, legally speaking, in recent crashes
An Uber self-driving car in San Francisco in 2017.
Justin Sullivan/Getty Images
Consider these recent crashes involving self-driving Uber cars:
In March 2017, an Uber self-driving car was struck on the left side as it went through an intersection in Tempe, Arizona. Uber was in the right-most lane on a six-lane road, approaching an intersection. The other two lanes in Uber’s direction were backed up with traffic. The other car was traveling in the opposite direction and making a left turn. The driver of that other vehicle said that cars stopped in the other lanes blocked her view, preventing her from seeing the Uber vehicle.
“Right as I got to the middle lane about to cross the third, I saw a car flying through the intersection, but I couldn’t brake fast enough to completely avoid the collision,” the driver of the non-Uber car said in the police report. Police cited the non-Uber driver for failing to yield the right of way. The Uber driver was not cited.
In February 2018, an Uber vehicle in Pittsburgh collided with another vehicle after the other car made a left turn in front of it. The Uber vehicle had its turn signal on, and the other driver thought this meant the Uber vehicle was going to turn at the intersection rather than go straight through. Uber says the car had its turn signal on because it was planning to change lanes.
Police did not determine who was at fault in the accident. But a Pennsylvania attorney told Ars that “generally speaking, before you take a left-hand turn, you’re required to ensure there’s not traffic coming from the other direction.”
In March 2018, we had this Sunday’s deadly crash in Tempe. Authorities have not reached any final conclusions about the case, but experts have told Ars there’s good reason to believe Herzberg may have been at fault, legally speaking. She was jaywalking in the middle of the night in a poorly lit area outside of a marked crosswalk.
“I think that preliminary results coming out is that the automation of the car was not at fault because the pedestrian stepped into the road,” said Mohamed Abdel-Aty, a civil engineer and traffic safety expert at the University of Central Florida.
So in all three of these incidents, there’s a strong argument that the other people involved—not the Uber car—were legally at fault for the crashes.
Jessica McLemore took this picture of the damage to her car shortly after a crash with an Uber vehicle in Pittsburgh in February 2018.
Jessica McLemore
“One of my big concerns about this incident is that people are going to conflate an on-the-spot binary assignment of fault with a broader evaluation of the performance of the automated driving system, the safety driver, and Uber’s testing program generally,” said Bryant Walker Smith, a law professor at the University of South Carolina.
“Human drivers recognize that they are going to deal with all kinds of behaviors that are not exactly lawful,” he added. “An obligation imposed under most if not all state vehicle codes are that drivers shall take due care to avoid a collision. You never get to say, well of course I hit them, they were in the road in my way.”
Indeed, it’s entirely possible to imagine a self-driving car system that always follows the letter of the law, and hence never does anything that would lead to a legal finding of fault, but is nevertheless far more dangerous than the average human driver. In fact, such a system might behave a lot like Uber’s cars do today.
For example, in that March 2017 collision in Tempe, the Uber driver reported that he was traveling 38 miles per hour at the time of the crash—just shy of the 40-mile-per-hour speed limit.
“As I entered the intersection, I saw the vehicle turning left,” he wrote. “There was no time to react as there was a blind spot created by the line of southbound traffic.”
The Uber car may have had a legal right to zip past two lanes of stopped cars at 38 miles per hour. But a prudent driver could have anticipated the possibility of a car in the blind spot—or, for that matter, a pedestrian trying to dart between the stopped cars in the next lane—and slowed down to 30 or even 20 miles per hour.
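To put illustrative numbers on that, here is a simple stopping-distance estimate. The assumptions (about 1.5 seconds of reaction or processing time, braking at roughly 0.7 g on dry pavement) are textbook defaults rather than anything measured from Uber’s cars, so treat the output as a rough comparison of speeds, not a reconstruction of any of these crashes.

# Illustrative stopping-distance arithmetic under textbook assumptions:
# ~1.5 s of reaction/processing time and braking at about 0.7 g on dry
# pavement. Real values vary with tires, road surface, and sensor latency.
MPH_TO_FPS = 5280 / 3600      # miles per hour -> feet per second
G = 32.2                      # gravitational acceleration, ft/s^2
REACTION_S = 1.5
BRAKE_DECEL = 0.7 * G

def stopping_distance_ft(mph):
    v = mph * MPH_TO_FPS
    reaction = v * REACTION_S            # distance covered before braking
    braking = v * v / (2 * BRAKE_DECEL)  # v^2 / (2a) once the brakes engage
    return reaction + braking

for mph in (38, 30, 20):
    print(f"{mph} mph: about {stopping_distance_ft(mph):.0f} ft to come to a stop")

Under these assumptions, dropping from 38 mph to 20 mph cuts the total stopping distance from roughly 150 feet to about 60, which is the margin a more cautious speed buys in exactly these blind-spot situations.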
So, too, in Pittsburgh. There was no police report, so we don’t know how fast the Uber car was traveling or if it tried to stop. But a prudent driver approaching an intersection where an oncoming car has a left turn signal on will slow down a bit and be prepared to stop—just in case the other car decides to turn illegally.
As for this month’s crash in Tempe, there seems to be little doubt that there was a serious failure of the car’s technology. It may or may not have been feasible for the car to stop based on camera data alone. But lidar works just as well at night as it does in the daytime. Even if Uber’s software didn’t recognize that Herzberg and her bicycle were a person, a car should always slow down if it sees an object that big moving into its lane.
Moreover, if the car really couldn’t have stopped in time to avoid killing Herzberg, that seems like a sign that the car was driving too quickly. It’s not like she jumped out from behind some bushes.
Cambridge Analytica breach results in lawsuits filed by angry Facebook users
http://ift.tt/2GiY6Yo
In the wake of the ongoing Cambridge Analytica debacle, Facebook has now been sued in federal court in San Francisco and San Jose. These new cases claim violations of federal securities laws, unfair competition, and negligence, among other allegations.
The pair of cases stem from recent revelations that Cambridge Analytica, a British data firm that contracted with the Donald Trump presidential campaign, retained private data from 50 million Facebook users despite claiming to have deleted it. New reporting on Cambridge Analytica has spurred massive public outcry from users and politicians, with CEO Mark Zuckerberg calling it a “breach of trust.”
These two cases, which were filed on March 20, could be just the first in a coming wave of similar lawsuits.
One suit, filed by Lauren Price, of Maryland, says that she was served political ads during the 2016 presidential campaign and believes that she is part of the 50 million affected users. However, nowhere in her lawsuit does she specify why she thinks this—if she’s not actually on the list, then she would lack standing, and the case would likely be dismissed.
“Facebook lies within the penumbra of blame,” her complaint argues.
She seeks to represent “All persons who registered for Facebook accounts in the United States and whose Personal Information was obtained from Facebook by Cambridge Analytica without authorization or in excess of authorization.”
Her lawyers did not respond to Ars’ request for comment.
A second lawsuit is being brought by Fan Yuan, a man who describes himself as a Facebook stockholder who bought stock at an “inflated price” after February 3, 2017. The suit claims that the company made false statements when it did not reveal the breach. As such, when Facebook’s stock price dropped after the news broke late last week, he and many other investors lost money.
Facebook has refused to answer Ars’ questions or to provide many further details beyond public statements by its top executives and lawyers. The company will not say precisely what data was shared or when or how it will formally notify affected users.
“We are committed to vigorously enforcing our policies to protect people’s information,” Paul Grewal, Facebook’s deputy general counsel, said in a statement. “We will take whatever steps are required to see that this happens.”
In a post, Zuckerberg said that the company would impose strict changes going forward.
“We will restrict developers’ data access even further to prevent other kinds of abuse,” he wrote on Wednesday. “For example, we will remove developers’ access to your data if you haven’t used their app in three months. We will reduce the data you give an app when you sign in—to only your name, profile photo, and email address. We’ll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we’ll have more changes to share in the next few days.”