The latest from Elon Musk: Your very own Boring Company flamethrower

Elon Musk promises and Elon Musk delivers, eventually.

For a guy who is going to save the planet with electric cars, Elon Musk sure likes burning fossil fuels. His SpaceX Merlin engines burn RP-1, a highly refined form of kerosene that takes a lot of energy to produce, oxidized by liquid oxygen. Don MacKenzie of the Sustainable Transportation Lab ran the numbers: each launch puts out about 640 metric tonnes of CO2, roughly as much as 10 Teslas save over their lifetimes compared with regular cars.
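That comparison is easy to sanity-check. The per-car figure below is an illustrative assumption chosen to be consistent with the quoted ratio, not a number from MacKenzie's analysis:

```javascript
// Back-of-envelope check of the launch-vs-Tesla comparison.
// Assumption (not from the article): a Tesla avoids roughly 64 metric
// tonnes of CO2 over its lifetime versus a comparable gasoline car.
const launchCo2Tonnes = 640;    // per-launch estimate quoted above
const savedPerTeslaTonnes = 64; // assumed lifetime savings per car

const carsEquivalent = launchCo2Tonnes / savedPerTeslaTonnes;
console.log(carsEquivalent); // 10
```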

SpaceX rockets do useful things, like putting satellites and Tesla Roadsters into space. But the same cannot be said of Elon Musk’s latest product — a flamethrower. He promised it back in December in a tweet, and yes, for six hundred bucks, you can soon be the proud owner of an official Boring Company flamethrower.

Flamethrower ad © The Boring Company

It is evidently up on a password-protected website, and appears to be an Airsoft BB gun converted with some form of propane burner. It seems totally useful and fun for the kids.

from TreeHugger http://ift.tt/2GkPq10
via IFTTT

The Dirty War Over Diversity Inside Google

Fired Google engineer James Damore says he was vilified and harassed for questioning what he calls the company’s liberal political orthodoxy, particularly around the merits of diversity.

Now, outspoken diversity advocates at Google say that they are being targeted by a small group of their coworkers, in an effort to silence discussions about racial and gender diversity.

In interviews with WIRED, 15 current Google employees accuse coworkers of inciting outsiders to harass rank-and-file employees who are minority advocates, including queer and transgender employees. Since August, screenshots from Google’s internal discussion forums, including personal information, have been displayed on sites including Breitbart and Vox Popoli, a blog run by alt-right author Theodore Beale, who goes by the name Vox Day. Other screenshots were included in a 161-page lawsuit that Damore filed in January, alleging that Google discriminates against whites, males, and conservatives.

What followed, the employees say, was a wave of harassment. On forums like 4chan, members linked advocates’ names with their social-media accounts. At least three employees had their phone numbers, addresses, and deadnames (a transgender person’s name prior to transitioning) exposed. Google site reliability engineer Liz Fong-Jones, a trans woman, says she was the target of harassment, including violent threats and degrading slurs based on gender identity, race, and sexual orientation. More than a dozen pages of personal information about another employee were posted to Kiwi Farms, which New York magazine has called “the web’s biggest community of stalkers.”

Meanwhile, inside Google, the diversity advocates say some employees have “weaponized human resources,” by goading them into inflammatory statements, which are then captured and reported to HR for violating Google’s mores around civility or for offending white men.

Engineer Colin McMillen says the tactics have unnerved diversity advocates and chilled internal discussion. “Now it’s like basically anything you say about yourself may end up getting leaked to score political points in a lawsuit,” he says. “I have to be very careful about choosing my words because of the low-grade threat of doxing. But let’s face it, I’m not visibly queer or trans or non-white and a lot of these people are keying off their own white supremacy.”

Targeted employees say they have complained to Google executives about the harassment. They say Google’s security team is vigilant about physical threats and that Danielle Brown, Google’s chief diversity and inclusion officer, who has also been targeted by harassers, has been supportive and reassuring. But they say they have not been told the outcome of complaints they filed against coworkers they believe are harassing them, and that top executives have not responded assertively to concerns about harassment and doxing. As a result, some employees now check hate sites for attempts at doxing Google employees, which they then report to Google security.

Google declined to respond to questions due to ongoing litigation, but a Google spokesperson said the company has met with every employee who expressed concern.

The complaints underscore how Google’s freewheeling workplace culture, where employees are encouraged to “bring your whole self to work” and exchange views on internal discussion boards, has turned as polarized and toxic as the national political debate.

Aneeta Rattan, an assistant professor of organizational behavior at London Business School, says organizations such as Google that want to foster an open environment have to establish norms and rules of engagement around difficult conversations. “They don’t want to have a giant list of things you can’t say,” but they should identify parameters, says Rattan, who has studied prejudice in the workplace and the ability of groups of people to change their minds. “A lot of this is about stoking complex thought, which means everyone will leave somewhat unhappy,” she says. “That is not something all organizations want to foster.”

The politicized tension inside Google echoes the challenge that Silicon Valley tech giants face moderating divisive content on their social-media platforms. Tech companies sold themselves as open and neutral forces for good, espousing free expression both on their corporate campuses and on the internet. But critics say that too often, the social-media sites have become hotbeds of hate speech.

Some anger from the alt-right is now aimed at the tech companies themselves. After Damore’s memo became public in August, a Breitbart headline screamed, “Google’s Social Justice Warriors Create Wrongthink Blacklists.” Earlier this month, James O’Keefe’s Project Veritas posted surreptitious video of Twitter employees discussing the company’s moderation policies.

Yonatan Zunger, a high-ranking veteran engineer who left Google eight months ago, says the internal culture has become a textbook case of the “paradox of tolerance,” the notion that if a society is tolerant without limit, it will be seized upon by the intolerant.

The combatants represent just a sliver of Google’s more than 75,000 employees. Executives seem to want everyone to get back to work, rather than be forced into the awkward position of refereeing a culture war. “Just like they’re reporting me, I’m reporting them as well,” says Alon Altman, a staff engineer and diversity advocate. After Damore’s memo was disclosed in August, Altman says the complaints from both sides amounted to “a denial-of-service attack on human resources.”

Google is an important symbol in Silicon Valley’s struggles with diversity. Damore’s suit claims Google discriminates against whites, males, and conservatives.
At the same time, Google faces a Department of Labor investigation and a private lawsuit from four former employees claiming that it discriminates against women in pay and promotion. The company was the first tech giant to release its diversity numbers in 2014, but has not made significant progress since.

Diversity advocates say that by trying to stay neutral, Google is being exploited by instigators, who have disguised a targeted harassment campaign as conservative political thought.

One flashpoint is Google’s training for employees about sexual, racial, and ethnic diversity. In his memo, Damore said the programs “are highly politicized which further alienates non-progressives.” But one black woman employee offers an opposing complaint. She says the programs lack context about discrimination and inequality and focus on interpersonal relationships, instructing employees to watch what they say because it might hurt someone’s feelings. “It robs Google of the chance to discuss these issues,” and leaves criticisms unanswered, she says. She says co-workers and her manager have described diversity as “just another box to check and a waste of time.”

Zunger, the former employee, says Google managers often are put in an impossible position while trying to resolve disputes. As a consequence, sometimes managers tried to restore calm by telling everyone to knock it off. Zunger says this was well-intentioned, but ultimately counterproductive. “Once an awareness of contempt is present in the room, not talking about it doesn’t make it go away,” he said.

Until her name and face showed up on a website run by Beale, the right-wing provocateur also known as Vox Day, Fong-Jones says she did not appreciate what she was up against. Like many diversity advocates, Fong-Jones works an unpaid second shift as an informal liaison between under-represented minorities and management. Over the past few years, she learned to keep a close eye on conversations about diversity issues. It began subtly. Coworkers peppered mailing lists and company town halls with questions: What about meritocracy? Isn’t improving diversity lowering the bar? What about viewpoint diversity? Doesn’t this exclude white men?

Fong-Jones initially assumed that the pushback stemmed from genuine fear or concern. But that changed in August when Damore’s memo, arguing that women are less biologically predisposed to become engineers and leaders, went viral. On Google’s internal communications channels, employees debated Damore’s arguments.

Beale published leaked snippets of a conversation between Fong-Jones and a colleague, where Fong-Jones argued that Damore should not have been allowed to publish his memo on an internal Google site. That fired up Beale. “Google’s SJWs [social-justice warriors] are starting to get nervous as evidence of their internal thought-policing begins to leak out into the public,” Beale wrote. “And never forget, they genuinely believe that they are better-educated, as well as our moral and intellectual superiors, because Google only hires the smartest, best-educated people, right?”

Fong-Jones is used to being harassed online. But she was quickly flooded with direct messages on Twitter containing violent threats and degrading and transphobic slurs based on gender identity, race, and sexual orientation. One commenter on Vox Popoli wrote that, “they should pitch all those sexual freaks off of rooftops.”

That’s when it clicked: perhaps some of her coworkers’ questions had not been in good faith. “We didn’t realize that there was a dirty war going on, and weren’t aware of the tactics being used against us,” she says. The stakes soon became clear. A few days later, alt-right figurehead Milo Yiannopoulos shared an image with his 2.5 million Facebook followers featuring the Twitter bios and profile pics of eight advocates at Google, many of them trans employees.

As the internal debate raged in the wake of Damore’s memo, McMillen says that he knows of at least 10 coworkers who were called into HR for making political statements related to the document, with consequences ranging from verbal warnings to a reduced performance-review score. McMillen was told by HR not to do anything hiring or promotion related for a year. Altman got a verbal warning for writing on an internal board that certain employees should be fired. “I meant only bigoted white men should be fired. They interpreted it as applying to all white men,” Altman says.

The roots of the tension go back years. Former Google engineer Cory Altheide told WIRED that he noticed racist and other hate-filled posts on Google discussion boards before he quit in 2015. In a memo he wrote after leaving the company and circulated this month, he pointed to a post on a blog run by a Google employee that said, “Blacks are not equal to whites. Therefore the ‘inequality’ between these races is expected and makes perfect sense.” WIRED was not able to confirm the identity of the employee.

Some employees see similarities between some of the behavior inside Google and alt-right manuals for fighting advocates for social justice, such as one written by Beale that instructs readers to “Document their every word and action,” “Undermine them, sabotage them, and discredit them,” and “Make the rubble bounce” on your way out the door.

Beale says they’re right. “I know that there are a number of people there who have read [the guide], I know that they’re using it,” Beale told WIRED. He claims to have had contacts inside the company for years, including dozens of followers. He says he doesn’t know whether Damore has read his guide, but that Damore is following the playbook. Damore says he has not read the manual.


from Wired Top Stories http://ift.tt/2neQ741
via IFTTT

How Baidu plans to profit from its free autonomous-car technology

Facebook’s experimental chatbot is learning to do small talk

A chatbot trained to engage its partner on personal topics can learn to predict information about the other participant.

Background: Even with AI, chatbots are brittle systems that typically can’t talk about anything outside of what they’ve been trained…

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2Gl7wA6
via IFTTT

Algorithms are making American inequality worse

William Gibson wrote that the future is here, just not evenly distributed. The phrase is usually used to point out how the rich have more access to technology, but what happens when the poor are disproportionately subject to it?

In Automating Inequality, author Virginia Eubanks argues that the poor are the testing ground for new technology that increases inequality. The book, out this week, starts with a history of American poorhouses, which dotted the landscape starting in the 1660s and were around into the 20th century. From there, Eubanks catalogues how the poor have been treated over the last hundred years, before coming to today’s system of social services that increasingly relies on algorithms.

Eubanks leaves no uncertainty as to her position on whether such automation is a good thing. Her thesis is that the punitive and moralistic view of poverty that built the poorhouses never left us, and has been wrapped into today’s automated and predictive decision-making tools. These algorithms can make it harder for people to get services while forcing them to deal with an invasive process of personal data collection. As examples, she profiles three different programs: a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.

Eubanks spoke to MIT Technology Review about when social services first became automated, her own experience with predictive algorithms, and how these flawed tools give her hope that inequality will be put into such stark relief that we will have to address how we treat our poor, once and for all.

What are the parallels between the poorhouses of the past and what you call today’s digital poorhouses?

These high-tech tools we’re seeing—I call it “the regime of data analytics”—are actually more evolution than revolution. They fit pretty well within the history of poverty policy in the United States.

When I originally started this work, I thought the moment that we’d see these digital tools really arrive in public assistance and public services might be in the 1980s, when there was a widespread uptake of personal computers, or in the 1990s when welfare reform passed. But in fact, they arose in the late 1960s and early 1970s, just as a national welfare rights movement was opening up access to public assistance.

At the same time, there was a backlash against the civil rights movement going on, and a recession. So these elected officials, bureaucrats, and administrators were in this position where the middle-class public was pushing back against the expansion of public assistance. But they could no longer use their go-to strategy of excluding people from the rolls for largely discriminatory reasons. That’s the moment that we see these technologies arrive. What you see is an incredibly rapid decline in the welfare rolls right after they’re integrated into the systems. And that collapse has continued basically until today.

Some of the algorithms we have right now will eventually be replaced by machine-learning tools. In your research, did you come across any issues that are going to arise once we have more AI within these systems?

I don’t know that I have a direct response to it. But one thing I will say is that the Pittsburgh child services system often gets written about as if it’s AI or machine learning. And in fact, it’s actually just a simple statistical regression model.

I do think it’s really interesting, the way we tend to math-wash these systems, that we have a tendency to think they’re more complicated and harder to understand than they actually are. I suspect that there’s a little bit of technological hocus-pocus that happens when these systems come online and people often feel like they don’t understand them well enough to comment on them. But it’s just not true. I think a lot more people than are currently talking about these issues are able to, confident enough to, and should be at the table when we talk about them.

You have a great quote from a woman on food stamps who tells you her caseworker looks at her purchase history. You appear surprised, so she says, “You should pay attention to what happens to us. You’re next.” Do you have examples of technologies that the general population deals with that are like this example?

I start the book by talking about a case where my partner was attacked and very badly beaten. After he had gotten some major surgery, we were told at the pharmacy when I was trying to pick up his pain meds that we no longer had health insurance. In a panic, I called my insurance company and they told me basically that we were missing a start date for our coverage.

I said, “You know, well, that’s odd because you paid claims that we made a couple of weeks ago, so we must have had a start date at that point.” And they said, “Oh, it must have just been a technical error. Somebody must have accidentally erased your start date or something.”

I was really suspicious that what was actually going on was that they had suspended our coverage while they investigated us for fraud (I had been working on these kinds of fraud detection tools for a long time by then). And we had some of the most common indicators that insurance fraud was occurring: we had only had our insurance for a couple of days before the attack, we are not married, and he had received controlled substances to help him manage his pain.

I will never know whether we were being investigated, but either way they were telling us that we owed upward of $60,000 in medical bills that had been denied because we weren’t covered when the claims went through. It caused extraordinary stress.

So these systems are actually already at work, sort of invisibly, in many of the services that we interact with on a day-to-day basis, whether we are poor, working class or professional middle class, or economic elites. But they don’t affect us all equally. My partner and I were able to endure that experience because we had resources to help us get through that experience, and also because it only happened to us once. It wasn’t coming from every direction. It wasn’t an overwhelming force where we’re hearing it from child protective services, and also Medicaid, and also food stamps, and also the police.

I think it can be a lot harder for folks who are dealing with many of these systems at the same time.

Is there anything good happening because of these tools?

One of the reasons I’m optimistic is that these systems are also really incredible diagnostics. They make inequities in our country really concrete, and really evident. Where one of the systems goes spiraling out of control is a place where we have a deep inequality that needs to be addressed. And so I believe that the combination of the movement work that’s already happening now and increased attention to systems like these can really create incredible pressure to create a more just social system overall.

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2rKzNMS
via IFTTT

This drone learned to fly through streets by studying driverless-car data


You should be listening to video game soundtracks at work

As I write these words, a triumphant horn is erupting in my ear over the rhythmic bowing of violins. In fact, as you read, I would encourage you to listen along—just search “Battlefield One.” I bet you’ll focus just a bit better with it playing in the background. After all, as a video game soundtrack, it’s designed to have exactly that effect.

This is, by far, the best Life Pro Tip I’ve ever gotten or given: Listen to music from video games when you need to focus. It’s a whole genre designed to simultaneously stimulate your senses and blend into the background of your brain. It has to engage you, the player, in a task without distracting from it; the best game music actually directs the listener’s attention to the task.

Plenty of studies show that having some sound around you can help you focus, probably because it gives your subconscious something to tune out. It doesn’t have to focus on that coughing coworker or the occasional sound of doors closing, so you aren’t distracted by intermittent interruptions. Music seems to focus us the best, but not just any music. The latest #1 single is more likely to make you sing along and tap your toes than settle into your work day.

Silence, on the other hand, seems to make office workers slower and less proficient than their music-listening compatriots. Even some surgeons use music to get in the groove, and research suggests those who do perform operations more efficiently and with higher accuracy.

There isn’t a wealth of research on working while listening to video game soundtracks, specifically. But they do seem to check off several evidence-based boxes for creating an optimal work environment.

#1 No lyrics

Thanks to endless years of evolution, your brain is designed to detect humans in all forms. Your eyes have a propensity to see faces (even where there are none) and your ears are tuned to the frequency of human voices. This is why hearing someone talk is so distracting—your brain keeps trying to turn your attention to whoever is speaking instead of whatever you’re supposed to be doing. Bustling coffee shops don’t have this effect, because the voices blend together and stop being recognizable as language. But in an open office plan, human speech slips into your range of hearing just often enough to keep your mind wandering. In fact, a study on open offices found that broadcasting speech was the least conducive to productivity, while continuous background noise actually boosted performance.

Video game soundtracks rarely have human voices, and when they do, they’re generally singing sounds (ethereal oooohs, spooky aaaaaahs, and the like) rather than actual words.

The one strange exception is The Sims. Some of the radio stations in-game feature Sims talking, but since your virtual world citizens speak Simlish, your ears don’t really pick it up as language—because really, it isn’t. The soundtrack to The Sims is incredibly conducive to efficiency, probably because you’re supposed to play the game for hours doing tasks that are arguably kind of boring. You draw walls and place furniture, you tell your Sims to go to the bathroom, then you wait as they slowly eat the lasagna they made for dinner. Just try playing The Sims in silence. It’s kind of dull. But with that cheerful music in the background, you’re compelled to keep going.

#2 Relatively constant, low volume

Most music meant to engage you varies in volume, because, well, loud music is exciting and quiet music is soothing. Flipping between the two is an easy way to change the whole tone. But if a song suddenly turns up to 11 while you’re in the middle of writing a sentence, you’ll get distracted. That’s the opposite of what you want. You’re looking for no surprises. Smooth crescendos, which video games definitely tend to have, are noticeable without being totally distracting—they carry you on, which is exactly what you want.

Even too-loud ambient noise is distracting. One study found that loud sounds impair your ability to process information, whereas low or moderate background noise actually boosts productivity and creativity.

#3 Fairly fast-paced

Not all classical music is slow, but there’s a reason that it’s often relaxing: plenty of it is sweet and melodic. But you’re not looking for calming melodies. You need to stimulate your mind.

Video game music, almost by definition, can’t be soothing. No one will play straight through a 22-hour virtual plot (multiple times) if they’re chilled out. Composers need to create some sense of engagement and excitement—without making it exhausting.

It’s a bit like why rap and hip hop are great workout music—the rhythm and flow push you along and keep up your motivation. Actual scientific studies show that athletes perform better when given rhythmic music to listen to. Those genres actually work well if you’re doing a mindless, repetitive task, since they give your brain something else to focus on. But if your task involves any reading or writing, you need something without lyrics.

There are a few excellent playlists on Spotify that offer hours of music, or you can just listen to The Sims soundtrack endlessly on YouTube. Just don’t learn Simlish or you’ll never get anything done.

from Popular Science – New Technology, Science News, The Future Now http://ift.tt/2rEUUAv
via IFTTT

Now even YouTube serves ads with CPU-draining cryptocurrency miners

YouTube was recently caught displaying ads that covertly leech off visitors’ CPUs and electricity to generate digital currency on behalf of anonymous attackers, it was widely reported.

Word of the abusive ads started no later than Tuesday, as people took to social media sites to complain their antivirus programs were detecting cryptocurrency mining code when they visited YouTube. The warnings came even when people changed the browser they were using, and the warnings seemed to be limited to times when users were on YouTube.

On Friday, researchers with antivirus provider Trend Micro said the ads helped drive a more than three-fold spike in Web miner detections. They said the attackers behind the ads were abusing Google’s DoubleClick ad platform to display them to YouTube visitors in select countries, including Japan, France, Taiwan, Italy, and Spain.

The ads contain JavaScript that mines the digital coin known as Monero. In nine out of 10 cases, the ads use publicly available JavaScript provided by Coinhive, a cryptocurrency-mining service that’s controversial because it allows subscribers to profit by surreptitiously using other people’s computers. The remaining 10 percent of the time, the YouTube ads use private mining JavaScript that saves the attackers the 30 percent cut Coinhive takes. Both scripts are programmed to consume 80 percent of a visitor’s CPU, leaving just barely enough resources for the computer to function.
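For context, Coinhive’s publicly documented embed of that era was only a few lines of JavaScript: load the remote miner library, construct a miner with a site key, and optionally pass a `throttle` option, defined as the fraction of time the miner stays idle. A throttle of 0.2 thus corresponds to the roughly 80 percent CPU consumption described above. The sketch below stubs out the remote library so the throttle arithmetic can be shown self-contained; the class and option names follow Coinhive’s public documentation, and the site key is a placeholder:

```javascript
// Stub of the Coinhive-style embed described above. The real ads loaded
// coinhive.min.js from Coinhive's servers; this stand-in only models the
// documented `throttle` option: the fraction of time the miner idles.
const CoinHive = {
  Anonymous: class {
    constructor(siteKey, opts = {}) {
      this.siteKey = siteKey;
      this.throttle = opts.throttle ?? 0; // 0 = run flat-out
    }
    // Share of the visitor's CPU the miner would consume at this setting.
    cpuShare() {
      return 1 - this.throttle;
    }
  },
};

// throttle: 0.2 → idle 20% of the time, i.e. ~80% CPU, as Trend Micro found.
const miner = new CoinHive.Anonymous("SITE_KEY_PLACEHOLDER", { throttle: 0.2 });
console.log(miner.cpuShare()); // 0.8
```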

“YouTube was likely targeted because users are typically on the site for an extended period of time,” independent security researcher Troy Mursch told Ars. “This is a prime target for cryptojacking malware, because the longer the users are mining for cryptocurrency the more money is made.” Mursch said a campaign from September that used the Showtime website to deliver cryptocurrency-mining ads is another example of attackers targeting a video site.

To add insult to injury, the malicious JavaScript in at least some cases was accompanied by graphics that displayed ads for fake AV programs, which scam people out of money and often install malware when they are run.

One such ad was posted on Tuesday. Like the ads analyzed by Trend Micro and posted on social media, it mined Monero coins on behalf of someone with the Coinhive site key of “h7axC8ytzLJhIxxvIHMeC0Iw0SPoDwCK.” It’s not possible to know how many coins that user has generated so far. Trend Micro said the campaign started January 18. In an e-mail sent as this post was going live, a Google representative wrote:

Mining cryptocurrency through ads is a relatively new form of abuse that violates our policies and one that we’ve been monitoring actively. We enforce our policies through a multi-layered detection system across our platforms which we update as new threats emerge. In this case, the ads were blocked in less than two hours and the malicious actors were quickly removed from our platforms.

It wasn’t clear what the representative meant when saying the ads were blocked in less than two hours. Evidence supplied by Trend Micro and on social media showed the ads ran for as long as a week. The representative didn’t respond to follow-up questions seeking clarification and a timeline of when the abusive ads started and ended.

from Ars Technica http://ift.tt/2DG80yV
via IFTTT