Vivobarefoot Smart Shoe could be the future

January 28th, 2018

Vivobarefoot has teamed up with Sensoria, pairing a premier wearable smart-technology company with a shoe company to deliver the Vivobarefoot Smart Shoe. The partnership has produced what is billed as the world’s first IoT-enabled shoe with an ultra-thin sole, so the foot can do its natural thing. A concept just a year ago, the shoe features a single layer of fabric-thin pressure sensors that records natural movement without any additional underfoot padding or interference. The connected barefoot-movement shoe will go on sale some time in the second quarter of this year.

The embedded Sensoria technology, billed as the thinnest pressure sensors in the world, sits in the plantar area. The sensors are detachable, rechargeable, and reusable thanks to the Sensoria Core hardware form factor, which collects data and streams it to the user’s mobile phone. The connected devices detect forces with precision, producing metrics ranging from impact score to foot landing and contact time.

From this data, the system uses artificial intelligence to build a step-by-step natural-running transition training plan and to deliver real-time audio and visual feedback through a mobile app and a web dashboard. It monitors details such as speed, pace, cadence, GPS track, foot-landing technique, ground time, and impact score, with asymmetry and toe engagement to follow in due time. Asymmetry and toe engagement are important metrics for monitoring natural running technique and reducing the risk of injury.
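
For a sense of what deriving such metrics involves, here is a minimal sketch of computing ground-contact time and cadence from a stream of plantar pressure readings. The sample data, threshold, and function names are hypothetical illustrations, not Sensoria’s actual algorithm or API.

```python
# Hypothetical sketch: deriving ground-contact time and cadence from a
# plantar pressure stream. Data, threshold, and names are illustrative,
# not Sensoria's actual algorithm or API.

# (timestamp_seconds, pressure) samples from one foot sensor
samples = [(0.00, 5), (0.02, 80), (0.04, 90), (0.06, 70), (0.08, 4),
           (0.70, 6), (0.72, 85), (0.74, 95), (0.76, 75), (0.78, 3)]

CONTACT_THRESHOLD = 20  # pressure above this counts as foot-on-ground

def contact_phases(stream, threshold=CONTACT_THRESHOLD):
    """Return (touchdown, liftoff) timestamps for each ground contact."""
    phases, down_at = [], None
    for t, pressure in stream:
        if pressure >= threshold and down_at is None:
            down_at = t                  # foot just landed
        elif pressure < threshold and down_at is not None:
            phases.append((down_at, t))  # foot just lifted
            down_at = None
    return phases

phases = contact_phases(samples)
print("ground times (s):", [round(up - down, 2) for down, up in phases])

# cadence: contacts on one foot, doubled for both feet, per minute
duration_min = (samples[-1][0] - samples[0][0]) / 60
print("cadence (steps/min):", round(2 * len(phases) / duration_min))
```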

If you are a natural runner who would like real-time feedback to help you run faster, farther, and healthier, then the Vivobarefoot Smart Shoe is the pair to look out for.

Press Release

from Coolest Gadgets http://ift.tt/2rMoZ0R
via IFTTT

The latest from Elon Musk: Your very own Boring Company flamethrower

Elon Musk promises and Elon Musk delivers, eventually.

For a guy who is going to save the planet with electric cars, Elon Musk sure likes burning fossil fuels. His SpaceX Merlin engines burn RP-1, a highly refined form of kerosene, oxidized by liquid oxygen, which takes a lot of energy to make. Don Mackenzie of the Sustainable Transportation Lab ran the numbers: each launch puts out about 640 metric tonnes of CO2, roughly as much as 10 Teslas save over their lifetimes compared with regular cars.
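
The per-Tesla arithmetic implied by that comparison is easy to check. The two inputs below are the article’s figures; the derived number is a back-of-envelope split, not Mackenzie’s published methodology.

```python
# Back-of-envelope check: both inputs are the article's figures; the
# per-Tesla number is derived arithmetic, not Mackenzie's methodology.
launch_co2_tonnes = 640  # estimated CO2 per launch
teslas_offset = 10       # one launch ~ lifetime savings of 10 Teslas

print(launch_co2_tonnes / teslas_offset, "tonnes CO2 saved per Tesla lifetime")
# -> 64.0
```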

SpaceX rockets do useful things, like putting satellites and Tesla Roadsters into space. The same cannot be said of Elon Musk’s latest product: a flamethrower. He promised it in a tweet back in December, and yes, for six hundred bucks, you can soon be the proud owner of an official Boring Company flamethrower.

Flamethrower ad © The Boring Company

It is evidently up on a password-protected website, and appears to be an Airsoft BB gun converted with some form of propane burner. It seems totally useful and fun for the kids.

from TreeHugger http://ift.tt/2GkPq10
via IFTTT

The Dirty War Over Diversity Inside Google

Fired Google engineer James Damore says he was vilified and harassed for questioning what he calls the company’s liberal political orthodoxy, particularly around the merits of diversity.

Now, outspoken diversity advocates at Google say that they are being targeted by a small group of their coworkers, in an effort to silence discussions about racial and gender diversity.

In interviews with WIRED, 15 current Google employees accuse coworkers of inciting outsiders to harass rank-and-file employees who are minority advocates, including queer and transgender employees. Since August, screenshots from Google’s internal discussion forums, including personal information, have been displayed on sites including Breitbart and Vox Popoli, a blog run by alt-right author Theodore Beale, who goes by the name Vox Day. Other screenshots were included in a 161-page lawsuit that Damore filed in January, alleging that Google discriminates against whites, males, and conservatives.

What followed, the employees say, was a wave of harassment. On forums like 4chan, members linked advocates’ names with their social-media accounts. At least three employees had their phone numbers, addresses, and deadnames (a transgender person’s name prior to transitioning) exposed. Google site reliability engineer Liz Fong-Jones, a trans woman, says she was the target of harassment, including violent threats and degrading slurs based on gender identity, race, and sexual orientation. More than a dozen pages of personal information about another employee were posted to Kiwi Farms, which New York has called “the web’s biggest community of stalkers.”

Meanwhile, inside Google, the diversity advocates say some employees have “weaponized human resources,” by goading them into inflammatory statements, which are then captured and reported to HR for violating Google’s mores around civility or for offending white men.

Engineer Colin McMillen says the tactics have unnerved diversity advocates and chilled internal discussion. “Now it’s like basically anything you say about yourself may end up getting leaked to score political points in a lawsuit,” he says. “I have to be very careful about choosing my words because of the low-grade threat of doxing. But let’s face it, I’m not visibly queer or trans or non-white and a lot of these people are keying off their own white supremacy.”

Targeted employees say they have complained to Google executives about the harassment. They say Google’s security team is vigilant about physical threats and that Danielle Brown, Google’s chief diversity and inclusion officer, who has also been targeted by harassers, has been supportive and reassuring. But, they say they have not been told the outcome of complaints they filed against coworkers they believe are harassing them, and that top executives have not responded assertively to concerns about harassment and doxing. As a result, some employees now check hate sites for attempts at doxing Google employees, which they then report to Google security.

Google declined to respond to questions due to ongoing litigation, but a Google spokesperson said the company has met with every employee who expressed concern.

The complaints underscore how Google’s freewheeling workplace culture, where employees are encouraged to “bring your whole self to work” and exchange views on internal discussion boards, has turned as polarized and toxic as the national political debate.

Aneeta Rattan, an assistant professor of organizational behavior at London Business School, says organizations such as Google that want to foster an open environment have to establish norms and rules of engagement around difficult conversations. “They don’t want to have a giant list of things you can’t say,” but they should identify parameters, says Rattan, who has studied prejudice in the workplace and the ability of groups of people to change their minds. “A lot of this is about stoking complex thought, which means everyone will leave somewhat unhappy,” she says. “That is not something all organizations want to foster.”

The politicized tension inside Google echoes the challenge that Silicon Valley tech giants face moderating divisive content on their social-media platforms. Tech companies sold themselves as open and neutral forces for good, espousing free expression both on their corporate campuses and on the internet. But critics say that too often, the social-media sites have become hotbeds of hate speech.

Some anger from the alt-right is now aimed at the tech companies themselves. After Damore’s memo became public in August, a Breitbart headline screamed, “Google’s Social Justice Warriors Create Wrongthink Blacklists.” Earlier this month, James O’Keefe’s Project Veritas posted surreptitious video of Twitter employees discussing the company’s moderation policies.

Yonatan Zunger, a high-ranking veteran engineer who left Google eight months ago, says the internal culture has become a textbook case of the “paradox of tolerance,” the notion that if a society is tolerant without limit, it will be seized upon by the intolerant.

The combatants represent just a sliver of Google’s more than 75,000 employees. Executives seem to want everyone to get back to work, rather than be forced into the awkward position of refereeing a culture war. “Just like they’re reporting me, I’m reporting them as well,” says Alon Altman, a staff engineer and diversity advocate. After Damore’s memo was disclosed in August, Altman says the complaints from both sides amounted to “a denial-of-service attack on human resources.”

Google is an important symbol in Silicon Valley’s struggles with diversity. Damore’s suit claims Google discriminates against whites, males, and conservatives.
At the same time, Google faces a Department of Labor investigation and a private lawsuit from four former employees claiming that it discriminates against women in pay and promotion. The company was the first tech giant to release its diversity numbers in 2014, but has not made significant progress since.

Diversity advocates say that by trying to stay neutral, Google is being exploited by instigators, who have disguised a targeted harassment campaign as conservative political thought.

One flashpoint is Google’s training for employees about sexual, racial, and ethnic diversity. In his memo, Damore said the programs “are highly politicized which further alienates non-progressives.” But one black woman employee offers an opposing complaint. She says the programs lack context about discrimination and inequality and focus on interpersonal relationships, instructing employees to watch what they say because it might hurt someone’s feelings. “It robs Google of the chance to discuss these issues,” and leaves criticisms unanswered, she says. She says co-workers and her manager have described diversity as “just another box to check and a waste of time.”

Zunger, the former employee, says Google managers often are put in an impossible position while trying to resolve disputes. As a consequence, sometimes managers tried to restore calm by telling everyone to knock it off. Zunger says this was well-intentioned, but ultimately counterproductive. “Once an awareness of contempt is present in the room, not talking about it doesn’t make it go away,” he said.

Until her name and face showed up on a website run by Beale, the right-wing provocateur also known as Vox Day, Fong-Jones says she did not appreciate what she was up against. Like many diversity advocates, Fong-Jones serves as an informal liaison between under-represented minorities and management, an unpaid second shift. Over the past few years, she learned to keep a close eye on conversations about diversity issues. It began subtly. Coworkers peppered mailing lists and company town halls with questions: What about meritocracy? Isn’t improving diversity lowering the bar? What about viewpoint diversity? Doesn’t this exclude white men?

Fong-Jones initially assumed that the pushback stemmed from genuine fear or concern. But that changed in August when Damore’s memo, arguing that women are less biologically predisposed to become engineers and leaders, went viral. On Google’s internal communications channels, employees debated Damore’s arguments.

Beale published leaked snippets of a conversation between Fong-Jones and a colleague, where Fong-Jones argued that Damore should not have been allowed to publish his memo on an internal Google site. That fired up Beale. “Google’s SJWs [social-justice warriors] are starting to get nervous as evidence of their internal thought-policing begins to leak out into the public,” Beale wrote. “And never forget, they genuinely believe that they are better-educated, as well as our moral and intellectual superiors, because Google only hires the smartest, best-educated people, right?”

Fong-Jones is used to being harassed online. But she was quickly flooded with direct messages on Twitter containing violent threats and degrading and transphobic slurs based on gender identity, race, and sexual orientation. One commenter on Vox Popoli wrote that “they should pitch all those sexual freaks off of rooftops.”

That’s when it clicked: perhaps some of her coworkers’ questions had not been in good faith. “We didn’t realize that there was a dirty war going on, and weren’t aware of the tactics being used against us,” she says. The stakes soon became clear. A few days later, alt-right figurehead Milo Yiannopoulos shared an image with his 2.5 million Facebook followers featuring the Twitter bios and profile pics of eight advocates at Google, many of them trans employees.

As the internal debate raged in the wake of Damore’s memo, McMillen says that he knows of at least 10 coworkers who were called into HR for making political statements related to the document, with consequences ranging from verbal warnings to a reduced performance-review score. McMillen was told by HR not to do anything hiring or promotion related for a year. Altman got a verbal warning for writing on an internal board that certain employees should be fired. “I meant only bigoted white men should be fired. They interpreted it as applying to all white men,” Altman says.

The roots of the tension go back years. Former Google engineer Cory Altheide told WIRED that he noticed racist and other hate-filled posts on Google discussion boards before he quit in 2015. In a memo he wrote after leaving the company and circulated this month, he pointed to a post on a blog run by a Google employee that said, “Blacks are not equal to whites. Therefore the ‘inequality’ between these races is expected and makes perfect sense.” WIRED was not able to confirm the identity of the employee.

Some employees see similarities between some of the behavior inside Google and alt-right manuals for fighting advocates for social justice, such as one written by Beale that instructs readers to “Document their every word and action,” “Undermine them, sabotage them, and discredit them,” and “Make the rubble bounce” on your way out the door.

Beale says they’re right. “I know that there are a number of people there who have read [the guide], I know that they’re using it,” Beale told WIRED. He claims to have had contacts inside the company for years, and to have dozens of followers there. He says he doesn’t know whether Damore has read his guide, but that Damore is following the playbook. Damore says he has not read the manual.

from Wired Top Stories http://ift.tt/2neQ741
via IFTTT

How Baidu plans to profit from its free autonomous-car technology

Facebook’s experimental chatbot is learning to do small talk

A chatbot trained to engage its partner on personal topics can learn to predict information about the other participant.

Background: Even with AI, chatbots are brittle systems that typically can’t talk about anything outside of what they’ve been trained…
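
As a rough illustration of the idea (not Facebook’s actual model), predicting something about a chat partner can be as simple as accumulating keyword evidence from their side of the conversation. The traits, keyword lists, and threshold below are invented for the example.

```python
# Illustrative only, not Facebook's model: guessing a partner attribute
# from their side of a chat by accumulating keyword evidence. The traits
# and keyword lists are invented for the example.
EVIDENCE = {
    "has_dog":      ["dog", "puppy", "leash", "fetch"],
    "likes_hiking": ["trail", "hike", "mountain", "boots"],
}

def predict_traits(utterances, min_hits=2):
    text = " ".join(utterances).lower()
    return [trait for trait, words in EVIDENCE.items()
            if sum(text.count(w) for w in words) >= min_hits]

chat = ["took my dog out this morning", "she loves playing fetch"]
print(predict_traits(chat))  # ['has_dog']
```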

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2Gl7wA6
via IFTTT

Algorithms are making American inequality worse

William Gibson wrote that the future is here, just not evenly distributed. The phrase is usually used to point out how the rich have more access to technology, but what happens when the poor are disproportionately subject to it?

In Automating Inequality, author Virginia Eubanks argues that the poor are the testing ground for new technology that increases inequality. The book, out this week, starts with a history of American poorhouses, which dotted the landscape starting in the 1660s and were around into the 20th century. From there, Eubanks catalogues how the poor have been treated over the last hundred years, before coming to today’s system of social services that increasingly relies on algorithms.

Eubanks leaves no uncertainty as to her position on whether such automation is a good thing. Her thesis is that the punitive and moralistic view of poverty that built the poorhouses never left us, and has been wrapped into today’s automated and predictive decision-making tools. These algorithms can make it harder for people to get services while forcing them to deal with an invasive process of personal data collection. As examples, she profiles three different programs: a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.

Eubanks spoke to MIT Technology Review about when social services first became automated, her own experience with predictive algorithms, and how these flawed tools give her hope that inequality will be put into such stark relief that we will have to address how we treat our poor, once and for all.

What are the parallels between the poorhouses of the past and what you call today’s digital poorhouses?

These high-tech tools we’re seeing—I call it “the regime of data analytics”—are actually more evolution than revolution. They fit pretty well within the history of poverty policy in the United States.

When I originally started this work, I thought the moment that we’d see these digital tools really arrive in public assistance and public services might be in the 1980s, when there was a widespread uptake of personal computers, or in the 1990s when welfare reform passed. But in fact, they arose in the late 1960s and early 1970s, just as a national welfare rights movement was opening up access to public assistance.

At the same time, there was a backlash against the civil rights movement going on, and a recession. So these elected officials, bureaucrats, and administrators were in this position where the middle-class public was pushing back against the expansion of public assistance. But they could no longer use their go-to strategy of excluding people from the rolls for largely discriminatory reasons. That’s the moment that we see these technologies arrive. What you see is an incredibly rapid decline in the welfare rolls right after they’re integrated into the systems. And that collapse has continued basically until today.

Machine-learning tools will eventually replace some of the algorithms we have right now. In your research, did you come across any issues that are going to arise once we have more AI within these systems?

I don’t know that I have a direct response to it. But one thing I will say is that the Pittsburgh child services system often gets written about as if it’s AI or machine learning. And in fact, it’s actually just a simple statistical regression model.

I do think it’s really interesting, the way we tend to math-wash these systems: we have a tendency to think they’re more complicated and harder to understand than they actually are. I suspect that there’s a little bit of technological hocus-pocus that happens when these systems come online, and people often feel like they don’t understand them well enough to comment on them. But it’s just not true. I think a lot more people than are currently talking about these issues are able to, are confident enough to, and should be at the table when we talk about them.
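
To make the “it’s just a regression” point concrete, here is a toy logistic-regression risk score of the kind Eubanks describes: a plain weighted sum pushed through a logistic function. The feature names and weights are invented for illustration; this is not the Pittsburgh model.

```python
import math

# Toy logistic-regression risk score of the kind Eubanks describes.
# Feature names and weights are invented; this is NOT the Pittsburgh model.
WEIGHTS = {"prior_referrals": 0.8, "years_on_benefits": 0.3, "parent_under_25": 0.5}
BIAS = -2.0

def risk_score(case):
    """A plain weighted sum mapped to 0-1 by the logistic function."""
    z = BIAS + sum(w * case[name] for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

case = {"prior_referrals": 2, "years_on_benefits": 3, "parent_under_25": 1}
print(f"risk: {risk_score(case):.2f}")  # 0.73: nothing opaque about it
```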

You have a great quote from a woman on food stamps who tells you her caseworker looks at her purchase history. You appear surprised, so she says, “You should pay attention to what happens to us. You’re next.” Do you have examples of technologies that the general population deals with that are like this example?

I start the book by talking about a case where my partner was attacked and very badly beaten. After he had gotten some major surgery, we were told at the pharmacy when I was trying to pick up his pain meds that we no longer had health insurance. In a panic, I called my insurance company and they told me basically that we were missing a start date for our coverage.

I said, “You know, well, that’s odd because you paid claims that we made a couple of weeks ago, so we must have had a start date at that point.” And they said, “Oh, it must have just been a technical error. Somebody must have accidentally erased your start date or something.”

I was really suspicious that what was actually going on was that they had suspended our coverage while they investigated us for fraud (I had been working on these kinds of fraud detection tools for a long time by then). And we had some of the most common indicators that insurance fraud was occurring: we had only had our insurance for a couple of days before the attack, we are not married, and he had received controlled substances to help him manage his pain.

I will never know whether we were being investigated, but either way they were telling us that we owed upward of $60,000 in medical bills that had been denied because we weren’t covered when the claims went through. It caused extraordinary stress.

So these systems are actually already at work, sort of invisibly, in many of the services that we interact with on a day-to-day basis, whether we are poor, working class or professional middle class, or economic elites. But they don’t affect us all equally. My partner and I were able to endure that experience because we had resources to help us get through it, and also because it only happened to us once. It wasn’t coming from every direction. It wasn’t an overwhelming force where we’re hearing it from child protective services, and also Medicaid, and also food stamps, and also the police.

I think it can be a lot harder for folks who are dealing with many of these systems at the same time.
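
As a sketch of how mechanical the triggers she describes can be, here is a hypothetical reconstruction of rule-based fraud indicators like the ones in her insurance story. The specific rules and the two-flag threshold are invented, not any insurer’s actual system.

```python
# Hypothetical reconstruction of rule-based fraud indicators like those
# Eubanks describes; the rules and two-flag threshold are invented.
def fraud_flags(claim):
    flags = []
    if claim["days_insured_before_claim"] < 30:
        flags.append("new policy")
    if not claim["policyholders_married"]:
        flags.append("unmarried couple")
    if claim["controlled_substances_prescribed"]:
        flags.append("controlled substances")
    return flags

claim = {"days_insured_before_claim": 3,
         "policyholders_married": False,
         "controlled_substances_prescribed": True}

flags = fraud_flags(claim)
if len(flags) >= 2:
    print("route to investigation:", ", ".join(flags))
```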

Is there anything good happening because of these tools?

One of the reasons I’m optimistic is that these systems are also really incredible diagnostics. They make inequities in our country really concrete, and really evident. Where one of the systems goes spiraling out of control is a place where we have a deep inequality that needs to be addressed. And so I believe that the combination of the movement work that’s already happening now and increased attention to systems like these can really create incredible pressure to create a more just social system overall.

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2rKzNMS
via IFTTT

This drone learned to fly through streets by studying driverless-car data

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2neu1Oo
via IFTTT