When Nintendo Labo launches this April, it will come with a feature called Toy Con Garage that lets you use rudimentary programming to build and customize your own cardboard robots, Nintendo announced today. Some of the custom toys Nintendo showed off included an electric guitar and a basic game of electronic tennis.
At an event in New York this afternoon, Nintendo representatives demonstrated this Toy Con Garage, which uses simple “building blocks” to let you program your devices. They’re essentially “if-then” statements. When you open up the program, you can select from a number of blocks based on input options for your Switch’s controllers, then connect them to other blocks based on output options. For example, you can connect the left Switch controller’s up button (input) to the right Switch controller’s vibration feature (output), so whenever you press up on the left Joy-Con, the right Joy-Con will buzz.
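To make that input-to-output wiring concrete, here is a rough Python sketch of how such “if-then” blocks could be modeled. The class and names below are invented for illustration; they are not Nintendo’s actual tooling, which is assembled on the Switch’s touchscreen rather than written as code.

```python
# Hypothetical model of Toy Con Garage's "if-then" blocks: input nodes are wired
# to output nodes, and firing an input triggers every output connected to it.
class Block:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action      # what an output block does when triggered
        self.connections = []     # downstream output blocks

    def connect(self, output_block):
        self.connections.append(output_block)

    def trigger(self):
        for block in self.connections:
            block.action()

# Wire the left Joy-Con's up button to the right Joy-Con's rumble motor.
left_up = Block("left Joy-Con: up pressed")
right_rumble = Block("right Joy-Con: vibrate", action=lambda: print("bzzzt"))
left_up.connect(right_rumble)

left_up.trigger()  # pressing up on the left controller buzzes the right one
```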
This is how Nintendo Labo users will be able to expand beyond the six types of cardboard creations included in the Variety Set or the one included in the Robot Set. Instead of making a piano, you can make a guitar. Instead of making a toy car, you can build a little cardboard man who falls flat on his face. You can mix and match different programs’ functionality—using the fishing rod to play music, for example—and you can even add extra Joy-Cons to build more elaborate programs.
Nintendo would not allow attendees to take pictures or videos of Toy Con Garage, although we saw a video at the event that will likely be put on Nintendo’s YouTube channel later. The company did show off a few seconds of this feature during the original Nintendo Labo reveal a few weeks ago.
The building blocks look like what you see in that clip.
We’ll have more on Nintendo Labo, including hands-on impressions and videos, in the very near future. The wild new cardboard toys come out on April 20.
To hear Andrew Przybylski tell it, the American 2016 presidential election is what really inflamed the public’s anxiety over the seductive power of screens. (A suspicion that big companies with opaque inner workings are influencing your thoughts and actions will do that.) “Psychologists and sociologists have obviously been studying and debating about screens and their effects for years,” says Przybylski, who is himself a psychologist at the Oxford Internet Institute with more than a decade’s experience studying the impact of technology. But society’s present conversation—”chatter,” he calls it—can be traced back to three events, beginning with the political race between Hillary Clinton and Donald Trump.
Then there were the books. Well-publicized. Scary-sounding. Several, really, but two in particular. The first, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, by NYU psychologist Adam Alter, was released March 2, 2017. The second, iGen: Why Today’s Super-Connected Kids are Growing Up Less Rebellious, More Tolerant, Less Happy – and Completely Unprepared for Adulthood – and What That Means for the Rest of Us, by San Diego State University psychologist Jean Twenge, hit stores five months later.
Last came the turncoats. Former employees and executives from companies like Facebook worried openly to the media about the monsters they helped create. Tristan Harris, a former product manager at Google and founder of the nonprofit “Time Well Spent,” spoke with this publication’s editor in chief about how Apple, Google, Facebook, Snapchat, Twitter, Instagram—you know, everyone—design products to steal our time and attention.
Bring these factors together, and Przybylski says you have all the ingredients necessary for alarmism and moral panic. What you’re missing, he says, is the only thing that matters: direct evidence.
Which even Alter, the author of that first bellwether book, concedes. “There’s far too little evidence for many of the assertions people make,” he says. “I’ve become a lot more careful with what I say, because I felt the evidence was stronger when I first started speaking about it.”
“People are letting themselves get played,” says Przybylski. “It’s a bandwagon.” So I ask him: When WIRED says that technology is hijacking your brain, and the New York Times says it’s time for Apple to design a less addictive iPhone, are we part of the problem? Are we all getting duped?
“Yeah, you are,” he says. “You absolutely are.”
Of course, we’ve been here before. Anxieties over technology’s impact on society are as old as society itself; video games, television, radio, the telegraph, even the written word—they were all, at one time, scapegoats or harbingers of humanity’s cognitive, creative, emotional, and cultural dissolution. But the apprehension over smartphones, apps, and seductive algorithms is different. So different, in fact, that our treatment of past technologies fails to be instructive.
A better analogy is our modern love-hate relationship with food. When grappling with the promises and pitfalls of our digital devices, it helps to understand the similarities between our technological diets and our literal ones.
Today’s technology is always with you; a necessary condition, increasingly, of existence itself. These are some of the considerations that led MIT sociologist Sherry Turkle to suggest avoiding the metaphor of addiction, when discussing technology. “To combat addiction, you have to discard the addicting substance,” Turkle wrote in her 2011 book Alone Together: Why We Expect More from Technology and Less from Each Other. “But we are not going to ‘get rid’ of the Internet. We will not go ‘cold turkey’ or forbid cell phones to our children. We are not going to stop the music or go back to the television as the family hearth.”
Food addicts—who speak of having to take the “tiger of addiction” out of the cage for a walk three times a day—might take issue with Turkle’s characterization of dependence. But her observation, and the food addict’s plight, speak volumes about our complicated relationships with our devices and the current state of research.
People from all backgrounds use technology—and no two people use it exactly the same way. “What that means in practice is that it’s really hard to do purely observational research into the effects of something like screen time, or social media use,” says MIT social scientist Dean Eckles, who studies how interactive technologies impact society’s thoughts and behaviors. You can’t just divide participants into, say, those with phones and those without. Instead, researchers have to compare behaviors between participants while accounting for variables like income, race, and parental education.
Say, for example, you’re trying to understand the impact of social media on adolescents, as Jean Twenge, author of the iGen book, has. When Twenge and her colleagues analyzed data from two nationally representative surveys of hundreds of thousands of kids, they calculated that social media exposure could explain 0.36 percent of the covariance for depressive symptoms in girls.
But those results didn’t hold for the boys in the dataset. What’s more, that 0.36 percent means that 99.64 percent of the group’s depressive symptoms had nothing to do with social media use. Przybylski puts it another way: “I have the data set they used open in front of me, and I submit to you that, based on that same data set, eating potatoes has the exact same negative effect on depression. That the negative impact of listening to music is 31 times larger than the effect of social media.”
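For a sense of scale, here is a short Python sketch, using synthetic data rather than the survey Twenge analyzed, in which a correlation of roughly the same size explains about a third of a percent of the variation in the outcome:

```python
# Synthetic illustration only: a tiny but real correlation can be detectable in a
# huge sample while explaining well under 1 percent of the variance.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # survey-sized sample
social_media = rng.normal(size=n)             # standardized hours of use
# the outcome is driven almost entirely by everything else (modeled as noise)
depression = 0.06 * social_media + rng.normal(size=n)

r = np.corrcoef(social_media, depression)[0, 1]
print(f"correlation r = {r:.3f}")
print(f"variance explained (r^2) = {100 * r ** 2:.2f}%")   # on the order of 0.3-0.4%
```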
In datasets as large as these, it’s easy for weak correlational signals to emerge from the noise. And a correlation tells us nothing about whether new-media screen time actually causes sadness or depression. Which are the same problems scientists confront in nutritional research, much of which is based on similarly large, observational work. If a population develops diabetes but surveys show they’re eating sugar, drinking alcohol, sipping out of BPA-laden straws, and consuming calories to excess, which dietary variable is to blame? It could just as easily be none or all of the above.
Decades ago, those kinds of correlational nutrition findings led people to demonize fat, pinning it as the root cause of obesity and chronic illness in the US. Tens of millions of Americans abolished it from their diets. It’s taken a generation for the research to boomerang back and rectify the whole baby-bathwater mistake. We risk similar consequences, as this new era of digital nutrition research gets underway.
Fortunately, lessons learned from the rehabilitation of nutrition research can point a way forward. In 2012, science journalist Gary Taubes and physician-researcher Peter Attia launched a multimillion-dollar undertaking to reinvent the field. They wanted to lay a new epistemological foundation for nutrition research, investing the time and money to conduct trials that could rigorously establish the root causes of obesity and its related diseases. They called their project the Nutrition Science Initiative.
Today, research on the link between technology and wellbeing, attention, and addiction finds itself in need of similar initiatives. The field needs randomized controlled trials that can move beyond correlation and test whether the architecture of our interfaces actually causes the harms attributed to it, along with funding for long-term, rigorously performed research. “What causes what? Is it that screen time leads to unhappiness or unhappiness leads to screen time?” says Twenge. “So that’s where longitudinal studies come in.” Strategies from the nascent Open Science Framework—like the pre-registration of studies, and data sharing—could help, too.
But more than any of that, researchers will need buy-in from the companies that control that data. Ours is a time of intense informational asymmetry; the people best equipped to study what’s happening—the people who very likely are studying what’s happening—are behind closed doors. Achieving balance will require openness and objectivity from those who hold the data; clear-headed analysis from those who study it; and measured consideration by the rest of us.
“Don’t get me wrong, I’m concerned about the effects of technology. That’s why I spend so much of my time trying to do the science well,” Przybylski says. He’s working to develop a proposal system through which scientists could apply to conduct specific, carefully designed studies using proprietary data from the major platforms. Proposals would be assessed by independent reviewers outside the control of companies like Facebook. If an investigation showed the potential to answer an important question in a discipline, or about a platform, the researchers outside the company would be paired with the ones inside.
“If it’s team based, collaborative, and transparent, it’s got half a chance in hell of working,” Przybylski says.
And if we can avoid the same mistakes that led us to banish fat from our food, we stand a decent chance of keeping our technological diets balanced and healthy.
Your Technology and You
WIRED’s editor-in-chief Nick Thompson spoke with Tristan Harris, the prophet behind the “Time Well Spent” movement, which argues our minds are being hijacked by the technology we use.
One writer takes us through his extreme digital detox, a retreat that took him offline for a whole month.
Technology is demonized for making us distractible, but the right tech can help us form new, better digital habits—like these ones.
Depending on who you ask, blockchains are either the most important technological innovation since the internet or a solution looking for a problem.
The original blockchain is the decentralized ledger behind the digital currency bitcoin. The ledger consists of linked batches of transactions known as blocks (hence the term blockchain), and an identical copy is stored on each of the roughly 200,000 computers that make up the bitcoin network. Each change to the ledger is cryptographically signed to prove that the person transferring virtual coins is the actual owner of those coins. But no one can spend their coins twice, because once a transaction is recorded in the ledger, every node in the network will know about it.
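As a rough illustration of that linked-ledger idea, the Python sketch below hashes each block into the next one, so changing an old transaction breaks every later link. It is deliberately minimal and leaves out digital signatures, mining, and the peer-to-peer network that real bitcoin relies on.

```python
# Minimal linked ledger: each block stores the hash of the previous block.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True

chain[0]["transactions"][0]["amount"] = 500   # tamper with history
print(verify(chain))                          # False: the links no longer match
```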
Who paved the way for blockchains?
DigiCash (1989)
DigiCash was founded by David Chaum to create a digital-currency system that enabled users to make untraceable, anonymous transactions. It was perhaps too early for its time. It went bankrupt in 1998, just as ecommerce was finally taking off.
E-Gold (1996)
E-gold was a digital currency backed by real gold. The company was plagued by legal troubles, and its founder Douglas Jackson eventually pled guilty to operating an illegal money-transfer service and conspiracy to commit money laundering.
B-Money and Bit-Gold (1998)
Cryptographers Wei Dai (B-money) and Nick Szabo (Bit-gold) each proposed separate but similar decentralized currency systems with a limited supply of digital money issued to people who devoted computing resources.
Ripple Pay (2004)
Now a cryptocurrency, Ripple started out as a system for exchanging digital IOUs between trusted parties.
Reusable Proofs of Work (RPOW) (2004)
RPOW was a prototype of a system that issued tokens in exchange for computationally intensive work; the tokens could then be traded with others. It was inspired in part by Bit-gold and created by bitcoin’s second user, Hal Finney.
The idea is to both keep track of how each unit of the virtual currency is spent and prevent unauthorized changes to the ledger. The upshot: No bitcoin user has to trust anyone else, because no one can cheat the system.
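A toy version of that bookkeeping, sketched in Python below, replays the ledger and rejects any transfer of coins the sender no longer holds. The names and balances are invented, and real bitcoin tracks unspent transaction outputs rather than simple running balances.

```python
# Toy double-spend check: walk the ledger in order and keep running balances.
from collections import defaultdict

def apply_ledger(transactions, initial_balances):
    balances = defaultdict(int, initial_balances)
    for tx in transactions:
        if balances[tx["from"]] < tx["amount"]:
            raise ValueError(f"rejected: {tx['from']} is trying to double-spend")
        balances[tx["from"]] -= tx["amount"]
        balances[tx["to"]] += tx["amount"]
    return dict(balances)

ledger = [
    {"from": "alice", "to": "bob", "amount": 5},
    {"from": "alice", "to": "carol", "amount": 5},   # alice only ever had 5
]
try:
    print(apply_ledger(ledger, {"alice": 5}))
except ValueError as err:
    print(err)   # rejected: alice is trying to double-spend
```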
Other digital currencies have imitated this basic idea, often trying to solve perceived problems with bitcoin by building new cryptocurrencies on new blockchains. But advocates have seized on the idea of a decentralized, cryptographically secure database for uses beyond currency. Its biggest boosters believe blockchains can not only replace central banks but usher in a new era of online services outside the control of internet giants like Facebook and Google. These new-age apps would be impossible to censor, advocates say, and would be more answerable to users.
Several companies are already taking advantage of the Ethereum platform, initially built for a virtual currency. The startup Storj offers a file-storage service, banking on the idea that distributing files across a decentralized network is safer than putting all your files in one cabinet.
Meanwhile, despite the fact that bitcoin was originally best known for enabling illicit drug sales over the internet, blockchains are finding acceptance in some of the world’s largest companies. Some big financial services companies, including JP Morgan and the Depository Trust & Clearing Corporation, are experimenting with blockchains and blockchain-like technologies to improve the efficiency of trading stocks and other assets. Traders buy and sell stocks rapidly, but the behind-the-scenes process of transferring ownership of those assets can take days. Some technologists believe blockchains could help with that.
There are also potential applications for blockchains in the seemingly boring world of corporate compliance. After all, storing records in an immutable ledger is a pretty good way to assure auditors that those records haven’t been tampered with.
It’s too early to say which experiments will work out or whether the results of successful experiments will resemble the bitcoin blockchain. But the idea of creating tamper-proof databases has captured the attention of everyone from anarchist techies to staid bankers.
The First Blockchain
The original bitcoin software was released to the public in January 2009. It was open source software, meaning anyone could examine the code and reuse it. And many have. At first, blockchain enthusiasts sought to simply improve on bitcoin. Litecoin, another virtual currency based on the bitcoin software, seeks to offer faster transactions.
One of the first projects to repurpose the bitcoin code to use it for more than currency was Namecoin, a system for registering “.bit” domain names. The traditional domain-name management system—the one that helps your computer find our website when you type wired.com—depends on a central database, essentially an address book for the internet. Internet-freedom activists have long worried that this traditional approach makes censorship too easy, because governments can seize a domain name by forcing the company responsible for registering it to change the central database. The US government has done this several times to shut sites accused of violating gambling or intellectual-property laws.
Namecoin tries to solve this problem by storing .bit domain registrations in a blockchain, which theoretically makes it impossible for anyone without the encryption key to change the registration information. To seize a .bit domain name, a government would have to find the person responsible for the site and force them to hand over the key.
What’s an “ICO”?
Ethereum and other blockchain-based projects have raised funds through a controversial practice called an “initial coin offering,” or ICO: The creators of new digital currencies sell a certain amount of the currency, usually before they’ve finished the software and technology that underpins it. The idea is that investors can get in early while giving developers the funds to finish the tech. The catch is that these offerings have traditionally operated outside the regulatory framework meant to protect investors, although that’s starting to change as more governments examine the practice.
Bitcoin’s software wasn’t designed to handle other types of applications. In 2013, a startup called Ethereum published a paper outlining an idea that promised to make it easier for coders to create their own blockchain-based software without having to start from scratch or rely on the original bitcoin software. In 2015 the company released its platform for building “smart contracts,” software applications that can enforce an agreement without human intervention. For example, you could create a smart contract to bet on tomorrow’s weather. You and your gambling partner would upload the contract to the Ethereum network and then send a little digital currency, which the software would essentially hold in escrow. The next day, the software would check the weather and then send the winner their earnings. At least two major “prediction markets” have been built on the platform, enabling people to bet on more interesting outcomes, such as which political party will win an election.
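The weather bet boils down to escrow-then-payout logic, sketched below in Python. Real Ethereum contracts are written in languages such as Solidity and executed by the network itself; this class is only a conceptual model, with invented names and stakes.

```python
# Conceptual model of the weather-bet contract: hold both stakes in escrow,
# then pay the winner automatically once the agreed condition is checked.
class WeatherBet:
    def __init__(self, alice_stake, bob_stake):
        self.escrow = alice_stake + bob_stake   # funds held by the contract
        self.settled = False

    def settle(self, it_rained):
        if self.settled:
            raise RuntimeError("already settled")
        self.settled = True
        winner = "alice" if it_rained else "bob"
        return winner, self.escrow              # no human intermediary decides

bet = WeatherBet(alice_stake=10, bob_stake=10)
print(bet.settle(it_rained=True))                # ('alice', 20)
```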
So long as the software is written correctly, there’s no need to trust anyone in these transactions. But that turns out to be a big catch. In 2016, a coding error in a democratized investment scheme, in which investors pooled their money and voted on how to invest it, allowed a still unknown person to make off with about $50 million worth of Ethereum’s custom currency. Lesson: It’s hard to remove humans from transactions, with or without a blockchain.
Even as cryptography geeks plotted to use blockchains to topple, or at least bypass, big banks, the financial sector began its own experiments with blockchains. In 2015, some of the largest financial institutions in the world, including JP Morgan, the Bank of England, and the Depository Trust & Clearing Corporation (DTCC), announced that they would collaborate on open source blockchain software under the name Hyperledger. Several pieces of software have been released under the Hyperledger umbrella, including Sawtooth, created by Intel for building custom blockchains.
The industry is already experimenting with using blockchains to make security trades more efficient. Nasdaq OMX, the company behind the Nasdaq stock exchange, began allowing private companies to use blockchains to manage shares in 2015, starting with a company called Chain. Similarly, the Australian Securities Exchange announced a deal to use blockchain technology from a Goldman Sachs-backed startup called Digital Asset Holdings to power the post-trade processes of Australia’s equity market.
The Future of Blockchain
Despite the blockchain hype—and many experiments—there’s still no “killer app” for the technology beyond currency speculation. And while auditors might like the idea of immutable records, as a society we don’t always want records to be permanent.
Blockchain proponents admit that it could take a while for the technology to catch on. After all, the internet’s foundational technologies were created in the 1960s, but it took decades for the internet to become ubiquitous.
That said, the idea could eventually show up in lots of places. For example, your digital identity could be tied to a token on a blockchain. You could then use that token to log in to apps, open bank accounts, apply for jobs, or prove that your emails or social-media messages are really from you. Future social networks might be built on connected smart contracts that show your posts only to certain people or enable people who create popular content to be paid in cryptocurrencies. Perhaps the most radical idea is using blockchains to handle voting. The team behind the open source project Sovereign built a platform that organizations, companies, and even governments can already use to gather votes on a blockchain.
Advocates believe blockchains can help automate many tasks now handled by lawyers or other professionals. For example, your will might be stored in a blockchain. Or perhaps your will could be a smart contract that will automatically dole out your money to your heirs. Or maybe blockchains will replace notaries.
It’s also entirely possible that blockchains will evolve into something completely different. Many of the financial industry’s experiments involve “private” blockchains that run on servers within a single company and selected partners. In contrast, anyone can run bitcoin or Ethereum software on their computer and view all of the transactions recorded on the networks’ respective blockchains. But big companies prefer to keep their data in the hands of a few employees, partners, and perhaps regulators.
Bitcoin proved that it’s possible to build an online service that operates outside the control of any one company or organization. The task for blockchain advocates now is proving that that’s actually a good thing.
Learn More
This guide was last updated on January 31, 2018.
Artificial intelligence is overhyped—there, we said it. It’s also incredibly important.
Superintelligent algorithms aren’t about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It’s why you can talk to your friends as an animated poop on the iPhone X using Apple’s Animoji, or ask your smart speaker to order more paper towels.
Tech companies’ heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.
The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
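As a minimal example of what “training on examples” means, the Python sketch below fits an off-the-shelf scikit-learn classifier to synthetic data instead of hand-coding rules. The dataset and model are arbitrary; the point is that the behavior is learned from labeled examples and then checked on examples the model has never seen.

```python
# Learn a classification rule from labeled examples rather than writing it by hand.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # "training" on examples
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```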
For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person’s retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.
There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.
The Beginnings of Artificial Intelligence
Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”
Moments that Shaped AI
1956
The Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.
1965
Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.
1975
Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.
1987
A Mercedes van fitted with two cameras and a bunch of computers drives itself 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.
2004
The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.
2012
Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.
Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.
Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.
As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.
Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
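Here is a bare-bones NumPy sketch of that process: data flows forward through layers of weighted connections, and each training pass nudges the weights to shrink the error on the examples. It is a toy two-layer network learning the XOR function, nothing like a production deep-learning system.

```python
# A tiny neural network trained by repeatedly adjusting its connection weights.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR labels

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    hidden = sigmoid(X @ W1)           # forward pass through the "web of math"
    output = sigmoid(hidden @ W2)
    err = output - y
    # backward pass: adjust each connection in proportion to its share of the error
    grad_out = err * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(output.round(2).ravel())         # should approach [0, 1, 1, 0]
```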
Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.
Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.
In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.
The Future of Artificial Intelligence
Even if progress on making artificial intelligence smarter stops tomorrow, don’t expect to stop hearing about how it’s changing the world.
Big tech companies such as Google, Microsoft, and Amazon have amassed strong rosters of AI talent and impressive arrays of computers to bolster their core businesses of targeting ads or anticipating your next purchase.
They’ve also begun trying to make money by inviting others to run AI projects on their networks, which will help propel advances in areas such as health care or national security. Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects will also accelerate the spread of AI into other industries.
Your AI Decoder Ring
Artificial intelligence
The development of computers capable of tasks that typically require human intelligence.
Machine learning
Using example data or experience to refine how computers make predictions or perform a task.
Deep learning
A machine learning technique in which data is filtered through self-adjusting networks of math loosely inspired by neurons in the brain.
Supervised learning
Showing software labeled example data, such as photographs, to teach a computer what to do.
Unsupervised learning
Learning without annotated examples, just from experience of data or the world—trivial for humans but not generally practical for machines. Yet.
Reinforcement learning
Software that experiments with different actions to figure out how to maximize a virtual reward, such as scoring points in a game.
Artificial general intelligence
As yet nonexistent software that displays a humanlike ability to adapt to different environments and tasks, and transfer knowledge between them.
Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon in particular are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.
The commercial possibilities make this a great time to be an AI researcher. Labs investigating how to make smarter machines are more numerous and better-funded than ever. And there’s plenty to work on: Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can’t do, such as understanding the nuances of language, common-sense reasoning, and learning a new skill from just one or two examples. AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.
As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Facebook have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. For us to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.
Learn More
What The AI Behind AlphaGo Can Teach Us About Being Human Drama, emotion, server racks, and existential questions. Find them all in our on-the-scene account from the triumph of Google’s Go-playing bot over top player Lee Sedol in South Korea.
John McCarthy, Father Of AI And Lisp, Dies At 84 WIRED’s 2011 obituary of the man who coined the term artificial intelligence gives a sense of the origins of the field. McCarthy’s lasting, and unfulfilled, dream of making machines as smart as humans still entrances many people working on AI today.
Are We Ready for Intimacy With Androids? People have always put themselves into their technological creations—but what happens when those artificial creations look and act just like people? Hiroshi Ishiguro builds androids on a quest to reverse engineer how humans form relationships. His progress may provide a preview of issues we’ll encounter as AI and robotics evolve.
When it Comes to Gorillas, Google Photos Remains Blind The limitations of AI systems can be as important as their capabilities. Despite improvements in image recognition over recent years, WIRED found Google still doesn’t trust its algorithms not to mix up apes and black people.
Artificial Intelligence Seeks an Ethical Conscience As companies and governments rush to embrace ever-more powerful AI, researchers have begun to ponder ethical and moral questions about the systems they build, and how they’re put to use.
A ‘Neurographer’ Puts The Art In Artificial Intelligence Some artists are repurposing the AI techniques tech companies use to process images into a new creative tool. Mario Klingemann’s haunting images, for example, have been compared to the paintings of Francis Bacon.
Bitcoin is a digital currency. Like other currencies, you can use it to buy things from merchants that accept it, such as Overstock.com, or, as is more often the case, hold on to it in hopes that it will increase in value. Unlike traditional currencies, which rely on governments and central banks, no single entity controls bitcoin. Rather, it is supervised by a worldwide network of volunteers who maintain computers running specialized software. As long as people run bitcoin software, the currency will keep working, because everything needed to keep it working is stored in a distributed ledger called the blockchain. And even though it’s all digital, bitcoin is scarce.
Its most wild-eyed proponents believe bitcoin’s decentralized, cryptographic approach to currency can yield a host of benefits: limiting central bankers’ ability to damage economies by printing too much money; eliminating credit-card fraud; bringing the unbanked masses into the modern economy; giving people in unstable economies a safe place to park their money; and making it cheap and easy to transfer funds. But bitcoin has yet to realize these goals, and critics argue it may never live up to the hype.
When you send or receive bitcoin, your bitcoin software, referred to as a “wallet,” records the transaction in the blockchain. The blockchain is maintained by, and distributed across, the roughly 200,000 computers running bitcoin software. If someone tries to alter the ledger to make it look like they have more bitcoin than they’re supposed to, the tampering will be apparent because it won’t match the other copies of the blockchain.
People who commit the computing resources to processing bitcoin transactions are paid in bitcoin, but only if the computers they operate are first to complete complex cryptographic puzzles in a process called “mining.” New bitcoins are created automatically by the software and awarded to the winners of the race to solve these puzzles. As of February 2018, that award is 12.5 bitcoins. By design, only 21 million bitcoins will ever be created. Those who process transactions can also collect fees; the fees are optional and set by the person who initiates a transaction. The larger the fee, the faster the transaction will likely be completed. This system keeps bitcoin scarce while rewarding people for investing in the infrastructure required to keep a global payment-processing system running. But the mining process comes with a big catch: It uses an enormous amount of electricity.
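The puzzle itself amounts to brute-force hashing, which the simplified Python sketch below imitates: keep changing a nonce until the block’s hash falls below a target. Real mining follows the same principle with a vastly harder target, specialized hardware, and bitcoin’s actual block format; the difficulty and transaction data here are made up for illustration.

```python
# Simplified proof-of-work: search for a nonce whose hash lands under the target.
import hashlib

def mine(block_data, difficulty_bits=18):
    target = 2 ** (256 - difficulty_bits)     # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 1 BTC")
print(f"found nonce {nonce}, hash {digest[:16]}...")   # the winner earns the block reward
```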
Adoption of the cryptocurrency has been hobbled by a series of scandals, high-tech heists, and disputes over the software’s design, all of which illustrate why financial regulations were created in the first place. The bitcoin community has solved some mind-boggling technological problems. But making bitcoin a true replacement for, or even adjunct to, the global financial system requires more than just great tech.
The History of Bitcoin
On Halloween 2008, someone using the name Satoshi Nakamoto sent an email to a cryptography mailing list with a link to an academic paper about peer-to-peer currency. It didn’t make much of a splash. Nakamoto was unknown in cryptography circles, and other cryptographers had proposed similar schemes before. Two months later, however, Nakamoto announced the first release of bitcoin software, proving it was more than just an idea. Anyone could download the software and start using it. And people did.
In the early days, bitcoin was used almost exclusively by cryptography geeks. A bitcoin sold for less than a penny. But the idea slowly caught on. Bitcoin emerged in the aftermath of the 2008 financial crisis when some people—especially free-market libertarians—worried the Federal Reserve’s attempts to increase the money supply would lead to runaway inflation.
Nakamoto disappeared from the internet before bitcoin attracted much mainstream attention. He handed control of the project to an early contributor named Gavin Andresen in December 2010 and quit posting to the public bitcoin forum. To this day, Nakamoto’s identity remains a mystery.
No one knows who the creator of bitcoin really is. These are a few of the suspects.
The value of a bitcoin first hit $1 shortly after this transition, in February 2011. Then the price jumped to $29.60 in June 2011 after a Gawker story about the now-defunct black-market site Silk Road, where users could use bitcoin to pay for illegal drugs. But the price fell again after Mt. Gox, the most popular site at the time for buying bitcoin with traditional currency and storing them online, was hacked and temporarily went offline.
The price fluctuated over the next few years, soaring after a financial crisis in Cyprus in 2013, and sinking after Mt. Gox went bankrupt in 2014. But the overall trajectory was up. By January 2017, bitcoin was trading at nearly $1,000. The price soared in 2017, reaching an all-time high of nearly $20,000 in December. The reasons for this rally are unclear, but it seems to have been driven by a mixture of wild speculation and regulatory changes (the US approved trading bitcoin futures on major exchanges in December).
Bitcoin’s price surged despite discord among its adherents over the currency’s future. Many prominent members of the bitcoin community, including Andresen, who handed control of the software to Dutch coder Wladimir van der Laan in 2014, believe bitcoin transactions are too slow and too expensive. Although transaction fees are optional, failing to include a high enough fee could mean your transaction won’t be processed for hours or days. In December 2017, transaction fees averaged $20 to $30, according to the site BitInfoCharts. That makes bitcoin impractical for many daily transactions, such as buying lunch.
Developers have proposed technical solutions for this problem. But the plan favored by Andresen and company would require bitcoin users to switch to a new version of the software, and so far miners have been reluctant to do so. That’s led to the creation of several alternate versions of the bitcoin software, known as “hard forks,” each competing to lure both miners and users away from the official version. Some, like Bitcoin Cash, have attracted miners and investors, but none is close to displacing the original. Meanwhile, many other “cryptocurrencies” have emerged, borrowing heavily from the core ideas behind bitcoin but with many differences (see The WIRED Guide to Blockchain).
What’s Next for Bitcoin
The future of bitcoin depends on three major questions. First, whether any of the hard forks or the hundreds of competing cryptocurrencies will supplant it, and, if so, when. Second, whether the sky-high valuations can last. And third, whether bitcoins will ever be used as currency for day-to-day transactions. The answer to the third question hinges in large part on the first two.
One thing holding bitcoin back as a currency is the expense and time lag involved in processing transactions. Emin Gun Sirer, a professor and cryptography researcher at Cornell University, estimates that the bitcoin network typically processes a little more than three transactions per second. By comparison, the Visa credit-card network processes around 3,674 transactions per second. Worse, bitcoin transaction confirmations can take hours or even days.
The First Real-World Bitcoin Transaction
There were few places to spend bitcoin during its early years, before the black markets that made the currency famous emerged. The first time someone actually used bitcoin to buy something is widely considered to have been May 22, 2010. Programmer Laszlo Hanyecz paid 10,000 bitcoin (worth around $41 at the time) to have two pizzas delivered to his house. Those 10,000 bitcoin are worth millions now. “I don’t feel bad about it,” Hanyecz told WIRED in 2011, when the coins would have sold for $272,329. “The pizza was really good.”
In addition to the hard forks of bitcoin, there are now countless alternative cryptocurrencies, sometimes called “alt-coins,” that aim to solve some of bitcoin’s shortcomings. Litecoin, for example, is designed to process transactions more quickly than bitcoin, while Monero focuses on creating a more private alternative. None trade for as much as bitcoin, but several sell for hundreds of dollars.
If one of the bitcoin variants or alternatives can solve its main problems, and win over users and miners, that currency would become much more suitable for day-to-day use. It’s also possible that the developers behind the official version of bitcoin will find a way to make the network cheaper and faster while maintaining compatibility with old versions of the software. The maintainers of the original bitcoin software platform are working on a solution called the “Lightning Network” that would shift many transactions to “private channels,” to boost speed and reduce costs.
And then there’s the environmental impact. Critics argue that mining bitcoin is an enormous waste of electricity, because the coins themselves have no intrinsic value.
Even if the technical issues of cost and performance are solved, there’s still the question of volatility. Businesses and consumers can exchange dollars for goods and services with the confidence that those dollars will be worth the same amount in three weeks when the rent is due. But bitcoin has proven far more volatile than most other assets, according to a study conducted by the bitcoin wallet company Coinbase. For example, on November 29, 2017, bitcoin surged from just under $10,000 to well over $11,000 before sinking back to about where it started the day.
The founders of Coinbase have argued that derivative markets could help users cope with the volatility by allowing participants to essentially buy insurance that pays out if the price of bitcoin drops. That might not reduce the volatility, but it might reduce the risk of accepting bitcoin as payment. In 2017, US regulators cleared the Chicago Mercantile Exchange and the Chicago Board Options Futures Exchange, the world’s largest derivatives exchanges, to offer bitcoin futures. But it’s too early to tell if it will make bitcoin more acceptable to retailers.
Bitcoin has come an enormous way since its origins as a paper by a pseudonymous author. But it still has a long way to go to fulfill its creator’s dream.
Bitcoin Is Splitting in Two. Now What? A deeper dive on why some bitcoin community leaders want to switch to new, more efficient, versions of the software, and their struggle to win over miners and users.
The Hard Math Behind Bitcoin’s Global Warming Problem A 2017 report estimated that bitcoin uses more electricity than the country of Serbia. Here we explain how bitcoin mining works, why it uses so much energy, and why that’s so hard to change.
The Rise and Fall of Silk Road, part 1 and part 2 Bitcoin isn’t always, or even primarily, used for shady purposes. But the online, illegal drug marketplace Silk Road is what put it on the map.
The Inside Story of Mt. Gox, Bitcoin’s $460 Million Disaster Mt. Gox’s bankruptcy caused the first major bitcoin crash and served as a hard reminder that banks are regulated and insured for a reason. This is the Mt. Gox story, from its beginnings as a planned Magic: The Gathering card-trading site to its emergence as the biggest bitcoin trading platform to its downfall.
‘I Forgot My PIN’: An Epic Tale of Losing $30,000 in Bitcoin Mark Frauenfelder forgot the PIN for his digital bitcoin wallet. The story of recovering his $30,000 worth of cryptocurrency illustrates the perils of a decentralized network where no one can reset your passwords.
In early 2014, Srikanth Thirumalai met with Amazon CEO Jeff Bezos. Thirumalai, a computer scientist who’d left IBM in 2005 to head Amazon’s recommendations team, had come to propose a sweeping new plan for incorporating the latest advances in artificial intelligence into his division.
He arrived armed with a “six-pager.” Bezos had long ago decreed that products and services proposed to him must be limited to that length, and include a speculative press release describing the finished product, service, or initiative. Now Bezos was leaning on his deputies to transform the company into an AI powerhouse. Amazon’s product recommendations had been infused with AI since the company’s very early days, as had areas as disparate as its shipping schedules and the robots zipping around its warehouses. But in recent years, there has been a revolution in the field; machine learning has become much more effective, especially in a supercharged form known as deep learning. It has led to dramatic gains in computer vision, speech, and natural language processing.
In the early part of this decade, Amazon had yet to significantly tap these advances, but it recognized the need was urgent. This era’s most critical competition would be in AI—Google, Facebook, Apple, and Microsoft were betting their companies on it—and Amazon was falling behind. “We went out to every [team] leader, to basically say, ‘How can you use these techniques and embed them into your own businesses?’” says David Limp, Amazon’s VP of devices and services.
Thirumalai took that to heart, and came to Bezos for his annual planning meeting with ideas on how to be more aggressive in machine learning. But he felt it might be too risky to wholly rebuild the existing system, fine-tuned over 20 years, with machine-learning techniques that worked best in the unrelated domains of image and voice recognition. “No one had really applied deep learning to the recommendations problem and blown us away with amazingly better results,” he says. “So it required a leap of faith on our part.” Thirumalai wasn’t quite ready—but Bezos wanted more. So Thirumalai shared his edgier option of using deep learning to revamp the way recommendations worked. It would require skills that his team didn’t possess, tools that hadn’t been created, and algorithms that no one had thought of yet. Bezos loved it (though it isn’t clear whether he greeted it with his trademark hyena-esque laugh), so Thirumalai rewrote his press release and went to work.
Thirumalai was only one of a procession of company leaders who trekked to Bezos a few years ago with six-pagers in hand. The ideas they proposed involved completely different products with different sets of customers. But each essentially envisioned a variation of Thirumalai’s approach: transforming part of Amazon with advanced machine learning. Some of them involved rethinking current projects, like the company’s robotics efforts and its huge data-center business, Amazon Web Services (AWS). Others would create entirely new businesses, like a voice-based home appliance that would become the Echo.
The results have had an impact far beyond the individual projects. Thirumalai says that at the time of his meeting, Amazon’s AI talent was segregated into isolated pockets. “We would talk, we would have conversations, but we wouldn’t share a lot of artifacts with each other because the lessons were not easily or directly transferable,” he says. They were AI islands in a vast engineering ocean. The push to overhaul the company with machine learning changed that.
While each of those six-pagers hewed to Amazon’s religion of “single-threaded” teams—meaning that only one group “owns” the technology it uses—people started to collaborate across projects. In-house scientists took on hard problems and shared their solutions with other groups. Across the company, AI islands became connected. As Amazon’s ambition for its AI projects grew, the complexity of its challenges became a magnet for top talent, especially those who wanted to see the immediate impact of their work. This compensated for Amazon’s aversion to conducting pure research; the company culture demanded that innovations come solely in the context of serving its customers.
Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.
It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.
“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”
Maybe the force.
The Alexa Effect
The flagship product of Amazon’s push into AI is its breakaway smart speaker, the Echo, and the Alexa voice platform that powers it. These projects also sprang from a six-pager, delivered to Bezos in 2011 for an annual planning process called Operational Plan One. One person involved was an executive named Al Lindsay, an Amazonian since 2004, who had been asked to move from his post heading the Prime tech team to help with something totally new. “A low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice—you speak to it, it speaks to you,” is how he recalls the vision being described to him.
But building that system—literally an attempt to realize a piece of science fiction, the chatty computer from Star Trek—required a level of artificial intelligence prowess that the company did not have on hand. Worse, of the very few experts who could build such a system, even fewer wanted to work for Amazon. Google and Facebook were snapping up the top talent in the field. “We were the underdog,” Lindsay, who is now a VP, says.
“Amazon had a bit of a bad image, not friendly to people who were research oriented,” says Domingos, the University of Washington professor. The company’s relentless focus on the customer, and its culture of scrappiness, did not jibe with the pace of academia or cushy perks of competitors. “At Google you’re pampered,” Domingos says. “At Amazon you set up your computer from parts in the closet.” Worse, Amazon had a reputation as a place where innovative work was kept under corporate wraps. In 2014, one of the top machine-learning specialists, Yann LeCun, gave a guest lecture to Amazon’s scientists in an internal gathering. Between the time he was invited and the event itself, LeCun accepted a job to lead Facebook’s research effort, but he came anyway. As he describes it now, he gave his talk in an auditorium of about 600 people and then was ushered into a conference room where small groups came in one by one and posed questions to him. But when he asked questions of them, they were unresponsive. This turned off LeCun, who had chosen Facebook in part because it agreed to open-source much of the work of its AI team.
Because Amazon didn’t have the talent in-house, it used its deep pockets to buy companies with expertise. “In the early days of Alexa, we bought many companies,” Limp says. In September 2011, it snapped up Yap, a speech-to-text company with expertise in translating the spoken word into written language. In January 2012, Amazon bought Evi, a Cambridge, UK, AI company whose software could respond to spoken requests like Siri does. And in January 2013, it bought Ivona, a Polish company specializing in text-to-speech, which provided technology that enabled Echo to talk.
But Amazon’s culture of secrecy hampered its efforts to attract top talent from academia. One potential recruit was Alex Smola, a superstar in the field who had worked at Yahoo and Google. “He is literally one of the godfathers of deep learning,” says Matt Wood, the general manager of deep learning and AI at Amazon Web Services. (Google Scholar lists more than 90,000 citations of Smola’s work.) Amazon execs wouldn’t even reveal to him or other candidates what they would be working on. Smola rejected the offer, choosing instead to head a lab at Carnegie Mellon.
“Even until right before we launched there was a headwind,” Lindsay says. “They would say, ‘Why would I want to work at Amazon—I’m not interested in selling people products!’”
Amazon did have one thing going for it. Since the company works backward from an imagined final product (thus the fanciful press releases), the blueprints can include features that haven’t been invented yet. Such hard problems are irresistible to ambitious scientists. The voice effort in particular demanded a level of conversational AI—nailing the “wake word” (“Hey Alexa!”), hearing and interpreting commands, delivering non-absurd answers—that did not exist.
That project, even without the specifics on what Amazon was building, helped attract Rohit Prasad, a respected speech-recognition scientist at Boston-based tech contractor Raytheon BBN. (It helped that Amazon let him build a team in his hometown.) He saw Amazon’s lack of expertise as a feature, not a bug. “It was green fields here,” he says. “Google and Microsoft had been working on speech for years. At Amazon we could build from scratch and solve hard problems.” As soon as he joined in 2013, he was sent to the Alexa project. “The device existed in terms of the hardware, but it was very early in speech,” he says.
The trickiest part of the Echo—the problem that forced Amazon to break new ground and in the process lift its machine-learning game in general—was something called far field speech recognition. It involves interpreting voice commands spoken some distance from the microphones, even when they are polluted with ambient noise or other aural detritus. One challenging factor was that the device couldn’t waste any time cogitating about what you said. It had to send the audio to the cloud and produce an answer quickly enough that it felt like a conversation, and not like those awkward moments when you’re not sure if the person you’re talking to is still breathing. Building a machine-learning system that could understand and respond to conversational queries in noisy conditions required massive amounts of data—lots of examples of the kinds of interactions people would have with their Echos. It wasn’t obvious where Amazon might get such data.
Far-field technology had been done before, says Limp, the VP of devices and services. But “it was on the nose cone of Trident submarines, and it cost a billion dollars.” Amazon was trying to implement it in a device that would sit on a kitchen counter, and it had to be cheap enough for consumers to spring for a weird new gadget. “Nine out of 10 people on my team thought it couldn’t be done,” Prasad says. “We had a technology advisory committee of luminaries outside Amazon—we didn’t tell them what we were working on, but they said, ‘Whatever you do, don’t work on far field recognition!’”
Prasad’s experience gave him confidence that it could be done. But Amazon did not have an industrial-strength system in place for applying machine learning to product development. “We had a few scientists looking at deep learning, but we didn’t have the infrastructure that could make it production-ready,” he says. The good news was that all the pieces were there at Amazon—an unparalleled cloud service, data centers loaded with GPUs to crunch machine-learning algorithms, and engineers who knew how to move data around like fireballs.
His team used those parts to create a platform that was itself a valuable asset, beyond its use in fulfilling the Echo’s mission. “Once we developed Echo as a far-field speech recognition device, we saw the opportunity to do something bigger—we could expand the scope of Alexa to a voice service,” says Alexa senior principal scientist Spyros Matsoukas, who had worked with Prasad at Raytheon BBN. (His work there had included a little-known Darpa project called Hub4, which used broadcast news shows and intercepted phone conversations to advance voice recognition and natural language understanding—great training for the Alexa project.) One immediate way they extended Alexa was to allow third-party developers to create their own voice-technology mini-applications—dubbed “skills”—to run on the Echo itself. But that was only the beginning.
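To make “skills” concrete, here is a minimal sketch of a skill handler written against the public Alexa Skills Kit request/response JSON convention; the welcome text and intent handling are invented examples, not any particular shipped skill.

```python
# Minimal AWS Lambda-style handler for a custom Alexa skill, following the
# public Alexa Skills Kit request/response JSON convention.
def lambda_handler(event, context=None):
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        text = "Welcome. Ask me something."
    elif request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name", "")
        text = f"You invoked {intent}."  # a real skill would branch per intent
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```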
As Alexa broke out beyond the Echo, the company’s AI culture started to coalesce. Teams across the company began to realize that Alexa could be a useful voice service for their pet projects too. “So all that data and technology comes together, even though we are very big on single-threaded ownership,” Prasad says. First, other Amazon products began integrating with Alexa: When you speak into your Alexa device you can access Amazon Music, Prime Video, your personal recommendations from the main shopping website, and other services. Then the technology began hopscotching through other Amazon domains. “Once we had the foundational speech capacity, we were able to bring it to non-Alexa products like Fire TV, voice shopping, the Dash wand for Amazon Fresh, and, ultimately, AWS,” Lindsay says.
The AI islands within Amazon were drawing closer.
Another pivotal piece of the company’s transformation clicked into place once millions of customers (Amazon won’t say exactly how many) began using the Echo and the family of other Alexa-powered devices. Amazon started amassing a wealth of data—quite possibly the biggest collection of interactions of any conversation-driven device ever. That data became a powerful lure for potential hires. Suddenly, Amazon rocketed up the list of places where those coveted machine-learning experts might want to work. “One of the things that made Alexa so attractive to me is that once you have a device in the market, you have the resource of feedback. Not only the customer feedback, but the actual data that is so fundamental to improving everything—especially the underlying platform,” says Ravi Jain, an Alexa VP of machine learning who joined the company last year.
So as more people used Alexa, Amazon got information that not only made that system perform better but supercharged its own machine-learning tools and platforms—and made the company a hotter destination for machine-learning scientists.
At his next review with Jeff Bezos, Sivasubramanian came armed with an epic six-pager. On one level, it was a blueprint for adding machine-learning services to AWS. But he saw it as something broader: a grand vision of how AWS could become the throbbing center of machine-learning activity throughout all of techdom.
In a sense, offering machine learning to the tens of thousands of Amazon cloud customers was inevitable. “When we first put together the original business plan for AWS, the mission was to take technology that was only in reach of a small number of well-funded organizations and make it as broadly distributed as possible,” says Wood, the AWS machine-learning manager. “We’ve done that successfully with computing, storage, analytics, and databases—and we’re taking the exact same approach with machine learning.” What made it easier was that the AWS team could draw on the experience that the rest of the company was accumulating.
AWS’s Amazon Machine Learning, first offered in 2015, allows customers like C-SPAN to set up a private catalog of faces, Wood says. Zillow uses it to estimate house prices. Pinterest employs it for visual search. And several autonomous driving startups are using AWS machine learning to improve products via millions of miles of simulated road testing.
In 2016, AWS released new machine-learning services that more directly drew on the innovations from Alexa—a text-to-speech component called Polly and a natural language processing engine called Lex. These offerings allowed AWS customers, which range from giants like Pinterest and Netflix to tiny startups, to build their own mini Alexas. A third service involving vision, Rekognition, drew on work that had been done in Prime Photos, a relatively obscure group at Amazon that was trying to perform the same deep-learning wizardry found in photo products by Google, Facebook, and Apple.
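A sketch of what calling one of these services looks like: the snippet below asks Polly to speak a sentence through boto3, the AWS SDK for Python. It assumes AWS credentials are already configured; the phrase, voice, and filename are just examples.

```python
import boto3

# Synthesize a short phrase with Polly (assumes AWS credentials/region are
# configured in the environment).
polly = boto3.client("polly")
resp = polly.synthesize_speech(
    Text="Your package has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's stock voices
)
with open("speech.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```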
These machine-learning services are both a powerful revenue generator and key to Amazon’s AI flywheel, as customers as disparate as NASA and the NFL are paying to get their machine learning from Amazon. As companies build their vital machine-learning tools inside AWS, the likelihood that they will move to competing cloud operations becomes ridiculously remote. (Sorry, Google, Microsoft, or IBM.) Consider Infor, a multibillion-dollar company that creates business applications for corporate customers. It recently released an extensive new application called Coleman (named after the NASA mathematician in Hidden Figures) that allows its customers to automate various processes, analyze performance, and interact with data all through a conversational interface. Instead of building its own bot from scratch, it uses AWS’s Lex technology. “Amazon is doing it anyway, so why would we spend time on that? We know our customers and we can make it applicable to them,” says Massimo Capoccia, a senior VP of Infor.
AWS’s dominant role in the ether also gives it a strategic advantage over competitors, notably Google, which had hoped to use its machine-learning leadership to catch up with AWS in cloud computing. Yes, Google may offer customers super-fast, machine-learning-optimized chips on its servers. But companies on AWS can more easily interact with—and sell to—firms that are also on the service. “It’s like Willie Sutton saying he robs banks because that’s where the money is,” says DigitalGlobe CTO Walter Scott about why his firm uses Amazon’s technology. “We use AWS for machine learning because that’s where our customers are.”
Last November at the AWS re:Invent conference, Amazon unveiled a more comprehensive machine-learning prosthetic for its customers: SageMaker, a sophisticated but super easy-to-use platform. One of its creators is none other than Alex Smola, the machine-learning superstar with 90,000 academic citations who spurned Amazon five years ago. When Smola decided to return to industry, he wanted to help create powerful tools that would make machine learning accessible to everyday software developers. So he went to the place where he felt he’d make the biggest impact. “Amazon was just too good to pass up,” he says. “You can write a paper about something, but if you don’t build it, nobody will use your beautiful algorithm.”
When Smola told Sivasubramanian that building tools to spread machine learning to millions of people was more important than publishing one more paper, he got a nice surprise. “You can publish your paper, too!” Sivasubramanian said. Yes, Amazon is now more liberal in permitting its scientists to publish. “It’s helped quite a bit with recruiting top talent as well as providing visibility of what type of research is happening at Amazon,” says Spyros Matsoukas, who helped set guidelines for a more open stance.
It’s too early to know if the bulk of AWS’s million-plus customers will begin using SageMaker to build machine learning into their products. But every customer that does will find itself heavily invested in Amazon as its machine-learning provider. In addition, the platform is sufficiently sophisticated that even AI groups within Amazon, including the Alexa team, say they intend to become SageMaker customers, using the same toolset offered to outsiders. They believe it will save them a lot of work by setting a foundation for their projects, freeing them to concentrate on the fancier algorithmic tasks.
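A rough sketch of that workflow shows why the investment compounds: a training script runs on managed instances, and the resulting model deploys behind an endpoint with a single call. The script name, S3 path, and IAM role below are placeholders, and the parameter names follow a recent version of the SageMaker Python SDK, which has shifted across releases.

```python
# Sketch of the SageMaker workflow: train a custom script on managed
# infrastructure, then deploy the model behind a real-time HTTPS endpoint.
# The entry point, IAM role, and S3 path are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
estimator = SKLearn(
    entry_point="train.py",  # your training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="0.23-1",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/training-data/"})  # placeholder S3 path

# Deploy the trained model as a managed endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```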
Even if only some of AWS’s customers use SageMaker, Amazon will find itself with an abundance of data about how its systems perform (excluding, of course, confidential information that customers keep to themselves). Which will lead to better algorithms. And better platforms. And more customers. The flywheel is working overtime.
AI Everywhere
With its machine-learning overhaul in place, the company’s AI expertise is now distributed across its many teams—much to the satisfaction of Bezos and his consiglieri. While there is no central office of AI at Amazon, there is a unit dedicated to the spread and support of machine learning, as well as some applied research to push new science into the company’s projects. The Core Machine Learning Group is led by Ralf Herbrich, who worked on the Bing team at Microsoft and then served a year at Facebook, before Amazon lured him in 2012. “It’s important that there’s a place that owns this community” within the company, he says. (Naturally, the mission of the team was outlined in an aspirational six-pager approved by Bezos.)
Part of his job is nurturing Amazon’s fast-growing machine-learning culture. Because of the company’s customer-centric approach—solving problems rather than doing blue-sky research—Amazon execs do concede that their recruiting efforts will always tilt towards those interested in building things rather than those chasing scientific breakthroughs. Facebook’s LeCun puts it another way: “You can do quite well by not leading the intellectual vanguard.”
But Amazon is following Facebook and Google’s lead in training its workforce to become adept at AI. It runs internal courses on machine-learning tactics. It hosts a series of talks from its in-house experts. And since 2013, the company has hosted an internal machine-learning conference at its headquarters every April, a kind of Amazon-only version of NIPS, the premier academic machine-learning-palooza. “When I started, the Amazon machine-learning conference was just a couple hundred people; now it’s in the thousands,” Herbrich says. “We don’t have the capacity in the largest meeting room in Seattle, so we hold it there and stream it to six other meeting rooms on the campus.” One Amazon exec remarks that if it gets any bigger, instead of calling it an Amazon machine-learning event, it should just be called Amazon.
Herbrich’s group continues to push machine learning into everything the company attempts. For example, the fulfillment teams wanted to better predict which of the eight possible box sizes to use for a customer order, so they turned to Herbrich’s team for help. “That group doesn’t need its own science team, but it needed these algorithms and needed to be able to use them easily,” he says. In another example, David Limp points to a transformation in how Amazon predicts how many customers might buy a new product. “I’ve been in consumer electronics for 30 years now, and for 25 of those forecasting was done with [human] judgment, a spreadsheet, and some Velcro balls and darts,” he says. “Our error rates are significantly down since we’ve started using machine learning in our forecasts.”
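As a toy illustration of the kind of problem the fulfillment team describes, box-size prediction can be framed as a small classification task. The features and data below are invented for illustration; Amazon’s actual model and features are not public.

```python
# Toy version of the box-size problem: given an order's dimensions and weight,
# predict which of a few box sizes to use. Data and features are invented.
from sklearn.ensemble import RandomForestClassifier

# [total_volume_cm3, longest_item_cm, total_weight_g]
X = [
    [500, 10, 200], [800, 15, 350], [4000, 30, 1200],
    [6000, 35, 2000], [20000, 60, 5000], [25000, 70, 7000],
]
y = ["small", "small", "medium", "medium", "large", "large"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[3500, 28, 900]]))  # likely "medium" on this toy data
```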
Still, sometimes Herbrich’s team will apply cutting-edge science to a problem. Amazon Fresh, the company’s grocery delivery service, has been operating for a decade, but it needed a better way to assess the quality of fruits and vegetables—humans were too slow and inconsistent. His Berlin-based team built sensor-laden hardware and new algorithms that compensated for the inability of the system to touch and smell the food. “After three years, we have a prototype phase, where we can judge the quality more reliably” than before, he says.
Of course, such advances can then percolate throughout the Amazon ecosystem. Take Amazon Go, the deep-learning-powered cashier-less grocery store in its headquarters building that recently opened to the public. “As a customer of AWS, we benefit from its scale,” says Dilip Kumar, VP of Technology for Amazon Go. “But AWS is also a beneficiary.” He cites as an example Amazon Go’s unique system of streaming data from hundreds of cameras to track the shopping activities of customers. The innovations his team concocted helped influence an AWS service called Kinesis, which allows customers to stream video from multiple devices to the Amazon cloud, where they can process it, analyze it, and use it to further advance their machine-learning efforts.
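As a rough sketch of the building block involved, the snippet below uses Kinesis Video Streams (the video-oriented member of the Kinesis family) through boto3 to create a stream and fetch the endpoint a camera-side producer would push media to. The stream name is a placeholder and AWS credentials are assumed to be configured.

```python
# Set up a Kinesis Video stream for a camera feed. Actually pushing frames is
# handled by a producer (e.g., the Kinesis Video Streams producer SDK) against
# the endpoint returned below.
import boto3

kvs = boto3.client("kinesisvideo")
kvs.create_stream(StreamName="store-camera-01", DataRetentionInHours=24)

endpoint = kvs.get_data_endpoint(StreamName="store-camera-01", APIName="PUT_MEDIA")
print(endpoint["DataEndpoint"])  # the URL a camera/producer would stream to
```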
Even when an Amazon service doesn’t yet use the company’s machine-learning platform, it can be an active participant in the process. Amazon’s Prime Air drone-delivery service, still in the prototype phase, has to build its AI separately because its autonomous drones can’t count on cloud connectivity. But it still benefits hugely from the flywheel, both in drawing on knowledge from the rest of the company and in figuring out what tools to use. “We think about this as a menu—everybody is sharing what dishes they have,” says Gur Kimchi, VP of Prime Air. He anticipates that his team will eventually have tasty menu offerings of its own. “The lessons we’re learning and problems we’re solving in Prime Air are definitely of interest to other parts of Amazon,” he says.
In fact, it already seems to be happening. “If somebody’s looking at an image in one part of the company, like Prime Air or Amazon Go, and they learn something and create an algorithm, they talk about it with other people in the company,” says Beth Marcus, a principal scientist at Amazon Robotics. “And so someone in my team could use it to, say, figure out what’s in an image of a product moving through the fulfillment center.”
Is it possible for a company with a product-centered approach to eclipse the efforts of competitors staffed with the superstars of deep learning? Amazon’s making a case for it. “Despite the fact they’re playing catchup, their product releases have been incredibly impressive,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. “They’re a world-class company and they’ve created world-class AI products.”
The flywheel keeps spinning, and we haven’t seen the impact of a lot of six-pager proposals still in the pipeline. More data. More customers. Better platforms. More talent.
Alexa, how is Amazon doing in AI?
The answer? Jeff Bezos’s braying laugh.
The DroNet algorithm allows drones to fly fully autonomously through city streets and indoor environments. For each input image, it produces two outputs: a steering angle that keeps the drone navigating while avoiding obstacles, and a collision probability that lets the drone recognize dangerous situations and react to them promptly.
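A minimal sketch of that two-output idea: one shared convolutional trunk splits into a regression head for steering and a classification head for collision probability. The published DroNet model is a small residual network; the layer sizes below are simplified stand-ins, not the actual architecture.

```python
# Two-headed network in the spirit of DroNet: a single image in, a steering
# angle and a collision probability out. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class TwoHeadDroneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Linear(32, 1)   # regression: steering angle
        self.collision_head = nn.Linear(32, 1)  # classification: collision probability

    def forward(self, x):
        features = self.trunk(x)
        steering = self.steering_head(features)
        collision = torch.sigmoid(self.collision_head(features))
        return steering, collision

model = TwoHeadDroneNet()
frame = torch.randn(1, 1, 200, 200)  # one grayscale camera frame
steering, collision = model(frame)
print(steering.item(), collision.item())
```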