You Can Now Mute Websites Forever in Chrome


http://ift.tt/2Ed5ioG

Rest, my child. CNN’s autoplay videos can’t hurt you any more. In the latest public version of Chrome, you can just right-click any tab and select “mute site.”

You can do this even if the site isn’t currently making any noise. So go ahead and pre-emptively silence all the loud sites you want. Lifehacker contributor Tim Donnelly demonstrates:

Chrome let users mute tabs in the past, but it wouldn’t remember which sites a user always wanted to mute. As we reported in December, Chrome introduced autoplay muting in the beta release of Chrome 64. But now the change will roll out to all of us normies who use the normal public releases of Chrome. (If you still want to mute a tab, you can turn that feature back on; scroll halfway down this article on MakeUseOf.)

CNN knows, deep in its heart of hearts, that you just wanted to read a news article, not to watch a video recapping the article. But CNN, like any site, makes far more money from a video ad than from the static ads on a page of text. So they just play the video immediately, without your consent, hoping the ad will play before you finally stop it.

It’s silly what sites will do to force a video ad on you! They might even embed a video in the middle of an article about how to mute videos! What a world!

Tech

via Lifehacker http://lifehacker.com

February 8, 2018 at 02:36PM

Qualcomm will power 5G devices from LG, Sony and more in 2019


http://ift.tt/2nM77yO

Since the first 5G standard was approved two months ago, the industry has been racing to deliver next-generation mobile data to the world. Qualcomm made two announcements today that show us real-world 5G is almost here. First, it revealed a slew of consumer electronics companies that have committed to making 5G-ready mobile devices starting in 2019, using Qualcomm’s X50 5G modem. This list includes LG, Sony Mobile, HTC, ASUS, Xiaomi, ZTE, Netgear and more. Don’t forget, Samsung also announced a partnership with Qualcomm last month to work on 5G technology through the next few years.

But what use are 5G-ready devices if the carriers haven’t rolled out support? Not much. The good news, that Qualcomm also announced today, is that 18 mobile operators around the world will be testing 5G networks on the same X50 modem. US participants include AT&T, Verizon Wireless and Sprint, while major players elsewhere like Vodafone, Telstra, Deutsche Telekom, NTT Docomo and China Mobile are also on board.

There are some notable absentees, of course. Huawei, which has been making its own chipsets, is missing, as is Apple. That’s no surprise, especially given the iPhone maker is currently embroiled in a legal battle with Qualcomm. On the carrier side, T-Mobile isn’t mentioned in today’s announcement, although the Uncarrier has worked with the chip maker in the past on gigabit LTE.

Still, these commitments indicate that, at the very least, the brands revealed today will be racing to deliver 5G-ready devices in 2019. With carriers like AT&T already deploying mobile 5G in test cities, it looks like we don’t have to wait too much longer to get connected to the next generation. But like what happened with 4G LTE, the 5G onslaught may face challenges, so don’t expect the rollout to be widespread and speedy at first.

Source: Qualcomm (OEMs), Qualcomm (carriers)

Tech

via Engadget http://www.engadget.com

February 8, 2018 at 02:48PM

Stupid viruses and malware…

It’s quite annoying… but at least I’m back.

This malware and these bugs are beyond annoying… Guess I will have to stay on top of things more…

 

For those who think I generate all the content, please do more research.

I’m merely sharing the posts that others have generated.

Nintendo Labo Will Let You Program Your Own Custom Robots

When Nintendo Labo launches this April, it will come with a feature called Toy Con Garage that lets you use rudimentary programming to build and customize your own cardboard robots, Nintendo announced today. Some of the custom toys Nintendo showed off included an electric guitar and a basic game of electronic tennis.

At an event in New York this afternoon, Nintendo representatives demonstrated this Toy Con Garage, which uses simple “building blocks” to let you program your devices. They’re essentially “if-then” statements. When you open up the program, you can select from a number of blocks based on input options for your Switch’s controllers, then connect them to other blocks based on output options. For example, you can connect the left Switch controller’s up button (input) to the right Switch controller’s vibration feature (output), so whenever you press up on the left Joy Con, the right Joy Con will buzz.
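Toy Con Garage itself is a visual editor that runs on the Switch, but the underlying model is simple event wiring, and a rough sketch makes it concrete. The Python below is purely illustrative: the Input and Output classes, their methods, and the node names are invented here and are not anything Nintendo has shown.

    # A toy model of Toy Con Garage's "if-then" blocks. The class and node
    # names are invented for illustration; the real tool is a visual editor
    # on the Switch, not Python.

    class Input:
        def __init__(self, name):
            self.name = name
            self.targets = []

        def connect(self, output):
            # Wire this input block to an output block ("if this, then that").
            self.targets.append(output)

        def trigger(self):
            # Simulate the input firing, e.g. a button press.
            for output in self.targets:
                output.activate()

    class Output:
        def __init__(self, name):
            self.name = name

        def activate(self):
            print(self.name + " activated")

    # "If up is pressed on the left Joy-Con, then the right Joy-Con vibrates."
    up_button = Input("left Joy-Con up button")
    vibration = Output("right Joy-Con vibration")
    up_button.connect(vibration)

    up_button.trigger()  # -> right Joy-Con vibration activated

Chaining more nodes together, or fanning one input out to several outputs, is how the more elaborate creations described below come together.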

This is how Nintendo Labo users will be able to expand beyond the six types of cardboard creations included in the Variety Set or the one included in the Robot Set. Instead of making a piano, you can make a guitar. Instead of making a toy car, you can build a little cardboard man who falls flat on his face. You can mix and match different programs’ functionality—using the fishing rod to play music, for example—and you can even add extra Joy Cons to build even more elaborate programs.

Nintendo would not allow attendees to take pictures or videos of Toy Con Garage, although we saw a video at the event that will likely be put on Nintendo’s YouTube channel later. The company did show off a few seconds of this feature during the original Nintendo Labo reveal a few weeks ago:

The building blocks look like that.

We’ll have more on Nintendo Labo, including hands-on impressions and videos, in the very near future. The wild new cardboard toys come out on April 20.

from Kotaku http://ift.tt/2EtzPeV
via IFTTT

It’s Time For a Serious Talk About the Science of Tech “Addiction”

To hear Andrew Przybylski tell it, the American 2016 presidential election is what really inflamed the public’s anxiety over the seductive power of screens. (A suspicion that big companies with opaque inner workings are influencing your thoughts and actions will do that.) “Psychologists and sociologists have obviously been studying and debating about screens and their effects for years,” says Przybylski, who is himself a psychologist at the Oxford Internet Institute with more than a decade’s experience studying the impact of technology. But society’s present conversation—”chatter,” he calls it—can be traced back to three events, beginning with the political race between Hillary Clinton and Donald Trump.

Then there were the books. Well-publicized. Scary-sounding. Several, really, but two in particular. The first, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, by NYU psychologist Adam Alter, was released March 2, 2017. The second, iGen: Why Today’s Super-Connected Kids are Growing Up Less Rebellious, More Tolerant, Less Happy – and Completely Unprepared for Adulthood – and What That Means for the Rest of Us, by San Diego State University psychologist Jean Twenge, hit stores five months later.

Last came the turncoats. Former employees and executives from companies like Facebook worried openly to the media about the monsters they helped create. Tristan Harris, a former product manager at Google and founder of the nonprofit “Time Well Spent” spoke with this publication’s editor in chief about how Apple, Google, Facebook, Snapchat, Twitter, Instagram—you know, everyone—design products to steal our time and attention.

Bring these factors together, and Przybylski says you have all the ingredients necessary for alarmism and moral panic. What you’re missing, he says, is the only thing that matters: direct evidence.

Which even Alter, the author of that first bellwether book, concedes. “There’s far too little evidence for many of the assertions people make,” he says. “I’ve become a lot more careful with what I say, because I felt the evidence was stronger when I first started speaking about it.”

“People are letting themselves get played,” says Przybylski. “It’s a bandwagon.” So I ask him: When WIRED says that technology is hijacking your brain, and the New York Times says it’s time for Apple to design a less addictive iPhone, are we part of the problem? Are we all getting duped?

“Yeah, you are,” he says. “You absolutely are.”

Of course, we’ve been here before. Anxieties over technology’s impact on society are as old as society itself; video games, television, radio, the telegraph, even the written word—they were all, at one time, scapegoats or harbingers of humanity’s cognitive, creative, emotional, and cultural dissolution. But the apprehension over smartphones, apps, and seductive algorithms is different. So different, in fact, that our treatment of past technologies fails to be instructive.

A better analogy is our modern love-hate relationship with food. When grappling with the promises and pitfalls of our digital devices, it helps to understand the similarities between our technological diets and our literal ones.

Today’s technology is always with you; a necessary condition, increasingly, of existence itself. These are some of the considerations that led MIT sociologist Sherry Turkle to suggest avoiding the metaphor of addiction, when discussing technology. “To combat addiction, you have to discard the addicting substance,” Turkle wrote in her 2011 book Alone Together: Why We Expect More from Technology and Less from Each Other. “But we are not going to ‘get rid’ of the Internet. We will not go ‘cold turkey’ or forbid cell phones to our children. We are not going to stop the music or go back to the television as the family hearth.”

Food addicts—who speak of having to take the “tiger of addiction” out of the cage for a walk three times a day—might take issue with Turkle’s characterization of dependence. But her observation, and the food addict’s plight, speak volumes about our complicated relationships with our devices and the current state of research.

People from all backgrounds use technology—and no two people use it exactly the same way. “What that means in practice is that it’s really hard to do purely observational research into the effects of something like screen time, or social media use,” says MIT social scientist Dean Eckles, who studies how interactive technologies impact society’s thoughts and behaviors. You can’t just divide participants into, say, those with phones and those without. Instead, researchers have to compare behaviors between participants while accounting for variables like income, race, and parental education.

Say, for example, you’re trying to understand the impact of social media on adolescents, as Jean Twenge, author of the iGen book, has. When Twenge and her colleagues analyzed data from two nationally representative surveys of hundreds of thousands of kids, they calculated that social media exposure could explain 0.36 percent of the covariance for depressive symptoms in girls.

But those results didn’t hold for the boys in the dataset. What’s more, that 0.36 percent means that 99.64 percent of the group’s depressive symptoms had nothing to do with social media use. Przybylski puts it another way: “I have the data set they used open in front of me, and I submit to you that, based on that same data set, eating potatoes has the exact same negative effect on depression. That the negative impact of listening to music is 31 times larger than the effect of social media.”
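For a sense of scale, here is the back-of-the-envelope arithmetic in Python. Treating the 0.36 percent as the variance explained by a single predictor is our simplifying assumption, not a detail from the study, but it shows how small the underlying correlation is.

    # Illustrative arithmetic only; this is not a re-analysis of the survey data.
    explained_variance = 0.0036            # the reported 0.36 percent
    unexplained = 1 - explained_variance   # everything else

    # For a single predictor, r is the square root of the explained variance.
    correlation = explained_variance ** 0.5

    print("correlation r is about %.2f" % correlation)          # ~0.06
    print("variance unexplained: %.2f%%" % (unexplained * 100))  # 99.64%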

In datasets as large as these, it’s easy for weak correlational signals to emerge from the noise. And a correlation tells us nothing about whether new-media screen time actually causes sadness or depression. Which are the same problems scientists confront in nutritional research, much of which is based on similarly large, observational work. If a population develops diabetes but surveys show they’re eating sugar, drinking alcohol, sipping out of BPA-laden straws, and consuming calories to excess, which dietary variable is to blame? It could just as easily be none or all of the above.

Decades ago, those kinds of correlational nutrition findings led people to demonize fat, pinning it as the root cause of obesity and chronic illness in the US. Tens of millions of Americans abolished it from their diets. It’s taken a generation for the research to boomerang back and rectify the whole baby-bathwater mistake. We risk similar consequences, as this new era of digital nutrition research gets underway.

Fortunately, lessons learned from the rehabilitation of nutrition research can point a way forward. In 2012, science journalist Gary Taubes and physician-researcher Peter Attia launched a multimillion-dollar undertaking to reinvent the field. They wanted to lay a new epistemological foundation for nutrition research, investing the time and money to conduct trials that could rigorously establish the root causes of obesity and its related diseases. They called their project the Nutrition Science Initiative.

Today, research on the link between technology and wellbeing, attention, and addiction finds itself in need of similar initiatives. The field needs randomized controlled trials to establish causal links between the architecture of our interfaces and their impacts, and funding for long-term, rigorously performed research. “What causes what? Is it that screen time leads to unhappiness or unhappiness leads to screen time?” says Twenge. “So that’s where longitudinal studies come in.” Strategies from the nascent Open Science Framework, like the pre-registration of studies and data sharing, could help, too.

But more than any of that, researchers will need buy-in from the companies that control that data. Ours is a time of intense informational asymmetry; the people best equipped to study what’s happening—the people who very likely are studying what’s happening—are behind closed doors. Achieving balance will require openness and objectivity from those who hold the data; clear-headed analysis from those who study it; and measured consideration by the rest of us.

“Don’t get me wrong, I’m concerned about the effects of technology. That’s why I spend so much of my time trying to do the science well,” Przybylski says. He says he’s working to develop a research proposal strategy by which scientists could apply to conduct specific, carefully designed studies with proprietary data from major platforms. Proposals would be assessed by independent reviewers outside the control of Facebook and the other platforms. If a proposed investigation shows the potential to answer an important question in a discipline, or about a platform, the researchers outside the company would be paired with the ones inside.

“If it’s team based, collaborative, and transparent, it’s got half a chance in hell of working,” Przybylski says.

And if we can avoid the same mistakes that led us to banish fat from our food, we stand a decent chance of keeping our technological diets balanced and healthy.

Your Technology and You

  • Wired’s editor-in-chief Nick Thompson spoke with Tristan Harris, the prophet behind the “Time Well Spent” movement that argues our minds are being hijacked by the technology we use.

  • One writer takes us through his extreme digital detox, a retreat that took him offline for a whole month.

  • Technology is demonized for making us distractible, but the right tech can help us form new, better digital habits—like these ones.

from Wired Top Stories http://ift.tt/2DTNgHU
via IFTTT

Blockchain: The Complete Guide

Depending on who you ask, blockchains are either the most important technological innovation since the internet or a solution looking for a problem.

The original blockchain is the decentralized ledger behind the digital currency bitcoin. The ledger consists of linked batches of transactions known as blocks (hence the term blockchain), and an identical copy is stored on each of the roughly 200,000 computers that make up the bitcoin network. Each change to the ledger is cryptographically signed to prove that the person transferring virtual coins is the actual owner of those coins. But no one can spend their coins twice, because once a transaction is recorded in the ledger, every node in the network will know about it.

Who paved the way for blockchains?

DigiCash (1989)

DigiCash was founded by David Chaum to create a digital-currency system that enabled users to make untraceable, anonymous transactions. It was perhaps ahead of its time; it went bankrupt in 1998, just as ecommerce was finally taking off.

E-Gold (1996)

E-gold was a digital currency backed by real gold. The company was plagued by legal troubles, and its founder Douglas Jackson eventually pled guilty to operating an illegal money-transfer service and conspiracy to commit money laundering.

B-Money and Bit-Gold (1998)

Cryptographers Wei Dai (B-money) and Nick Szabo (Bit-gold) each proposed separate but similar decentralized currency systems with a limited supply of digital money issued to people who devoted computing resources.

Ripple Pay (2004)

Now a cryptocurrency, Ripple started out as a system for exchanging digital IOUs between trusted parties.

Reusable Proofs of Work (RPOW) (2004)

RPOW was a prototype of a system for issuing tokens that could be traded with others in exchange for computationally intensive work. It was inspired in part by Bit-gold and created by bitcoin’s second user, Hal Finney.

The idea is to both keep track of how each unit of the virtual currency is spent and prevent unauthorized changes to the ledger. The upshot: No bitcoin user has to trust anyone else, because no one can cheat the system.
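That tamper-evidence comes from the way blocks are linked: each block records a cryptographic hash of the block before it. The toy Python below is not bitcoin’s actual code, and it leaves out signatures, mining, and the peer-to-peer network entirely; it only shows why changing an earlier block is immediately detectable.

    # A minimal hash-chained ledger. Toy code for illustration only.
    import hashlib
    import json

    def make_block(transactions, prev_hash):
        # The hash covers the block's contents plus the previous block's hash.
        block = {"transactions": transactions, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block(["alice pays bob 1 coin"], prev_hash="0" * 64)
    second = make_block(["bob pays carol 1 coin"], prev_hash=genesis["hash"])

    # Tamper with the first block and recompute its hash: it no longer matches
    # the link stored in the second block, so the change is detectable.
    tampered = {"transactions": ["alice pays mallory 1 coin"],
                "prev_hash": genesis["prev_hash"]}
    tampered_hash = hashlib.sha256(
        json.dumps(tampered, sort_keys=True).encode()).hexdigest()
    print(tampered_hash == second["prev_hash"])  # False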

Other digital currencies have imitated this basic idea, often trying to solve perceived problems with bitcoin by building new cryptocurrencies on new blockchains. But advocates have seized on the idea of a decentralized, cryptographically secure database for uses beyond currency. Its biggest boosters believe blockchains can not only replace central banks but usher in a new era of online services outside the control of internet giants like Facebook and Google. These new-age apps would be impossible to censor, advocates say, and would be more answerable to users.

Several companies are already taking advantage of the Ethereum platform, initially built for a virtual currency. The startup Storj offers a file-storage service, banking on the idea that distributing files across a decentralized network is safer than putting all your files in one cabinet.

Meanwhile, despite the fact that bitcoin was originally best known for enabling illicit drug sales over the internet, blockchains are finding acceptance in some of the world’s largest companies. Some big financial services companies, including JP Morgan and the Depository Trust & Clearing Corporation, are experimenting with blockchains and blockchain-like technologies to improve the efficiency of trading stocks and other assets. Traders buy and sell stocks rapidly, but the behind-the-scenes process of transferring ownership of those assets can take days. Some technologists believe blockchains could help with that.

There are also potential applications for blockchains in the seemingly boring world of corporate compliance. After all, storing records in an immutable ledger is a pretty good way to assure auditors that those records haven’t been tampered with.

It’s too early to say which experiments will work out or whether the results of successful experiments will resemble the bitcoin blockchain. But the idea of creating tamper-proof databases has captured the attention of everyone from anarchist techies to staid bankers.

The First Blockchain

The original bitcoin software was released to the public in January 2009. It was open source software, meaning anyone could examine the code and reuse it. And many have. At first, blockchain enthusiasts sought to simply improve on bitcoin. Litecoin, another virtual currency based on the bitcoin software, seeks to offer faster transactions.

One of the first projects to repurpose the bitcoin code to use it for more than currency was Namecoin, a system for registering “.bit” domain names. The traditional domain-name management system—the one that helps your computer find our website when you type wired.com—depends on a central database, essentially an address book for the internet. Internet-freedom activists have long worried that this traditional approach makes censorship too easy, because governments can seize a domain name by forcing the company responsible for registering it to change the central database. The US government has done this several times to shut sites accused of violating gambling or intellectual-property laws.

Namecoin tries to solve this problem by storing .bit domain registrations in a blockchain, which theoretically makes it impossible for anyone without the encryption key to change the registration information. To seize a .bit domain name, a government would have to find the person responsible for the site and force them to hand over the key.

What’s an “ICO”?

Ethereum and other blockchain-based projects have raised funds through a controversial practice called an “initial coin offering,” or ICO: The creators of new digital currencies sell a certain amount of the currency, usually before they’ve finished the software and technology that underpins it. The idea is that investors can get in early while giving developers the funds to finish the tech. The catch is that these offerings have traditionally operated outside the regulatory framework meant to protect investors, although that’s starting to change as more governments examine the practice.

Bitcoin’s software wasn’t designed to handle other types of applications. In 2013, a startup called Ethereum published a paper outlining an idea that promised to make it easier for coders to create their own blockchain-based software without having to start from scratch, without relying on the original bitcoin software. In 2015 the company released its platform for building “smart contracts,” software applications that can enforce an agreement without human intervention. For example, you could create a smart contract to bet on tomorrow’s weather. You and your gambling partner would upload the contract to the Ethereum network and then send a little digital currency, which the software would essentially hold in escrow. The next day, the software would check the weather and then send the winner their earnings. At least two major “prediction markets” have been built on the platform, enabling people to bet on more interesting outcomes, such as which political party will win an election.
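Real Ethereum contracts are written in languages such as Solidity and executed by the network itself, but the escrow-and-settle logic of that weather bet can be sketched in ordinary Python to show its shape. Everything below, from the class name to the way the weather result is passed in, is invented for illustration and is not Ethereum code.

    # Conceptual sketch of the weather bet: both parties pay in, the "contract"
    # holds the pot, and a single settlement pays the winner. Toy code only.

    class WeatherBet:
        def __init__(self, bets_on_sun, bets_on_rain, stake):
            # Both parties deposit their stake up front; the contract holds it.
            self.pot = 2 * stake
            self.bets_on_sun = bets_on_sun
            self.bets_on_rain = bets_on_rain
            self.settled = False

        def settle(self, observed_weather):
            # Runs once; pays the whole pot to whoever called it correctly.
            if self.settled:
                raise RuntimeError("already settled")
            self.settled = True
            winner = self.bets_on_sun if observed_weather == "sunny" else self.bets_on_rain
            return winner, self.pot

    bet = WeatherBet("alice", "bob", stake=10)
    print(bet.settle("rainy"))  # -> ('bob', 20)

The hard part, as the next paragraph shows, is everything outside this tidy logic: where the weather data comes from, and whether the code has bugs.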

So long as the software is written correctly, there’s no need to trust anyone in these transactions. But that turns out to be a big catch. In 2016 a hacker made off with about $50 million worth of Ethereum’s custom currency intended for a democratized investment scheme where investors would pool their money and vote on how to invest it. A coding error allowed a still unknown person to make off with the virtual cash. Lesson: It’s hard to remove humans from transactions, with or without a blockchain.

Even as cryptography geeks plotted to use blockchains to topple, or at least bypass, big banks, the financial sector began its own experiments with blockchains. In 2015, some of the largest financial institutions in the world, including JP Morgan, the Bank of England, and the Depository Trust & Clearing Corporation (DTCC), announced that they would collaborate on open source blockchain software under the name Hyperledger. Several pieces of software have been released under the Hyperledger umbrella, including Sawtooth, created by Intel for building custom blockchains.

The industry is already experimenting with using blockchains to make securities trades more efficient. Nasdaq OMX, the company behind the Nasdaq stock exchange, began allowing private companies to use blockchains to manage shares in 2015, starting with a company called Chain. Similarly, the Australian Securities Exchange announced a deal to use blockchain technology from a Goldman Sachs-backed startup called Digital Asset Holdings to power the post-trade processes of Australia’s equity market.

The Future of Blockchain

Despite the blockchain hype—and many experiments—there’s still no “killer app” for the technology beyond currency speculation. And while auditors might like the idea of immutable records, as a society we don’t always want records to be permanent.

Blockchain proponents admit that it could take a while for the technology to catch on. After all, the internet’s foundational technologies were created in the 1960s, but it took decades for the internet to become ubiquitous.

That said, the idea could eventually show up in lots of places. For example, your digital identity could be tied to a token on a blockchain. You could then use that token to log in to apps, open bank accounts, apply for jobs, or prove that your emails or social-media messages are really from you. Future social networks might be built on connected smart contracts that show your posts only to certain people or enable people who create popular content to be paid in cryptocurrencies. Perhaps the most radical idea is using blockchains to handle voting. The team behind the open source project Sovereign built a platform that organizations, companies, and even governments can already use to gather votes on a blockchain.

Advocates believe blockchains can help automate many tasks now handled by lawyers or other professionals. For example, your will might be stored in a blockchain. Or perhaps your will could be a smart contract that will automatically dole out your money to your heirs. Or maybe blockchains will replace notaries.

It’s also entirely possible that blockchains will evolve into something completely different. Many of the financial industry’s experiments involve “private” blockchains that run on servers within a single company and selected partners. In contrast, anyone can run bitcoin or Ethereum software on their computer and view all of the transactions recorded on the networks’ respective blockchains. But big companies prefer to keep their data in the hands of a few employees, partners, and perhaps regulators.

Bitcoin proved that it’s possible to build an online service that operates outside the control of any one company or organization. The task for blockchain advocates now is proving that that’s actually a good thing.

Learn More

This guide was last updated on January 31, 2018.

from Wired Top Stories http://ift.tt/2BJC2zK
via IFTTT

Artificial Intelligence: The Complete Guide

Artificial intelligence is overhyped—there, we said it. It’s also incredibly important.

Superintelligent algorithms aren’t about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It’s why you can talk to your friends as an animated poop on the iPhone X using Apple’s Animoji, or ask your smart speaker to order more paper towels.

Tech companies’ heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
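Learning from examples can be shown with even the simplest learner. The toy nearest-neighbour classifier below uses made-up measurements and labels; no one writes a rule for telling the classes apart, the program just compares a new point to the labelled examples it was given.

    # A nearest-neighbour classifier: the simplest "learn from examples" recipe.
    # The data below is entirely made up, purely for illustration.

    def nearest_neighbor(examples, query):
        # Return the label of the labelled example closest to the query point.
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, best_label = min(examples, key=lambda ex: distance(ex[0], query))
        return best_label

    # (features, label) pairs: (height_cm, weight_kg) of toy animals.
    training_examples = [
        ((20, 4), "cat"),
        ((25, 6), "cat"),
        ((60, 30), "dog"),
        ((70, 35), "dog"),
    ]

    print(nearest_neighbor(training_examples, (22, 5)))   # -> cat
    print(nearest_neighbor(training_examples, (65, 33)))  # -> dog

Modern systems use far richer models than this lookup, but the principle of generalizing from labelled examples is the same.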

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person’s retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Moments that Shaped AI

1956

The Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.

1965

Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.

1975

Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.

1987

A Mercedes van fitted with two cameras and a bunch of computers drives itself 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.

1997

IBM’s computer Deep Blue defeats chess world champion Garry Kasparov.

2004

The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.

2012

Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.

2016

AlphaGo, created by Google unit DeepMind, defeats a world champion player of the board game Go.

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
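At its smallest scale, that adjustment process looks like the single artificial neuron below, a plain-Python toy that learns the logical AND of two inputs by nudging its connection weights whenever it answers wrong. Real deep networks stack many layers of such units and train them with backpropagation and gradient descent rather than this simple perceptron rule, but the learn-by-adjusting-connections idea is the same.

    # One artificial neuron learning the logical AND function. Toy example:
    # real deep-learning systems use many layers and gradient-based training.

    def predict(weights, bias, inputs):
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if activation > 0 else 0

    training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1
    for _ in range(20):  # repeated passes over the training examples
        for inputs, target in training_data:
            error = target - predict(weights, bias, inputs)
            # Nudge each connection in the direction that reduces the error.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error

    print([predict(weights, bias, x) for x, _ in training_data])  # -> [0, 0, 0, 1]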

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.

The Future of Artificial Intelligence

Even if progress on making artificial intelligence smarter stops tomorrow, don’t expect to stop hearing about how it’s changing the world.

Big tech companies such as Google, Microsoft, and Amazon have amassed strong rosters of AI talent and impressive arrays of computers to bolster their core businesses of targeting ads or anticipating your next purchase.

They’ve also begun trying to make money by inviting others to run AI projects on their networks, which will help propel advances in areas such as health care or national security. Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects will also accelerate the spread of AI into other industries.

Your AI Decoder Ring

Artificial intelligence

The development of computers capable of tasks that typically require human intelligence.

Machine learning

Using example data or experience to refine how computers make predictions or perform a task.

Deep learning

A machine learning technique in which data is filtered through self-adjusting networks of math loosely inspired by neurons in the brain.

Supervised learning

Showing software labeled example data, such as photographs, to teach a computer what to do.

Unsupervised learning

Learning without annotated examples, just from experience of data or the world—trivial for humans but not generally practical for machines. Yet.

Reinforcement learning

Software that experiments with different actions to figure out how to maximize a virtual reward, such as scoring points in a game. (A toy sketch of this loop follows the glossary.)

Artificial general intelligence

As yet nonexistent software that displays a humanlike ability to adapt to different environments and tasks, and transfer knowledge between them.
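As promised in the reinforcement-learning entry above, here is a toy version of that trial-and-error loop: an agent pulling two slot-machine levers with hidden, made-up payout rates, exploring at random a tenth of the time and otherwise exploiting whichever lever has paid best so far.

    # A two-armed bandit: try actions, observe rewards, favor what pays off.
    # The payout probabilities are made up and hidden from the agent.
    import random

    payout_probability = {"left_lever": 0.3, "right_lever": 0.7}
    value_estimate = {"left_lever": 0.0, "right_lever": 0.0}
    pulls = {"left_lever": 0, "right_lever": 0}

    for step in range(1000):
        # Explore 10% of the time; otherwise exploit the best-looking action.
        if random.random() < 0.1:
            action = random.choice(list(value_estimate))
        else:
            action = max(value_estimate, key=value_estimate.get)

        reward = 1 if random.random() < payout_probability[action] else 0

        # Update the running average reward for the chosen action.
        pulls[action] += 1
        value_estimate[action] += (reward - value_estimate[action]) / pulls[action]

    print(value_estimate)  # the right lever's estimate ends up near 0.7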

Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon in particular are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.

The commercial possibilities make this a great time to be an AI researcher. Labs investigating how to make smarter machines are more numerous and better-funded than ever. And there’s plenty to work on: Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can’t do, such as understanding the nuances of language, common-sense reasoning, and learning a new skill from just one or two examples. AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.

As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Facebook have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. For us to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.

Learn More

  • What The AI Behind AlphaGo Can Teach Us About Being Human
    Drama, emotion, server racks, and existential questions. Find them all in our on-the-scene account from the triumph of Google’s Go-playing bot over top player Lee Sedol in South Korea.

  • John McCarthy, Father Of AI And Lisp, Dies At 84
    WIRED’s 2011 obituary of the man who coined the term artificial intelligence gives a sense of the origins of the field. McCarthy’s lasting, and unfulfilled, dream of making machines as smart as humans still entrances many people working on AI today.

  • Are We Ready for Intimacy With Androids?
People have always put themselves into their technological creations—but what happens when those artificial creations look and act just like people? Hiroshi Ishiguro builds androids on a quest to reverse engineer how humans form relationships. His progress may provide a preview of issues we’ll encounter as AI and robotics evolve.

  • When it Comes to Gorillas, Google Photos Remains Blind
    The limitations of AI systems can be as important as their capabilities. Despite improvements in image recognition over recent years, WIRED found Google still doesn’t trust its algorithms not to mix up apes and black people.

  • Why Artificial Intelligence is Not Like Your Brain—Yet
    You might hear companies, marketers, or drinking companions say AI algorithms work like the brain. They’re wrong, and here’s why.

  • Artificial Intelligence Seeks an Ethical Conscience
    As companies and governments rush to embrace ever-more powerful AI, researchers have begun to ponder ethical and moral questions about the systems they build, and how they’re put to use.

  • A ‘Neurographer’ Puts The Art In Artificial Intelligence
    Some artists are repurposing the AI techniques tech companies use to process images into a new creative tool. Mario Klingemann’s haunting images, for example, have been compared to the paintings of Francis Bacon.

from Wired Top Stories http://ift.tt/2BKHVwC
via IFTTT