How the “Gangnam Style” Video Became a Global Pandemic

The spread of disease has always followed a clear pattern: an outbreak begins at a specific place and time and then travels in a wavelike fashion away from the source.

The speed of this wave is governed by the methods of travel. Historical records show that the Black Death traveled across Europe at about two kilometers a day. The spread of this particularly virulent form of bubonic plague, which killed between 75 million and 200 million people in the 14th century, was limited by the transport that was then available.

But something strange happened in the 20th century, when that wavelike form of spreading seemingly vanished: air travel suddenly allowed diseases to jump from one continent to another at breakneck speed. However, network theorists found they could recover the wavelike pattern by normalizing the spread to account for the speed of travel.

Social phenomena, such as songs, tweets, and videos, are thought to spread in a similar way. And because this happens from person to person through a social network, the spread should follow a wavelike pattern.

But observing this pattern is hard because the geographical spread of information is distorted by the social media networks along which it moves. And that raises the question of whether it is really wavelike or fundamentally different.

To solve this conundrum, network scientists would dearly love to have an emblematic example of the way a specific piece of information has spread across the globe in a measurable way.

Today, Zsofia Kallus and pals at Eötvös University in Budapest, Hungary, say they have found just such an example in the way Psy’s “Gangnam Style” video spread across the globe in 2012, eventually becoming the first video to receive over a billion views on YouTube. And the team say it is possible to recover the characteristic wavelike signature of information spreading, provided they properly take account of the social networks involved.

The story behind this video pandemic is extraordinary. The music video was produced in a style known as K-pop by a South Korean musician called Psy, who was relatively unknown outside his home country. It was released on July 15, 2012, and immediately became popular in South Korea.

But since Psy was unknown outside the country, the video’s later success was hard to predict. By December 21, 2012, however, the video had become the most viewed in history, reaching a billion views on YouTube across the globe. “In 2012, the record breaking ‘Gangnam Style’ marked the appearance of a new type of online meme, reaching unprecedented level of fame despite its originally small local audience,” say Kallus and co.

Just how this happened is the focus of Kallus and co’s work. To find out, they tracked the spread of the video by searching the historical Twitter stream for geolocated tweets that mention “Gangnam Style.” “Location information allows us to record the approximate arrival time of a certain news to a specific geo-political region,” say Kallus and co.
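In spirit, the measurement is simple to reproduce. Here is a minimal Python sketch that scans an archive of tweets for the phrase and records the earliest geolocated mention per country; the file name and the assumption of classic Twitter API v1.1 JSON objects (with created_at, text, and place fields) are illustrative rather than details from the paper:

```python
import json
from datetime import datetime

TWITTER_TIME = "%a %b %d %H:%M:%S %z %Y"  # classic Twitter API timestamp format

first_arrival = {}  # country code -> earliest geolocated mention

# Hypothetical archive: one tweet JSON object per line.
with open("tweets_2012.jsonl") as f:
    for line in f:
        tweet = json.loads(line)
        place = tweet.get("place")
        if not place:
            continue  # keep only geolocated tweets
        if "gangnam style" not in tweet.get("text", "").lower():
            continue
        country = place["country_code"]
        ts = datetime.strptime(tweet["created_at"], TWITTER_TIME)
        if country not in first_arrival or ts < first_arrival[country]:
            first_arrival[country] = ts

# Arrival order of the meme, by country.
for country, ts in sorted(first_arrival.items(), key=lambda kv: kv[1]):
    print(country, ts.isoformat())
```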

That reveals the way the video spread, initially to the Philippines and from there to the rest of the world. That’s probably because the Philippines is relatively close to South Korea but has stronger links to the rest of the world through its diaspora. It also has stronger English-language links.

But none of that reveals the classic wavelike pattern that epidemiologists expect when viral events occur. Indeed, the spread of the video when plotted against geographic distance from South Korea looks more or less random.

That’s because geographic distance is not the key factor in the spread of information over social networks. What matters instead is the strength of the links from one area to another: places with strong social ties to the source are likely to receive information more quickly than places with weak ties.

And indeed, that’s exactly what Kallus and co find. That points to a way of replacing geographic distance with an effective distance that captures the speed at which information can spread between two places. Once Kallus and co make this substitution, the expected wavelike pattern emerges.
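The notion of effective distance can be made concrete with a small sketch. Following the formulation popularized by Brockmann and Helbing for disease spread, one can set the effective distance from one region to another to 1 − log P, where P is the fraction of the first region’s outgoing social ties that point at the second, so that strongly connected places end up “close.” The tie counts below are invented purely for illustration:

```python
import numpy as np
import networkx as nx

# Invented matrix of social-tie counts between four regions;
# ties[i][j] is the number of links from region i to region j.
ties = np.array([
    [0, 500, 120, 30],
    [450, 0, 200, 80],
    [100, 180, 0, 60],
    [25, 90, 70, 0],
], dtype=float)

# P[i][j]: fraction of region i's outgoing ties that point at region j.
P = ties / ties.sum(axis=1, keepdims=True)

# Effective distance d = 1 - log P (after Brockmann & Helbing):
# well-connected pairs are "close" even if geographically far apart.
G = nx.DiGraph()
n = len(P)
for i in range(n):
    for j in range(n):
        if i != j and P[i, j] > 0:
            G.add_edge(i, j, weight=1.0 - np.log(P[i, j]))

# Shortest effective path from the source region (index 0, the analog
# of South Korea) predicts the arrival order of the meme.
arrival = nx.single_source_dijkstra_path_length(G, 0)
print(sorted(arrival.items(), key=lambda kv: kv[1]))
```

Plotted against this shortest-path effective distance rather than against kilometers, arrival times line up into a wavefront again.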

They are even able to cross-check this pattern by searching Google Trends for the phrase “Gangnam Style” to see when people first searched for it in different parts of the world. Sure enough, the Google Trends results exactly match those from Twitter.
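That kind of cross-check is easy to reproduce in spirit with today’s tools. Below is a minimal sketch using the unofficial pytrends package; the sample of country codes and the zero-interest threshold are my own choices, not the paper’s method:

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")

# When did search interest first appear in each country during 2012?
# Note: Google may rate-limit repeated requests.
for geo in ["KR", "PH", "US", "GB", "BR"]:  # illustrative sample
    pytrends.build_payload(["Gangnam Style"],
                           timeframe="2012-07-01 2012-12-31", geo=geo)
    df = pytrends.interest_over_time()
    if df.empty:
        continue
    nonzero = df[df["Gangnam Style"] > 0]
    if not nonzero.empty:
        print(geo, nonzero.index[0].date())
```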

That’s interesting work that shows how modern memes spread in just the same way as ancient diseases did. So the “Gangnam Style” video pandemic spread in exactly the same way as bubonic plague!

That’s not really a surprise. But it does confirm the extraordinarily deep link between the physical world and the world of pure information. Just why these seemingly different things—matter and information—share these similar behaviors is not clearly understood. But it does provide ample reason for further investigation.

Ref: http://ift.tt/2v89sX4: Video Pandemics: Worldwide Viral Spreading of Psy’s Gangnam Style Video

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2gZN6DU

Crucial Steps Ahead for Flying Cars

Flying cars are up against a wall — literally. Turning aircraft into street-safe machines requires manufacturers to prove their safety standards in crash tests. So at least one expensive prototype needs to get smashed to smithereens, while its dummy passengers survive. This is no small financial hurdle, and for a decade the industry has been just a few years away from getting models street-certified.

Flying Cars, or Driveable Planes?

Farthest along, perhaps, are the MIT-graduate founders o

from Discover Main Feed http://ift.tt/2uWuRVm

So long, Flash: Adobe will kill plug-in by 2020

Adobe is finally pulling the plug on Flash.

The software company on Tuesday said it plans to stop updating and distributing its Flash Player by the end of 2020.

The plug-in was a pioneer in the early days of the Internet, allowing users to view rich content like videos, games and other media. Flash used to be the standard way YouTube played its videos.

But the software has been plagued with bugs and security vulnerabilities in recent years. Modern browsers support open web standards like HTML5, allowing developers to embed content directly onto webpages. This has made add-on extensions like Flash mostly useless.

“For 20 years, Flash has helped shape the way that you play games, watch videos and run applications on the web. But over the last few years, Flash has become less common,” Anthony Laforge, product manager for Google Chrome, said in a blog post reacting to the news.

Three years ago, 80% of desktop Chrome users went to a website with Flash every day. Now, that number has fallen to 17% and continues to drop, according to Google.

Adobe (ADBE) said it will work with major tech giants like Apple (AAPL), Facebook (FB), Google (GOOG), Microsoft (MSFT) and Mozilla to discontinue Flash.

In response to the announcement, Microsoft said it would phase out Flash from its Microsoft Edge and Internet Explorer browsers. Mozilla said Flash will be disabled by default for most users in 2019.

The end of Flash was a long time coming as websites have moved away from using the plug-in. In 2011, Adobe said it would no longer develop the software on mobile devices.

In 2010, Apple cofounder Steve Jobs famously wrote a scathing letter about Flash, saying iPhones and iPads would never support the software.

The news follows Microsoft’s announcement that its iconic Paint software was on its list of “deprecated” features and could be removed from future Windows updates.

from Business and financial news – CNNMoney.com http://ift.tt/2v5lELc

The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers

By now our future is clear: We are to be cared for, entertained, and monetized by artificial intelligence. Existing industries like healthcare and manufacturing will become much more efficient; new ones like augmented reality goggles and robot taxis will become possible.

But as the tech industry busies itself with building out this brave new artificially intelligent, and profit boosting, world, it’s hitting a speed bump: Computers aren’t powerful and efficient enough at the specific kind of math needed. While most attention to the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there’s a less obvious scramble going on to build a new breed of computer chip needed to power our AI future.

One data point that shows how great that need is: software companies Google and Microsoft have become entangled in the messy task of creating their own chips. They’re racing against a new crop of startups peddling their own AI-centric silicon—and probably Apple, too. As well as transforming our lives with intelligent machines, the contest could shake up the established chip industry.

Microsoft revealed its AI chip-making project late on Sunday. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft’s research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Microsoft wants you to be able to smoothly reach out and interact with the virtual objects overlaid on your vision, and says nothing on the market could run machine learning software efficiently enough for the battery-powered device that sits on your head.

Microsoft’s project comes in the wake of Google’s own deep learning chip, announced in 2016. The TPU, for tensor processing unit, was created to make deep learning more efficient inside the company’s cloud. Google told WIRED earlier this year that the chip saved it from building 15 new data centers as demand for speech recognition soared. In May, Google announced it had made a more powerful version of its TPU and that it would rent out access to the chips to customers of its cloud computing business.

News that Microsoft has built a deep learning processor for HoloLens suggests Redmond wouldn’t need to start from scratch to prep its own server chip to compete with Google’s TPUs. Microsoft has spent several years making its cloud more efficient at deep learning using so-called field-programmable gate arrays, a kind of chip that can be reconfigured after it’s manufactured to make a particular piece of software or algorithm run faster. It plans to offer those to cloud customers next year. But when asked recently if Microsoft would make a custom server chip like Google’s, Doug Burger, the technical mastermind behind Microsoft’s rollout of FPGAs, said he wouldn’t rule it out. Pieces of the design and supply chain process used for the HoloLens deep learning chip could be repurposed for a server chip.

Google and Microsoft’s projects are the most visible part of a new AI-chip industry springing up to challenge established semiconductor giants such as Intel and Nvidia. Apple has for several years designed the processors for its mobile devices, and is widely believed to be working on creating a new chip to make future iPhones better at artificial intelligence. Numerous startups are working on deep learning chips of their own, including Groq, founded by ex-Google engineers who worked on the TPU. “Companies like Intel and Nvidia have been trying to keep on selling what they were already selling,” says Linley Gwennap, founder of semiconductor industry analysts the Linley Group. “We’ve seen these leading cloud companies and startups moving more quickly because they can see the need in their own data centers and the wider market.”

Graphics chip maker Nvidia has seen sales and profits soar in recent years because its chips are better suited than conventional processors to training deep learning software. But the company has mostly chosen to modify and extend its existing chip designs rather than making something tightly specialized to deep learning from scratch, Gwennap says.

You can expect the established chip companies to fight back. Intel, the world’s largest chipmaker, bought an AI chip startup called Nervana last summer and is working on a dedicated deep learning chip built on the startup’s technology. Intel has the most sophisticated and expensive chip manufacturing operation on the planet. But representatives of the large and small upstarts taking on the chip industry say they have critical advantages. One is that they don’t have to make something that fits within an existing ecosystem of chips and software originally developed for something else.

“We’ve got a simpler task because we’re trying to do one thing and can build things from the ground up,” says Nigel Toon, CEO and co-founder of Graphcore, a UK startup working on a chip for artificial intelligence. Last week the company disclosed $30 million of new funding, including funds from Demis Hassabis, the CEO of Google’s DeepMind AI research division. Also in on the funding round: several leaders from OpenAI, the research institute co-founded by Elon Musk.

At the other end of the scale, the big cloud companies can exploit their considerable experience in running and inventing machine learning services and techniques. “One of the things we really benefited from at Google was we could work directly with the application developers in, say, speech recognition and Street View,” says Norm Jouppi, the engineer who leads Google’s TPU project. “When you’re focused on a few customers and working hand in hand with them it really shortens the turnaround time to build something.”

Google and Microsoft built themselves up by inventing software that did new things with chips designed and built by others. As more is staked on AI, the silicon substrate of the tech industry is changing—and so is where it comes from.

from Wired Top Stories http://ift.tt/2gZsP1n

MIT-developed plugin makes CAD changes ‘instant’

A new computer-aided-design (CAD) plug-in could drastically improve products you use on a daily basis. Researchers from MIT and Columbia University say the tool will allow engineers to develop prototypes in real time. They claim its ease of use will have an immediate impact on objects with complex designs, such as cars, planes, and robots.

Many of the products you use are developed using computer-aided-design systems. However, the laborious nature of those same systems can also make them a hindrance during the design process. They can prove particularly time-consuming for engineers developing intricate products (like cars), which undergo a range of modifications.

According to its creators, the new InstantCAD plug-in can cut days (and even weeks) from the development period. This is mainly down to its use of a custom algorithm that provides instant feedback on how to improve an item’s design. For example, if you were building a drone, it could tell you how to make it as lightweight as possible while still being able to carry your desired weight.
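The article doesn’t spell out the algorithm, but the usual recipe for this kind of instant feedback is to pay the simulation cost up front and answer interactive queries by interpolation. Here is a minimal Python sketch of that idea; the two-parameter drone model and every number in it are invented, and this illustrates the general precompute-then-interpolate approach rather than the actual InstantCAD method:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def simulate(arm_length, rotor_radius):
    """Stand-in for a slow physics simulation of a two-parameter drone.
    Returns (mass in kg, max payload in kg); the formulas are made up."""
    mass = 0.6 + 2.0 * arm_length + 5.0 * rotor_radius ** 2
    payload = 12.0 * rotor_radius - 1.5 * arm_length
    return mass, payload

# Offline: sample the design space once (the slow part).
arms = np.linspace(0.10, 0.40, 16)     # arm length, meters
rotors = np.linspace(0.05, 0.20, 16)   # rotor radius, meters
mass_grid = np.empty((arms.size, rotors.size))
payload_grid = np.empty_like(mass_grid)
for i, a in enumerate(arms):
    for j, r in enumerate(rotors):
        mass_grid[i, j], payload_grid[i, j] = simulate(a, r)

mass_f = RegularGridInterpolator((arms, rotors), mass_grid)
payload_f = RegularGridInterpolator((arms, rotors), payload_grid)

# Online: as the designer drags a slider, feedback is a cheap lookup.
def feedback(arm_length, rotor_radius, required_payload=1.0):
    m = mass_f([[arm_length, rotor_radius]]).item()
    p = payload_f([[arm_length, rotor_radius]]).item()
    return m, p, p >= required_payload

print(feedback(0.25, 0.12))  # mass, payload, meets-requirement?
```

Once the grid exists, each slider movement costs a table lookup rather than a fresh simulation, which is the sense in which feedback becomes “instant.”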

"From more ergonomic desks to higher-performance cars, this is really about creating better products in less time," said lead researcher Adriana Schulz. "We think this could be a real game changer for automakers and other companies that want to be able to test and improve complex designs in a matter of seconds."

InstantCAD is described in a paper that will be presented at this month’s SIGGRAPH computer graphics conference in Los Angeles. Its authors claim the optimization of tricky CAD systems is critical in a world where 3D printing and robotics are becoming more accessible.

Source: MIT News

from Engadget http://ift.tt/2tGcFzU