The World’s Most Powerful Supercomputer Is an Absolute Beast

https://ift.tt/2HwP8nb

A row of Summit’s server racks.
Photo: Oak Ridge National Laboratory

Behold Summit, a new supercomputer capable of making 200 million billion calculations per second. It marks the first time in five years that a machine from the United States has been ranked as the world’s most powerful.

The specs for this $200 million machine defy comprehension. Built by IBM and Nvidia for the US Department of Energy’s Oak Ridge National Laboratory, Summit is a 200-petaflop machine, meaning it can perform 200 quadrillion calculations per second. That’s about a million times faster than a typical laptop computer. As the New York Times put it, a human would require 6.3 billion years to do what Summit can do in a single second. Or as stated by MIT Technology Review, “everyone on Earth would have to do a calculation every second of every day for 305 days to crunch what the new machine can do in the blink of an eye.”
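Those comparisons are straightforward arithmetic and easy to sanity-check. A minimal Python sketch, assuming Summit’s 200-petaflop peak (2 × 10^17 calculations per second) and a 2018 world population of roughly 7.6 billion:

```python
# Sanity-checking the comparisons above. Assumed figures: 200 petaflops
# peak (2e17 calculations per second) and a 2018 world population of
# roughly 7.6 billion people.
FLOPS = 200e15
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# One person doing one calculation per second:
human_years = FLOPS / SECONDS_PER_YEAR
print(f"{human_years / 1e9:.1f} billion years")  # about 6.3 billion years

# Everyone on Earth doing one calculation per second:
population = 7.6e9
days = FLOPS / (population * 24 * 3600)
print(f"about {days:.0f} days")  # about 305 days
```

Both published comparisons check out against the 200-petaflop figure.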

The machine, with its 4,608 servers, 9,216 central processing chips, and 27,648 graphics processors, weighs 340 tons. The system is housed in a 9,250 square-foot room at Oak Ridge National Laboratory’s facility in Tennessee. To keep this machine cool, 4,000 gallons of water are pumped through the system. The 13 megawatts of energy required to power this behemoth could light up over 8,000 US homes.

Summit is now the world’s most powerful supercomputer, and it is 60 percent faster than the previous title holder, China’s Sunway TaihuLight. It’s the first time since 2013 that a US-built computer has held the title, showing the US is keeping up with its main rival in this area, China. Summit is eight times more powerful than Titan, previously America’s top-ranked system.

Photo: Oak Ridge National Laboratory

As MIT Technology Review explains, Summit is the first supercomputer designed from the ground up to handle AI applications, such as machine learning and neural networks. Its thousands of AI-optimized chips, produced by Nvidia and IBM, allow the machine to crunch through hideous amounts of data in search of patterns imperceptible to humans. As noted in an Energy.gov release, “Summit will enable scientific discoveries that were previously impractical or impossible.”

Summit and machines like it can be used for all sorts of processor-heavy applications, such as designing new aircraft, climate modeling, simulating nuclear explosions, creating new materials, and finding causes of disease. Indeed, its potential to help with drug discovery is huge; Summit, for example, could be used to hunt for relationships between millions of genes and cancer. It could also help with precision medicine, in which drugs and treatments are tailored to individual patients.

From here, we can look forward to the next generation of computers, so-called “exascale” computers capable of executing a billion billion (or one quintillion) calculations per second. And we may not have to wait long: The first exascale computers may arrive by the early 2020s.

[Energy.gov, New York Times, MIT Technology Review]

Tech

via Gizmodo http://gizmodo.com

June 8, 2018 at 03:57PM

Surprise, Facebook Reportedly Gave Companies Your Friends’ Data After It Said It Wouldn’t

https://ift.tt/2JvqFjR

Stop me if you’ve heard this one before, but it looks like Facebook may have been sharing more of your data than you thought it was. The Wall Street Journal reported Friday that the social network cut deals with a number of companies to provide access to user records and friend data even after its policy change that prevented apps from scraping that very information.

According to the report, Facebook reached agreements with a number of major corporations to provide data about the friends of its users. The information handed over to companies included details like phone numbers and a metric called “friend link” that determined how much communication and connectivity there was between users and their friends.

Disclosure of the deals punctures a hole in the picture Facebook has tried to paint as a suddenly user-friendly, privacy-minded company after 2014—not that anyone was buying that image anyway. That year, Facebook cut off the significant amount of access app developers could pull from users on the platform, including completely restricting the ability to scrape data from a user’s friends without their consent.

Prior to 2014, Facebook allowed app makers to suck up a significant amount of data from people without their permission—a policy the company has since had to pay for while facing scrutiny over the Cambridge Analytica scandal. The UK-based political data firm acquired information on up to 87 million people, collected without their consent through a Facebook app.

Facebook supposedly realized the error of its ways and turned off the spigot that allowed friend data to be collected—but as it turns out you could just buy your way back in if you really wanted. Citing “court documents, company officials and people familiar with the matter,” the Wall Street Journal reported that Facebook made deals with Royal Bank of Canada, Nissan—both advertisers on the platform—and Nuance Communications, which was working with Fiat at the time. The agreements were also completely separate from Facebook’s data sharing program with device manufacturers, which the company owned up to earlier this week.

Facebook has copped to the agreements, which it said lasted from just weeks to several months, but tried to minimize them as much as possible. “As we were winding down over the year, there was a small number of companies that asked for short-term extensions, and that, we worked through with them,” Ime Archibong, Facebook’s vice president of product partnerships, told WSJ. “But other than that, things were shut down.”

Per the Journal, Facebook internally called the deals “whitelists,” which may be a little bit of insight into how the company viewed the arrangements. Whitelisting someone typically means providing them access that they otherwise wouldn’t have. At the time, Facebook had cut off access to friend data that was previously accessible to app developers using Facebook. These companies were effectively whitelisted to use that data. Access is never truly cut off if you have some money to throw around.

[Wall Street Journal]

Tech

via Gizmodo http://gizmodo.com

June 8, 2018 at 09:27PM

A closer look at RED’s audacious Hydrogen One phone

https://ift.tt/2ss6Inz

In a van sitting between the high school from Pretty Little Liars and the Stars Hollow gazebo from Gilmore Girls, RED founder Jim Jannard takes out his smartphone — the Hydrogen One — and starts whipping through demos with me. We’re at AT&T’s Shape entertainment conference at Warner Brothers Studios and this might be the most surreal hands-on experience I’ve ever had with a phone. Then again, this might be the most surreal smartphone I’ve ever used.

Companies have tried building modular smartphones and have met with varying degrees of success—the LG G5 and its “friends” utterly flopped, while Motorola continues to push its various Moto Mods. Companies have also tried to build smartphones with eye-popping 3D displays, and they’ve been abject failures. Remember Amazon’s Fire Phone? No one has tried to squeeze both of those gimmicks into a single smartphone except for RED, a company that has only ever made cinema-grade digital cameras. A healthy dose of skepticism about all this isn’t just helpful—it’s required. Fortunately, Jannard isn’t fazed by the skepticism. He speaks with the surety of a man with little to lose.

That’s because he doesn’t seem stressed about what will happen when the Hydrogen launches on AT&T and Verizon this August. That’s not because he’s sure it’ll be a massive commercial success, either. It’s because he built the phone of his dreams.

“This is the phone I wanted,” he told me. “If we don’t sell one, I have the world’s most expensive phone but I’m completely happy and satisfied with that.” To underscore his point, he puts things a little more bluntly later in our conversation.

“I’ve got the coolest fucking phone in the world in my pocket,” he said. “And I paid for it.”


The first thing you’ll notice about the Hydrogen One is its look — I’d call its style “badass-utilitarian.” While other companies have sought to make stylish phones with glass bodies, the Hydrogen’s aluminum (or titanium) frame features grippy, scalloped edges and a patch of what looks like carbon fiber around its dual camera. It’s a handful, certainly, but it’s not nearly as heavy or as dense as I would’ve expected.

Designs are meant to give you a sense of a product’s character and in this case, that character is very clear: the Hydrogen One is a tool, not a toy. Meanwhile, you’ll find the usual smartphone flourishes in the usual places. There’s a set of volume keys on the left side, a power button and shutter button on the right, and a slot for microSD cards and the SIM up top next to the headphone jack. To get a real sense of why the Hydrogen One is special, you need to see its front and back.

I wish I could show you the Hydrogen One’s face, but I can’t — RED won’t let people take photos or video of the phone’s front since 2D media wouldn’t do justice to the “holographic” display. That’s unfortunate for you, because it seems like RED is really onto something here. When you’re peeking at the homescreen or swiping through your apps in the launcher, the 5.7-inch Quad HD display looks like any other. Fire up some compatible content, however, and the screen springs to life.

Still photos of flowers and fire hydrants Jannard shot with the dual camera seemed to leap off the display, and watching clips from movies like Brave made me feel like the films were unfolding around me. I paused the highlight reel a few times to get a better look, and while you won’t be able to see things that weren’t already there, the added depth gave scenes a sense of realness and presence I’ve never experienced on a smartphone. The so-called 4-View (or 4V) effect is strongest when you’re looking at the screen dead-on, but you’ll still get a sense of it when you peer at the screen from an angle. More importantly, the 3D effect seemed to persist as I moved my head around — an impressive feat when you remember that lenticular 3D looks jumpy and jarring when you switch between different perspectives.

These 4V visuals don’t just apply to videos, either. Jannard showed me a recorded demo of a first-person shooter game that looked a lot like Afterpulse, and when the player lined up the reticle to pop an enemy in the head, the barrel of the scope seemed to zoom toward me. It’s unclear what kind of work developers would have to do to optimize games for the Hydrogen One, but that’s arguably overshadowed by a bigger question: would they even bother to do so for a single phone? The details are still murky. Maybe the most novel demo I tried was a 4V-enabled video chat app, in which I could see my own face — captured by multiple front-facing cameras with the same in-your-face depth as those movie clips. It wasn’t just cool; it was utterly transfixing.

I’m told the heart of the experience is a layer of special material beneath the display capable of bouncing light in more than two directions (sort of like this crazy projector screen I saw at CES) to provide a more pronounced sense of depth. Meanwhile, software running directly on the phone’s Snapdragon 835 is used to effectively fill in the gap between the two perspectives found in traditional 3D content — it’s all happening on the fly and in real-time. This results in the most immersive visuals I’ve ever encountered on a phone. No wonder RED wants you to actually see the Hydrogen One before you draw your conclusions about its screen: words and photos don’t do it justice.
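The details of that gap-filling step are proprietary, but the general idea can be sketched. The hypothetical `interpolate_views` helper below uses naive linear blending where a real pipeline would use depth information; it only illustrates how two source perspectives can become four:

```python
# Naive stand-in for the view-synthesis step described above. The Hydrogen
# One's real pipeline is proprietary; this hypothetical helper blends a
# left/right stereo pair linearly, where real systems would use depth maps.

def interpolate_views(left, right, n_views=4):
    """Blend a left/right stereo pair into n_views evenly spaced perspectives."""
    views = []
    for i in range(n_views):
        t = i / (n_views - 1)  # 0.0 = left view, 1.0 = right view
        views.append([(1 - t) * l + t * r for l, r in zip(left, right)])
    return views

# Two-pixel toy "images": the four views sweep smoothly from left to right.
left_view = [0.0, 10.0]
right_view = [3.0, 13.0]
for view in interpolate_views(left_view, right_view):
    print(view)
```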

Chris Velazco/Engadget

You can capture and view your own photos and videos in 4V, but that’ll only remain interesting for so long. Given RED’s history in Hollywood, it’s no surprise to hear that the company is aggressively pursuing deals with film studios to get their content libraries up and running on the Hydrogen One. That’s where the Hydrogen Network comes in. A number of potential partnerships are still being worked out, but Jannard confirmed that Lionsgate is on board and will bring its entire 3D library to the Hydrogen One. The process of converting existing 3D content to 4V is apparently quite simple, and if studios made the “maximum amount of tweaks and adjustments,” it would take about three hours to make a 4V file out of a 1.5-hour film.

The other way the Hydrogen One stands out is its modular backside: you’ll be able to swap different components onto the phone, like a cinema-grade camera that you can hook existing lenses up to. As mentioned, this isn’t a new idea, but since the Hydrogen One is arguably geared toward people who are used to dropping big bucks on camera gear, it seems like a safer (and more lucrative) approach than I’ve seen from other companies. As far as Jannard is concerned, the limited success achieved by other modular smartphones doesn’t mean the concept itself is flawed — it means that the modules those companies have made aren’t meaningful enough. Jannard wouldn’t elaborate on what other kinds of modules the company plans to build, but he did note that RED is open to working with outside partners to build additional hardware for the Hydrogen platform.

“If there are companies that can add value and we don’t have to do it, we’ll absolutely embrace that,” he said.

RED is being very picky about who it works with to build Hydrogen add-ons, mostly because it wants to keep “crap modules” from being attached to the phone. That said, the company is already making some progress — Jannard confirmed that RED is talking to one potential partner about developing a module, and he thinks it’s “likely to happen.”

A patent filed in 2015 reveals the company’s elaborate mobile vision.

Of all the questions that surround the Hydrogen One, one looms larger than the rest: Why build a smartphone like this? Even mobile incumbents have trouble navigating the market, after all. The answer is a complicated one, and it stems from deeper in the past than you might expect. Before creating RED in the mid-2000s, Jannard was best known as the founder and occasional CEO of Oakley, a company that had spent decades crafting sunglasses, goggles and accessories for active, outdoorsy types. His time at Oakley’s helm gave Jannard a deep appreciation for the process of creating products for regular people, and that hasn’t changed despite a long tenure at his pro-oriented camera company.

“I’m a consumer product guy,” he told me. “When I started RED, I always had the idea that at some point we’d leverage the library, the team, the technology into a consumer product. I thought it would take five or six years — it took twelve.”

Chris Velazco/Engadget

Despite knowing that he wanted to make something for consumers, Jannard didn’t set out knowing RED would build a smartphone — especially one that relied so much on unorthodox technology. The decision to go ahead with a phone like the Hydrogen One came from two sources: his understanding that smartphones are profoundly influential in people’s lives, and a strong sense of what he himself wanted to own. Specifically, he couldn’t believe that people weren’t paying attention to the potential of immersive, next-generation displays like the ones RED found in Leia’s labs.

“The idea of 3D [in a smartphone] is not bad, it’s just that there was never an implementation as convenient as this,” Jannard said. “You don’t need to wear anything, you don’t need to charge anything — it seems like a no-brainer to me.”

Add a heaping portion of RED’s expertise with cameras and the Hydrogen One was born. It’s a radical departure from the norm and, as a result, it ticks some boxes people didn’t even know they wanted to be ticked. That’s just how Jannard wanted it. He told me he saw the mobile industry becoming mired in a “sea of sameness” and the last thing he wanted to do was wade in himself with something tragically conventional. The Hydrogen One could be a game-changer. It could also be a flop. One thing remains clear, though: RED has built a terribly impressive, wildly ambitious device, and it feels like the very best kind of weird. Regardless of its potential for success, the rest of the industry could take a lesson or two from the Hydrogen One.

Tech

via Engadget http://www.engadget.com

June 2, 2018 at 07:54PM

Microsoft confirms it’s buying GitHub for $7.5 billion

https://ift.tt/2xPAvfd


AOL

The rumors are true: Microsoft is buying GitHub, the online code-hosting service home to countless open-source projects, for $7.5 billion in stock. “Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” CEO Satya Nadella said in a post on the Microsoft blog. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

This is perhaps the biggest signal yet that Microsoft is committed to moving away from siloing off its work and becoming more open overall. We’ve watched the company become a software developer on onetime rival platforms Android and iOS, rather than carrying the dim torch for Windows Mobile. Redmond has also put a ton of effort into fostering open-code and open-source initiatives. Buying GitHub is the logical conclusion of that shift.

Current Microsoft VP Nat Friedman will take on the role of GitHub CEO. Microsoft expects the purchase to pay dividends for GitHub, too, insofar as Redmond predicts it could boost enterprise adoption of the platform. The purchase price will be paid in stock, and the deal is expected to be finalized by year’s end. There’s a shareholder call at 10am Eastern if you’d like to hear more.

Tech

via Engadget http://www.engadget.com

June 4, 2018 at 08:18AM

This mesh WiFi router can track motion to protect your family

https://ift.tt/2kSDLx2

Back at CEATEC in October, I came across Origin Wireless and its clever algorithm that can turn any WiFi mesh network into a simple home security plus well-being monitoring system, and that’s without using cameras or wearables — just plug and play. At the time, I saw a working demo that left me impressed, but here at Computex, the company has moved its setup to a real-life environment (a lovely hotel room high up in Taipei), and I was finally able to try its fall detection. Better yet, it turns out that Origin Wireless has already been working with Qualcomm to integrate its technology into the ASUS Lyra router, meaning we’re one step closer to seeing these features outside the lab.

Here’s a quick primer for those who missed the news the first time around. In a nutshell, Origin Wireless’ Time Reversal Machine algorithm relies on the analysis of WiFi multipath signals, as in the unwanted “noise” bounced off the walls. A designated router sends out a probing signal, then another router copies this received signal plus its multipaths and sends it all back but in a backward sequence — hence “time reversal.” All of this is happening 30 times per second — a slight drop from the original 50Hz speed.

Even if you’re not quite following this explanation, all you really need to know is that the software is constantly monitoring for changes between the original signals and the returned signals. This can then generate a signature to reflect the type of environmental changes in the space at any instance.
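That comparison step can be illustrated with a toy sketch. This is not Origin Wireless’ actual algorithm: the signature vectors, noise levels, and `similarity` function below are all invented for the example, which only shows how a drop in correlation against an empty-room baseline could flag motion:

```python
import math
import random

# Toy illustration of the comparison described above (NOT Origin Wireless'
# actual algorithm). Each "signature" stands in for a multipath channel
# profile; motion in the room perturbs it away from the empty-room baseline.

def similarity(a, b):
    """Normalized correlation between two signatures (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(64)]       # empty-room profile
quiet = [x + random.gauss(0, 0.01) for x in baseline]    # tiny sensor noise
motion = [x + random.gauss(0, 0.8) for x in baseline]    # person moving

print(similarity(baseline, quiet))   # close to 1.0: no change detected
print(similarity(baseline, motion))  # noticeably lower: motion flagged
```

In a real deployment this check would run on every probe cycle, 30 times per second.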

The algorithm has already been trained with machine learning to recognize specific changes. In this demo, I once again got to experience motion and breathing detection using purely WiFi signals, except that Origin Wireless is now using three ASUS Lyra routers instead of its own prototype boxes. No cameras were needed (the live-video feeds on the monitor were just for display purposes), and I also didn’t need to put on a wearable device.

Most importantly, these don’t require a direct line of sight between the routers and the subjects, as is the case with WiFi itself. By combining both motion and breathing detection, this mesh network is able to moonlight as a sleep-quality monitor, which can be handy for looking after elderly folks.

Even the motion detection alone can be used for home surveillance, and by using more than two routers in the same network, the system is able to give you a rough estimation of where the motion occurred.

What’s new this time is the fall-detection demo. Normally, the challenge with fall detection without the use of cameras or wearables is the fact that when someone falls, it all happens in a split second. But since Origin Wireless’ solution is scanning for changes 30 times per second, this isn’t a problem. Again, with machine learning, the algorithm already knows what kind of signature to expect when someone falls.

With the exception of one false reading after I vacated the space, the fall detection worked well for me in both the bedroom and bathroom. That said, this demo was performed with Origin Wireless’ own engineering routers, so there’s still some work to be done before the feature can be integrated.

Still, according to Chairman and COO Jeng-Feng Lee, Origin Wireless’ technology may end up on WiFi routers later this year by way of equipment vendors, especially those who serve elderly care centers. As for us younger folks, we may not get to enjoy these features at home until sometime next year, but Lee didn’t rule out the possibility of finding a consumer brand that is willing to speed things up a little. After all, this is a purely software-based solution, which can even potentially be added to WiFi mesh routers that you can already buy today.


Tech

via Engadget http://www.engadget.com

June 6, 2018 at 03:36AM

Microsoft’s deep sea data center is now operational

https://ift.tt/2Ji9eaB


Microsoft

Data centers are hot, noisy and usually inefficiently located. Microsoft’s solution? Put them at the bottom of the sea. Following initial prototype testing, the company’s years-long Project Natick is finally delivering Microsoft’s vision of sustainable, prepackaged and rapidly deployed data centers that operate from the seafloor. Yep. Underwater.

The first such data center has been installed using submarine technology on the seafloor near Scotland’s Orkney Islands, and is already processing workloads via 12 racks containing a total of 864 servers. The system requires just under a quarter of a megawatt of power when operating at full capacity, which comes from renewable energy generated onshore. The shipping-container-sized setup also includes cooling technology, but much of the usual logistics and cost has been eliminated thanks to the ocean’s naturally low temperatures at depth.
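Those figures imply a plausible per-server power draw, which is worth a quick check. Assuming “just under a quarter of a megawatt” means roughly 240 kW (a hypothetical rounding):

```python
# Rough check of the power figures above, assuming "just under a quarter
# of a megawatt" means about 240 kW at full capacity (assumed figure).
total_watts = 240_000
server_count = 864
watts_per_server = total_watts / server_count
print(round(watts_per_server))  # roughly 278 W per server
```

A few hundred watts per server is in line with typical data-center hardware, so the stated budget is self-consistent.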

The team will spend the next 12 months monitoring the performance of the data center, keeping tabs on everything from power consumption and internal humidity, to sound and temperature levels – although it’s been designed to operate for at least five years without maintenance. It’ll also keep a close eye on environmental impacts.

The project is born of an increasing demand for cloud computing infrastructure near heavily populated areas. While putting data centers in the sea might seem counterintuitive, more than half of the world’s population lives within 120 miles of the coast – an area rich with renewable energy potential – so positioning them here means a faster, smoother online experience for local communities. If all goes to plan, Project Natick could mark the beginning of a completely new way of managing internet connectivity.

Tech

via Engadget http://www.engadget.com

June 6, 2018 at 07:24AM