A closer look at RED’s audacious Hydrogen One phone

https://ift.tt/2ss6Inz

In a van sitting between the high school from Pretty Little Liars and the Stars Hollow gazebo from Gilmore Girls, RED founder Jim Jannard takes out his smartphone — the Hydrogen One — and starts whipping through demos with me. We’re at AT&T’s Shape entertainment conference at Warner Brothers Studios and this might be the most surreal hands-on experience I’ve ever had with a phone. Then again, this might be the most surreal smartphone I’ve ever used.

Companies have tried building modular smartphones, with varying degrees of success — the LG G5 and its “friends” utterly flopped, while Motorola continues to push its various Mods. Companies have also tried to build smartphones with eye-popping 3D displays, and they’ve been abject failures. Remember Amazon’s Fire Phone? No one has tried to squeeze both of those gimmicks into a single smartphone except for RED, a company that has only ever made cinema-grade digital cameras. A healthy dose of skepticism about all this isn’t just helpful — it’s required. Fortunately, Jannard isn’t fazed by the skepticism. He speaks with the surety of a man with little to lose.

That’s because he doesn’t seem stressed about what will happen when the Hydrogen launches on AT&T and Verizon this August. That’s not because he’s sure it’ll be a massive commercial success, either. It’s because he built the phone of his dreams.

“This is the phone I wanted,” he told me. “If we don’t sell one, I have the world’s most expensive phone but I’m completely happy and satisfied with that.” To underscore his point, he puts things a little more bluntly later in our conversation.

“I’ve got the coolest fucking phone in the world in my pocket,” he said. “And I paid for it.”

Gallery: Red Hydrogen One hands-on | 12 Photos

The first thing you’ll notice about the Hydrogen One is its look — I’d call its style “badass-utilitarian.” While other companies have sought to make stylish phones with glass bodies, the Hydrogen’s aluminum (or titanium) frame features grippy, scalloped edges and a patch of what looks like carbon fiber around its dual camera. It’s a handful, certainly, but it’s not nearly as heavy or as dense as I would’ve expected.

Designs are meant to give you a sense of a product’s character and in this case, that character is very clear: the Hydrogen One is a tool, not a toy. Meanwhile, you’ll find the usual smartphone flourishes in the usual places. There’s a set of volume keys on the left side, a power button and shutter button on the right, and a slot for microSD cards and the SIM up top next to the headphone jack. To get a real sense of why the Hydrogen One is special, you need to see its front and back.

I wish I could show you the Hydrogen One’s face, but I can’t — RED won’t let people take photos or video of the phone’s front since 2D media wouldn’t do justice to the “holographic” display. That’s unfortunate for you, because it seems like RED is really onto something here. When you’re peeking at the homescreen or swiping through your apps in the launcher, the 5.7-inch Quad HD display looks like any other. Fire up some compatible content, however, and the screen springs to life.

Still photos of flowers and fire hydrants Jannard shot with the dual camera seemed to leap off the display, and watching clips from movies like Brave made me feel like the films were unfolding around me. I paused the highlight reel a few times to get a better look, and while you won’t be able to see things that weren’t already there, the added depth gave scenes a sense of realness and presence I’ve never experienced on a smartphone. The so-called 4-View (or 4V) effect is strongest when you’re looking at the screen dead-on, but you’ll still get a sense of it when you peer at the screen from an angle. More importantly, the 3D effect seemed to persist as I moved my head around — an impressive feat when you remember that lenticular 3D looks jumpy and jarring when you switch between different perspectives.

These 4V visuals don’t just apply to videos, either. Jannard showed me a recorded demo of a first-person shooter game that looked a lot like Afterpulse, and when the player lined up the reticle to pop an enemy in the head, the barrel of the scope seemed to zoom toward me. It’s unclear what kind of work developers would have to do to optimize games for the Hydrogen One, but that’s arguably overshadowed by a bigger question: would they even bother to do so for a single phone? The details are still murky. Maybe the most novel demo I tried was a 4V-enabled video chat app, in which I could see my own face — captured by multiple front-facing cameras with the same in-your-face depth as those movie clips. It wasn’t just cool; it was utterly transfixing.

I’m told the heart of the experience is a layer of special material beneath the display capable of bouncing light in more than two directions (sort of like this crazy projector screen I saw at CES) to provide a more pronounced sense of depth. Meanwhile, software running directly on the phone’s Snapdragon 835 is used to effectively fill in the gap between the two perspectives found in traditional 3D content — it’s all happening on the fly and in real time. The result is the most immersive visuals I’ve ever encountered on a phone. No wonder RED wants you to actually see the Hydrogen One before you draw your conclusions about its screen: words and photos don’t do it justice.
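RED hasn’t published how that gap-filling works, but it resembles classic view synthesis: given one view of a stereo pair and a per-pixel disparity map, intermediate viewpoints can be approximated by shifting each pixel a fraction of its disparity. A minimal sketch of the idea — the linear-shift model and toy data here are my assumptions, not RED’s actual pipeline:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Approximate an intermediate view between two stereo views.

    left      : H x W image as a float array (the left view)
    disparity : H x W horizontal shift of each pixel between the two views
    alpha     : blend position, 0.0 = left view, 1.0 = right view
    """
    h, w = left.shape
    out = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        # sample each output pixel from the left view, shifted by a
        # fraction of its full disparity
        src = np.clip((cols - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y] = left[y, src]
    return out

# Toy example: a bright vertical bar with a uniform 4-pixel disparity.
img = np.zeros((4, 16))
img[:, 8] = 1.0
disp = np.full((4, 16), 4.0)
mid = synthesize_view(img, disp, 0.5)  # halfway view: bar moves 2 pixels
```

A real multi-view display would render several such intermediate views at once — one per viewing angle the lenticular-style layer can steer light toward.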

Chris Velazco/Engadget

You can capture and view your own photos and videos in 4V, but that’ll only remain interesting for so long. Given RED’s history in Hollywood, it’s no surprise to hear that the company is aggressively pursuing deals with film studios to get their content libraries up and running on the Hydrogen One. That’s where the Hydrogen Network comes in. A number of potential partnerships are still being worked out, but Jannard confirmed that Lionsgate is on board and will bring its entire 3D library to the Hydrogen One. The process of converting existing 3D content to 4V is apparently quite simple, and if studios made the “maximum amount of tweaks and adjustments,” it would take about 3 hours to make a 4V file out of a 1.5-hour film.

The other way the Hydrogen One stands out is its modular backside: you’ll be able to swap different components onto the phone, like a cinema-grade camera that you can hook existing lenses up to. As mentioned, this isn’t a new idea, but since the Hydrogen One is arguably geared toward people who are used to dropping big bucks on camera gear, it seems like a safer (and more lucrative) approach than I’ve seen from other companies. As far as Jannard is concerned, the limited success achieved by other modular smartphones doesn’t mean the concept itself is flawed — it means that the modules those companies have made aren’t meaningful enough. Jannard wouldn’t elaborate on what other kinds of modules the company plans to build, but he did note that RED is open to working with outside partners to build additional hardware for the Hydrogen platform.

“If there are companies that can add value and we don’t have to do it, we’ll absolutely embrace that,” he said.

RED is being very picky about who it works with to build Hydrogen add-ons, mostly because it wants to keep “crap modules” from being attached to the phone. That said, the company is already making some progress — Jannard confirmed that RED is talking to one potential partner about developing a module, and he thinks it’s “likely to happen.”

A patent filed in 2015 reveals the company’s elaborate mobile vision.

Of all the questions that surround the Hydrogen One, one looms larger than the rest: Why build a smartphone like this? Even mobile incumbents have trouble navigating the market, after all. The answer is a complicated one, and it stems from deeper in the past than you might expect. Before creating RED in the mid-2000s, Jannard was best known as the founder and occasional CEO of Oakley, a company that had spent decades crafting sunglasses, goggles and accessories for active, outdoorsy types. His time at Oakley’s helm gave Jannard a deep appreciation for the process of creating products for regular people, and that hasn’t changed despite a long tenure at his pro-oriented camera company.

“I’m a consumer product guy,” he told me. “When I started RED, I always had the idea that at some point we’d leverage the library, the team, the technology into a consumer product. I thought it would take five or six years — it took twelve.”

Chris Velazco/Engadget

Despite knowing that he wanted to make something for consumers, Jannard didn’t set out knowing RED would build a smartphone — especially one that relied so much on unorthodox technology. The decision to go ahead with a phone like the Hydrogen One came from two sources: his understanding that smartphones are profoundly influential in people’s lives, and a strong sense of what he himself wanted to own. Specifically, he couldn’t believe that people weren’t paying attention to the potential of immersive, next-generation displays like the ones RED found in Leia’s labs.

“The idea of 3D [in a smartphone] is not bad, it’s just that there was never an implementation as convenient as this,” Jannard said. “You don’t need to wear anything, you don’t need to charge anything — it seems like a no-brainer to me.”

Add a heaping portion of RED’s expertise with cameras and the Hydrogen One was born. It’s a radical departure from the norm and, as a result, it ticks some boxes people didn’t even know they wanted to be ticked. That’s just how Jannard wanted it. He told me he saw the mobile industry becoming mired in a “sea of sameness” and the last thing he wanted to do was wade in himself with something tragically conventional. The Hydrogen One could be a game-changer. It could also be a flop. One thing remains clear, though: RED has built a terribly impressive, wildly ambitious device, and it feels like the very best kind of weird. Regardless of its potential for success, the rest of the industry could take a lesson or two from the Hydrogen One.

Tech

via Engadget http://www.engadget.com

June 2, 2018 at 07:54PM

Microsoft confirms it’s buying GitHub for $7.5 billion

https://ift.tt/2xPAvfd


AOL

The rumors are true: Microsoft is buying GitHub, the online, open-source repository for code, for $7.5 billion in stock. “Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” CEO Satya Nadella said in a post on the Microsoft blog. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

This is perhaps the biggest signal from Microsoft that it’s committed to moving away from siloing off its work and becoming more open overall. We’ve seen the company become a software developer on onetime rival platforms Android and iOS, rather than carrying the dim torch for Windows Mobile. That, and Redmond has put a ton of effort into fostering open-code and open-source initiatives. Buying GitHub is the logical conclusion of that.

Current Microsoft VP Nat Friedman will take on the role of GitHub CEO. Microsoft expects the purchase to pay dividends for GitHub, insofar as Redmond predicts it could boost enterprise adoption of the platform. The purchase price will be paid in stock, and the deal is expected to be finalized by year’s end. There’s a shareholder call at 10AM Eastern if you’d like to hear more.

Tech

via Engadget http://www.engadget.com

June 4, 2018 at 08:18AM

This mesh WiFi router can track motion to protect your family

https://ift.tt/2kSDLx2

Back at CEATEC in October, I came across Origin Wireless and its clever algorithm that can turn any WiFi mesh network into a simple home security plus well-being monitoring system, and that’s without using cameras or wearables — just plug and play. At the time, I saw a working demo that left me impressed, but here at Computex, the company has moved its setup to a real-life environment (a lovely hotel room high up in Taipei), and I was finally able to try its fall detection. Better yet, it turns out that Origin Wireless has already been working with Qualcomm to integrate its technology into the ASUS Lyra router, meaning we’re one step closer to seeing these features outside the lab.

Here’s a quick primer for those who missed the news the first time around. In a nutshell, Origin Wireless’ Time Reversal Machine algorithm relies on the analysis of WiFi multipath signals, as in the unwanted “noise” bounced off the walls. A designated router sends out a probing signal, then another router copies this received signal plus its multipaths and sends it all back but in a backward sequence — hence “time reversal.” All of this is happening 30 times per second — a slight drop from the original 50Hz speed.

Even if you’re not quite following this explanation, all you really need to know is that the software constantly monitors for changes between the original signals and the returned signals. It can then generate a signature reflecting the type of environmental change in the space at any instant.
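In code terms, the core of the idea is a similarity score between a stored “empty room” channel measurement and each fresh probe: when the score dips, the multipath environment has changed, i.e. something moved. A toy sketch — the vector size, threshold and synthetic data are illustrative assumptions, not Origin Wireless’ actual algorithm:

```python
import numpy as np

def motion_scores(reference, probes, threshold=0.9):
    """Score each probe against a stored "empty room" channel measurement.

    reference : complex channel-state vector captured with the room at rest
    probes    : later measurements (Origin Wireless samples 30 per second)
    Returns (similarity scores, motion flags); a low score means the
    multipath environment has changed.
    """
    ref = reference / np.linalg.norm(reference)
    scores = []
    for p in probes:
        p = p / np.linalg.norm(p)
        # correlation against the stored reference: near 1.0 when the
        # room is unchanged, lower when motion disturbs the multipaths
        scores.append(abs(np.vdot(ref, p)))
    flags = [s < threshold for s in scores]
    return scores, flags

# Synthetic demo: a tiny perturbation (empty room) vs. a large one (motion).
rng = np.random.default_rng(0)
baseline = rng.normal(size=64) + 1j * rng.normal(size=64)
still = baseline + 0.01 * rng.normal(size=64)
moved = baseline + rng.normal(size=64) + 1j * rng.normal(size=64)
scores, flags = motion_scores(baseline, [still, moved])
```

The real system layers a trained classifier on top of these raw signatures to distinguish walking from breathing from falling, rather than using a single fixed threshold.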

The algorithm has already been trained with machine learning to recognize specific changes. In this demo, I once again got to experience motion and breathing detection using purely WiFi signals, except that Origin Wireless is now using three ASUS Lyra routers instead of its own prototype boxes. No cameras were needed (the live-video feeds on the monitor were just for display purposes), and I also didn’t need to put on a wearable device.

Most importantly, these don’t require a direct line of sight between the routers and the subjects, as is the case with WiFi itself. By combining both motion and breathing detection, this mesh network is able to moonlight as a sleep-quality monitor, which can be handy for looking after elderly folks.

Even the motion detection alone can be used for home surveillance, and by using more than two routers in the same network, the system is able to give you a rough estimation of where the motion occurred.

What’s new this time is the fall-detection demo. Normally, the challenge with fall detection without the use of cameras or wearables is the fact that when someone falls, it all happens in a split second. But since Origin Wireless’ solution is scanning for changes 30 times per second, this isn’t a problem. Again, with machine learning, the algorithm already knows what kind of signature to expect when someone falls.
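A crude way to picture the fall signature: over one second of 30 per-frame change scores, a fall looks like a brief violent spike followed by stillness, while walking produces sustained moderate change. The thresholds and scores below are invented for illustration; Origin Wireless’ trained model is far more sophisticated:

```python
import numpy as np

def classify_event(window):
    """Crudely label one second of per-frame change scores (30 fps).

    window : 30 change magnitudes, 0.0 = static scene, 1.0 = maximal change
    The 0.8 / 0.3 thresholds are illustrative, not trained values.
    """
    w = np.asarray(window, dtype=float)
    peak, mean = w.max(), w.mean()
    if peak > 0.8 and mean < 0.3:
        return "fall"      # brief violent change in an otherwise calm second
    if mean > 0.3:
        return "walking"   # sustained motion throughout the window
    return "idle"

# 27 calm frames, a split-second burst of change, then calm again
spike = [0.05] * 27 + [0.9, 0.95, 0.1]
steady = [0.5] * 30
```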

With the exception of one false reading after I vacated the space, the fall detection worked well for me in both the bedroom and bathroom. That said, this demo was performed with Origin Wireless’ own engineering routers, so there’s still some work to be done before the feature can be integrated.

Still, according to Chairman and COO Jeng-Feng Lee, Origin Wireless’ technology may end up on WiFi routers later this year by way of equipment vendors, especially those who serve elderly care centers. As for us younger folks, we may not get to enjoy these features at home until sometime next year, but Lee didn’t rule out the possibility of finding a consumer brand that is willing to speed things up a little. After all, this is a purely software-based solution, which can even potentially be added to WiFi mesh routers that you can already buy today.

Click here to catch up on all the latest news from Computex 2018!

Tech

via Engadget http://www.engadget.com

June 6, 2018 at 03:36AM

Microsoft’s deep sea data center is now operational

https://ift.tt/2Ji9eaB


Microsoft

Data centers are hot, noisy and usually inefficiently located. Microsoft’s solution? Put them at the bottom of the sea. Following initial prototype testing, the company’s years-long Project Natick is finally delivering Microsoft’s vision of sustainable, prepackaged and rapidly deployed data centers that operate from the seafloor. Yep. Underwater.

The first such data center has been installed using submarine technology on the seafloor near Scotland’s Orkney Islands, and is already processing workloads via 12 racks of 864 servers. The system requires just under a quarter of a megawatt of power when operating at full capacity, which comes from renewable energy generated onshore. The shipping-container-sized setup also includes cooling technology, but much of the usual logistics and costs surrounding this have been eliminated thanks to the ocean’s naturally low temperatures at depth.

The team will spend the next 12 months monitoring the performance of the data center, keeping tabs on everything from power consumption and internal humidity, to sound and temperature levels – although it’s been designed to operate for at least five years without maintenance. It’ll also keep a close eye on environmental impacts.

The project is born of an increasing demand for cloud computing infrastructure near heavily-populated areas. While putting data centers in the sea might seem counterintuitive, more than half of the world’s population lives within 120 miles of the coast – an area rich with renewable energy potential – so positioning them here means a faster, smoother online experience for local communities. So if all goes to plan, Project Natick could mark the beginning of a completely new way of managing internet connectivity.

Tech

via Engadget http://www.engadget.com

June 6, 2018 at 07:24AM

Experimental drone uses AI to spot violence in crowds

https://ift.tt/2M3J1u5


Amarjot Singh, YouTube

Drone-based surveillance still makes many people uncomfortable, but that isn’t stopping research into more effective airborne watchdogs. Scientists have developed an experimental drone system that uses AI to detect violent actions in crowds. The team trained their machine learning algorithm to recognize a handful of typical violent motions (punching, kicking, shooting and stabbing) and flag them when they appear in a drone’s camera view. The technology could theoretically detect a brawl that on-the-ground officers might miss, or pinpoint the source of a gunshot.

As The Verge warned, the technology definitely isn’t ready for real-world use. The researchers used volunteers in relatively ideal conditions (open ground, generous spacing and dramatic movements). The AI is 94 percent effective at its best, but that drops down to an unacceptable 79 percent when there are ten people in the scene. As-is, this system might struggle to find an assailant on a jam-packed street — what if it mistakes an innocent gesture for an attack? The creators expect to fly their drone system over two festivals in India as a test, but it’s not something you’d want to rely on just yet.
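The researchers describe pairing a pose estimator with a classifier over body-joint positions. As a toy illustration of just the matching step, nearest-template classification over flattened joint coordinates might look like the following; the templates and test pose are entirely invented:

```python
import numpy as np

# Hypothetical action templates: flattened 2D body-joint coordinates for
# each labeled motion (a real system learns these from training footage).
TEMPLATES = {
    "punching": np.array([0.9, 0.1, 0.5, 0.5, 0.2, 0.8]),
    "kicking":  np.array([0.1, 0.9, 0.5, 0.4, 0.8, 0.2]),
    "neutral":  np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5]),
}

def label_pose(pose):
    """Return the action template nearest to an estimated pose vector."""
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - pose))

# A pose close to the punching template should be labeled as such.
guess = label_pose(np.array([0.85, 0.15, 0.5, 0.5, 0.25, 0.75]))
```

The accuracy drop the researchers report in crowded scenes makes sense under this framing: the more bodies in frame, the noisier the estimated joint positions feeding the classifier.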

There’s a larger problem surrounding the ethical implications. There are questions about abuses of power and reliability, much as with facial recognition systems. Governments may be tempted to use this as an excuse to record aerial footage of people in public spaces, and could track the gestures of political dissidents (say, people holding protest signs or flashing peace symbols). It could easily combine with other surveillance methods to create a complete picture of a person’s movements. This might only find acceptance in limited scenarios where organizations both make it clear that people are on camera and reassure them that a handshake won’t lead to police at their door.

Tech

via Engadget http://www.engadget.com

June 6, 2018 at 09:06PM

‘Psychopath AI’ Offers A Cautionary Tale for Technologists

https://ift.tt/2JyW5sL

Researchers at MIT have created a psychopath. They call him Norman. He’s a computer.

Actually, that’s not really right. Though the team calls Norman a psychopath (and the chilling lead graphic on their homepage certainly backs that up), what they’ve really created is a monster.

Tell Us What You See

Norman has just one task, and that’s looking at pictures and telling us what he thinks about them. For their case study, the researchers use Rorschach inkblots, and Norman has some pretty gru

Tech

via Discover Main Feed https://ift.tt/1dqgCKa

June 7, 2018 at 03:45PM

Google reportedly won’t renew controversial drone imaging program

https://ift.tt/2kJS2Mi

Orion is a military drone that can fly for five days with 1,000 pounds of payload. Aurora says it can perform surveillance missions 3,000 miles from home base.

Aurora

It looks like the drama surrounding Google’s controversial involvement in Project Maven is coming to an end. Yet another report from Gizmodo on the subject says that Google won’t be renewing the project once its current contract runs out.

Project Maven is an initiative from the Department of Defense, which aims to “accelerate DoD’s integration of big data and machine learning.” The DoD has millions of hours of drone footage that pour in from around the world, and having humans comb through it for “objects of interest” isn’t a scalable proposition. So Maven recruited several tech firms for image recognition technology that could be used to identify objects of interest in the footage. As one of the leading AI firms, Google signed on to the project with a contract that reportedly lasts until 2019.

Tech

via Ars Technica https://arstechnica.com

June 1, 2018 at 04:38PM