Bose sunglasses hands-on: audio AR makes more sense than you think

http://ift.tt/2Ihll3l

This week, Bose made a surprise announcement that it was getting into the augmented reality game. But Bose makes headphones, right? And AR is all about glasses with visual overlays? Well, nobody told Bose, and that’s a good thing. The company believes the classic visual approach works fine for many things, but that it still presents barriers: the cost of dedicated hardware, battery life and so on.

Visual distractions also aren’t always appropriate, and sometimes all you need is relevant info — restaurant opening times, points of interest, for example — whispered in your ear. That’s what Bose is offering, and we (me and my colleague Cherlynn Low in the pictures and video above) tried it out for ourselves in downtown Austin at SXSW.

When Bose announced its AR intentions, it did so with a pair of sunglasses, not headphones. This might lead you to think there’s still a visual component, but there isn’t. The reason Bose chose a pair of specs is because a set of "smart headphones" would be predictable, and Bose wanted to shake things up a bit. So, it put its technology in sunglasses to show that it can be used in any kind of head-worn wearable, opening it up to all sorts of possibilities.

When a Bose representative handed me a pair of the glasses, I asked if they used bone-conduction for the audio, but he said no. I slipped them on, and instantly heard music. It had been playing before I put them on, but I hadn’t realized, as it was barely audible until the glasses were sitting on my ears. Bose says it worked on a super-thin mini speaker that "projects" audio into your ears, and was designed with this specific project in mind.

I’ll be honest: Cherlynn and I were both pretty impressed by the music-playing sunglasses on their own, and we hadn’t even moved on to the AR demo yet. The sound quality was very impressive, and there was a built-in microphone for answering calls. The glasses were 3D-printed prototypes, but they were still light and comfortable to wear.

The AR element works thanks to a nine-axis IMU sensor that, in combination with your phone’s GPS, knows where you are and exactly what direction you’re looking in.

Before we headed out into the world, Bose played us some example audio with local information, such as opening times, and demonstrated direction-specific information played in only one ear ("to your left is the train station," for example). Those ideas are somewhat possible with a phone and headphones already; the point here is that you’ll be able to look at something and call up information about it on request.

To test this for real, Bose took us out onto Austin’s bustling Rainey Street, a lively spot filled with quirky bars and eateries. At the top of the street, I looked at a block of apartments and double-tapped the side of the glasses (the gesture programmed to call up info for our demo). Initially I was told there was no information available. But I then turned around and looked at a restaurant called "El Naranjo," double-tapped again, and was told the name, the chef, where they trained, the opening hours, how long people typically stay and the type of cuisine (Mexican). I repeated this all the way down the street, looking at different businesses, and the glasses responded with impressive accuracy.

Of course, this information was just a demo created by Bose; it’s the technology that’s important. All I can say is that it worked pretty well. Only once did I get info on a bar next to the one I was actually looking at, and that was rectified by slightly adjusting my head to center my target in my gaze. Oh, and all the while, I had music playing in my ears, which would dip in volume as information was served up. Bose said that, when using this technology in actual headphones with noise cancellation, developers would be able to focus your attention on alerts by "turning off" ambient noise around you to make sure you hear important details.
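That gaze-to-place lookup can be sketched in a few lines: given the phone’s GPS fix and the heading reported by the IMU, compute the compass bearing to each nearby point of interest and pick the one closest to where the wearer is looking. This is a minimal illustration, not Bose’s actual implementation; the coordinates, the `pick_poi` helper and the 15-degree tolerance are all hypothetical.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def pick_poi(lat, lon, heading_deg, pois, tolerance=15.0):
    """Return the point of interest closest to the wearer's gaze heading,
    or None if nothing lies within `tolerance` degrees ("no information available")."""
    best, best_diff = None, tolerance
    for poi in pois:
        b = bearing_deg(lat, lon, poi["lat"], poi["lon"])
        diff = abs((b - heading_deg + 180) % 360 - 180)  # wrap difference into [0, 180]
        if diff < best_diff:
            best, best_diff = poi, diff
    return best

# Hypothetical coordinates standing in for the Rainey Street demo:
pois = [
    {"name": "El Naranjo", "lat": 30.2591, "lon": -97.7385},
    {"name": "Apartment block", "lat": 30.2600, "lon": -97.7400},
]
```

A slight head adjustment, as in the demo, simply shrinks the angular difference until the intended target wins the comparison.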

And that’s a key point. Bose isn’t trying to invent everything itself (though it does, of course, plan to use this in its own headphones). It wants product makers, app developers and creators to use its technology however they want. Training apps could use it to tell you where popular cycle routes are, or even where other runners are relative to you during a race. Other natural fits for the technology include travel info and reviews, of course, but it could just as easily be applied to games, language learning and beyond.

To encourage companies to adopt Bose AR, the audio firm has put up a $50 million pool to entice developers. So whether you’re working on a dating app, a food delivery service or anything else that could profit from location-specific information, know that Bose appears to be serious about making this mainstream.

Audio and AR aren’t entirely strangers: companies like Harman and Here, and games like good ol’ Pokémon Go, have all dabbled in augmenting the sound in our environment. What Bose seems to be doing differently is making it useful and ubiquitous. Because the system knows what you’re looking at and can be controlled with gestures (touch, voice recognition or a nod, for example), you can interact with apps intuitively without looking at your phone. Whether this is a technology easily replicated by giants like Google (it’d be perfect for Pixel Buds) or Apple remains to be seen.

It’s worth noting that the demo we were given isn’t an indicator of what it might actually be like in real life. The world is big, maps are inaccurate, and sensors can be fooled and confused. But it’s a promising start. If Bose can lure those developers over, and get its platform into a variety of devices, simply looking at something could be the go-to way of learning about the world.

Catch up on the latest news from SXSW 2018 right here.

Tech

via Engadget http://www.engadget.com

March 10, 2018 at 07:06PM

FBI arrests CEO of company selling custom BlackBerrys to gangs

http://ift.tt/2FJNyRO

Custom, extra-secure BlackBerry phones remain a staple of the criminal underworld, and a recent bust just illustrated this point. Motherboard has learned that the FBI arrested Vincent Ramos, the founder of the well-established phone mod seller Phantom Secure, for allegedly aiding criminal organizations that include the Sinaloa drug cartel. The company altered BlackBerry and Android devices to disable common features (including the camera and web browsing) while adding Pretty Good Privacy for encrypted conversations. And it wasn’t just turning a blind eye to the shady backgrounds of its customers, according to investigators — it was fully aware of who was involved.

Reportedly, undercover agents running a sting operation not only heard Ramos say that buying one of his phones was "totally fine," but that the phones were modified "specifically" with drug trafficking in mind. He even singled out Hong Kong and Panama as places he thought would be "uncooperative" with police. A convicted Sinaloa cartel member also stated that the gang had bought Phantom’s phones to conduct its drug trafficking business. The FBI estimated that there were as many as 20,000 of these handsets around the world, half of them in Australia, with others sold in countries like Cuba, Mexico and Venezuela.

Neither the FBI nor Ramos’ attorney has commented on the case.

The arrest highlights the perpetual dilemma with encrypted communication. While encryption is vital to preserving privacy, there are people who will exploit tough-to-crack communications to conduct shady business. And there’s no easy answer. Despite what officials say, there’s no such thing as an encryption backdoor — a vulnerability that’s open to police is also open to hackers. Operations like Phantom Secure may be difficult to completely avoid so long as there’s a serious interest in secure data.

Source: Motherboard

Tech

via Engadget http://www.engadget.com

March 11, 2018 at 01:24AM

Fortnite Will Have Cross-Play Between Xbox One, PC, Mobile Versions

http://ift.tt/2Dj03i0

Epic Games has announced that its Battle Royale game, Fortnite, will be updated to support "cross-play, cross-progression, and cross-purchase" between the Xbox One, PC, Mac, and iOS versions of the game. Furthermore, support for these features will be added to the Android versions "in the next few months."

"Contrary to what may have been implied, Microsoft has long been a leading voice in supporting cross-platform play, connecting players across PC, mobile and all consoles," reads a post on the Epic Games website. "We’ve been working together with them over the last several months to make this possible, and will bring this functionality to Fortnite players on Xbox right along with other platforms.

"With each new platform we support and every update we ship, we strive to bring Fortnite to more people, and make it easier to play together with friends. And, as always, cross-play is opt in."

Fortnite has been available on Xbox One and PC for some time now, but mobile versions of Fortnite were announced on March 10. According to Epic, Fortnite mobile is “the same 100-player game you know from Xbox One, PlayStation 4, PC, and Mac. Same gameplay, same map, same content, same weekly updates.”

The announcement follows previous confirmation that, thanks to a partnership with Sony, Fortnite will support cross-play and cross-progression between PS4, PC, Mac, iOS, and (eventually) Android. Sign-ups for Fortnite’s mobile version will open on March 12, and there will be an invite-only test on iOS and Android "in the next few months."

Those selected to participate will receive an invite by email "shortly thereafter." More invites will be sent out over the coming months, so if you don’t get in right away, you’ll have more opportunities later. If you do get in, you’ll also receive codes to share with friends.

PUBG, which likely inspired Fortnite: Battle Royale, also has a mobile edition, but it’s only available in China and does not support cross-play. PUBG is also unavailable on PS4, which might explain Sony’s apparent eagerness to partner with Epic for Fortnite: Battle Royale.

Fortnite has gained ground on PUBG in a big way recently, which could spell trouble for the latter. Check out the video above to see GameSpot’s Mike Mahardy, Michael Higham, Nick Margherita, and Jake Dekker discuss the current state of PUBG and whether it needs to evolve to stay competitive.

Games

via GameSpot’s PC Reviews http://ift.tt/2mVXxXH

March 10, 2018 at 02:29PM

AI Has a Hallucination Problem That’s Proving Tough to Fix

http://ift.tt/2FFWEis

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”

Human readers of WIRED will easily identify the image below, created by Athalye, as showing two men on skis. When asked for its take Thursday morning, Google’s Cloud Vision service reported being 91 percent certain it saw a dog. Other stunts have shown how to make stop signs invisible, or audio that sounds benign to humans but is transcribed by software as “Okay Google browse to evil dot com.”

LabSix
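Perturbations like the ones behind these stunts are often computed with gradient-based attacks such as the Fast Gradient Sign Method (FGSM), which nudges every input feature a small step in the direction that increases the model’s loss. Here is a toy sketch of the idea on a two-feature logistic classifier rather than a deep network; the weights and step size are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(class 1 | x) under a logistic model with weights w and bias b."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: move each feature by eps in the direction
    that increases the cross-entropy loss for the true label y."""
    p = predict(w, b, x)
    # For logistic regression, d(loss)/d(x_i) = (p - y) * w_i
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]
```

With `w = [2.0, -1.0]`, `b = 0` and `x = [1.0, 0.5]`, the clean input is classified as class 1, while `fgsm(w, b, x, y=1, eps=1.0)` flips the prediction even though no feature moves by more than `eps` — the same "subtle change, wrong answer" failure mode the article describes, in miniature.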

So far, such attacks have been demonstrated only in lab experiments, not observed on streets or in homes. But they still need to be taken seriously now, says Bo Li, a postdoctoral researcher at Berkeley. The vision systems of autonomous vehicles, voice assistants able to spend money, and machine learning systems filtering unsavory content online all need to be trustworthy. “This is potentially very dangerous,” Li says. She contributed to research last year that showed attaching stickers to stop signs could make them invisible to machine learning software.

Li co-authored one of the papers reviewed by Athalye and his collaborators. She and others from Berkeley described a way to analyze adversarial attacks, and showed it could be used to detect them. Li is philosophical about Athalye’s project showing the defense is porous, saying such feedback helps researchers make progress. “Their attack shows that there are some problems we need to take into account,” she says.

Yang Song, the lead author of a Stanford study included in Athalye’s analysis, declined to comment on the work, since it is undergoing review for another major conference. Zachary Lipton, a professor at Carnegie Mellon University and coauthor of another paper that included Amazon researchers, said he hadn’t examined the analysis closely, but finds it plausible that all existing defenses can be evaded. Google declined to comment on the analysis of its own paper. A spokesperson for the company highlighted Google’s commitment to research on adversarial attacks, and said updates are planned to the company’s Cloud Vision service to defend against them.

To build stronger defenses against such attacks, machine learning researchers may need to get meaner. Athalye and Biggio say the field should adopt practices from security research, which they say has a more rigorous tradition of testing new defensive techniques. “People tend to trust each other in machine learning,” says Biggio. “The security mindset is exactly the opposite, you have to be always suspicious that something bad may happen.”

A major report from AI and national security researchers last month made similar recommendations. It advised those working on machine learning to think more about how the technology they are creating could be misused or exploited.

Protecting against adversarial attacks will probably be easier for some AI systems than others. Biggio says that learning systems trained to detect malware should be easier to make more robust, for example, because malware must be functional, limiting how varied it can be. Protecting computer-vision systems is much more difficult, Biggio says, because the natural world is so varied, and images contain so many pixels.

Solving that problem—which could challenge designers of self-driving vehicles—may require a more radical rethink of machine-learning technology. “The fundamental problem I would say is that a deep neural network is very different from a human brain,” says Li.

Humans aren’t immune to sensory trickery. We can be fooled by optical illusions, and a recent paper from Google created weird images that tricked both software and humans: people who glimpsed them for less than a tenth of a second mistook cats for dogs. But when interpreting photos we look at more than patterns of pixels, and consider the relationship between different components of an image, such as the features of a person’s face, says Li.

Google’s most prominent machine-learning researcher, Geoff Hinton, is trying to give software that kind of ability. He thinks that would allow software to learn to recognize something from just a few images, not thousands. Li thinks software with a more human view of the world should also be less susceptible to hallucinations. She and others at Berkeley have begun collaborating with neuroscientists and biologists to try and take hints from nature.

AI Exploitation

Tech

via Wired Top Stories http://ift.tt/2uc60ci

March 9, 2018 at 06:18AM

GE hopes giant grid batteries can save the planet (and its fortunes)

http://ift.tt/2D7sbVv

The engineering firm hopes a storage system can smooth electricity supply on a grid powered by renewables—and let it capitalize on a fast-growing market.

The news: GE announced a new system called Reservoir, a large battery attached to the grid to store spare electricity from renewables. When the sun doesn’t shine and the wind doesn’t blow, it dumps electrons into the grid to meet demand. It uses smart discharging to extend its working life, and it’s modular, so it can be used for small or large gigs.
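The charge-and-discharge behavior described above amounts to a simple dispatch loop: bank surplus renewable generation when it’s available, discharge to cover deficits when it isn’t. A minimal sketch with made-up hourly numbers, not a model of Reservoir’s actual life-extending discharge logic:

```python
def dispatch(generation, demand, capacity, charge=0.0):
    """Greedy hourly dispatch: store renewable surplus, discharge on deficit.
    Returns (final_charge, unmet_demand), all in the same energy units."""
    unmet = 0.0
    for gen, load in zip(generation, demand):
        surplus = gen - load
        if surplus >= 0:
            charge = min(capacity, charge + surplus)  # bank spare electricity
        else:
            supplied = min(charge, -surplus)          # dump electrons into the grid
            charge -= supplied
            unmet += -surplus - supplied              # shortfall the grid must cover
    return charge, unmet
```

Real systems layer rate limits, round-trip losses and degradation-aware scheduling on top of this, which is exactly where a vendor’s "discharging smarts" would live.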

Electric potential: Grid storage is growing, and fast. The Energy Storage Association predicts the amount of US storage in use will double in 2018, and The Wall Street Journal notes the market is predicted to be worth tens of billions of dollars within 10 years. GE, which is struggling financially, will hope Reservoir grabs it a slice of the pie.

But: There’s competition. Tesla has its own grid batteries that are deployed in South Australia as part of a huge 129-megawatt system. A Siemens venture called Fluence is building a system in California that will be three times as big. GE, meanwhile, has just a 20-megawatt commitment for Reservoir so far.

Tech

via Technology Review Feed – Tech Review Top Stories http://ift.tt/1XdUwhl

March 7, 2018 at 09:42AM

To spot fire damage from space, point this AI at satellite imagery

http://ift.tt/2IbHS1v

A new deep-learning algorithm can identify the devastation of fires by studying aerial photographs.

How it works: From satellite images taken before and after the California wildfires of 2017, researchers created a dataset of buildings that were either damaged or left unscathed.

The results: They tweaked a pre-trained ImageNet neural network and got it to spot damaged buildings with an accuracy of up to 85 percent.

Why it matters: After a disaster, pinpointing the hardest-hit areas could save lives and help with relief efforts. The researchers also released the dataset to the public, which could benefit other work that relies on satellite images, like conservation and development aid.

Image credit:

  • European Space Agency | Flickr

Tech

via Technology Review Feed – Tech Review Top Stories http://ift.tt/1XdUwhl

March 8, 2018 at 08:40AM

MIT researchers say nuclear fusion will feed the grid “in 15 years”

http://ift.tt/2G7G09f

Image credit:

  • Bob Mumgaard/Plasma Science and Fusion Center

Tech

via Technology Review Feed – Tech Review Top Stories http://ift.tt/1XdUwhl

March 9, 2018 at 09:16AM