Google’s New Interpreter Mode Translates Your Conversation

https://www.wired.com/story/google-assistant-interpreter-mode


Over the past year Google has been making its virtual assistant, the eponymous Google Assistant, more capable of handling what might usually be awkward or onerous conversations. Need to make a dinner reservation by picking up the phone and speaking to a real live human being? Google Assistant can do that for you, as creepy as it might seem. Need to screen a call that you suspect is from a spam caller, for the twenty-seventh time that day? The Assistant will take care of that too.

Now Google is trying to outsource another human-to-human interaction: the kind that occurs between a person who works in hospitality and a guest who speaks a different language. A new feature in Google Assistant, called Interpreter Mode, turns the virtual assistant into a real-time language translator between two people who are trying to chat in the same physical space. It starts rolling out today on Google-powered smart displays and smart speakers.

The company showed off the new feature to members of the press in a late-night demo in Las Vegas, hours before the CES show doors officially opened. A concierge at Caesars Palace, one of the early beta testers of the feature, was approached by a German “tourist” (really, a German-speaking Google employee) and asked about show tickets. The concierge turned to a Google Home Hub and, using voice, prompted the Assistant to go into German interpreter mode. The concierge and guest had a back-and-forth conversation, with the Assistant translating, and tickets were procured.

During a demo with WIRED, the Assistant mistranslated at one point—though the translated text also appeared on the seven-inch smart display, so both men were able to use context clues to figure out what the other was asking. (Humans! So clever.) The conversation also didn’t feel completely frictionless, since the Assistant takes a second or two to translate in between each person’s remarks. But the brief interaction we saw still pointed toward a future in which Babel fish-like translators exist at any kind of service desk where language could potentially become a barrier.

Google already offers near-instantaneous translations on the web and on mobile with Google Translate. And when it released its wire-free Pixel Buds headphones a couple of years ago, it introduced the concept of language translation in near real time, with the tap of a button. That same translation feature later came to all Google Assistant-optimized headphones. But that doesn’t always work so well, primarily because it’s an isolating experience: only the person wearing the headphones hears the translation. And Google Translate requires you to open an app first.

The Interpreter feature is launching today as a small pilot at a few hotels: one in New York, another in San Francisco, and Caesars Palace in Vegas. It will only be available on the Google Home Hub, Google Home speakers, and third-party Google Assistant displays.

Google also said that the Assistant would now work within Google Maps, so you can use your voice to reply to texts or send your ETA while you’re driving. Amazon’s Alexa, meanwhile, has appeared in literally dozens of new products so far at the show, ranging from lighting kits to “smart” beds to a voice-controlled toilet.

While CES is primarily a place to gape at new hardware, tech giants Google, Amazon, and even Apple (which doesn’t exhibit at CES) have been sucking up a fair amount of air in the room with their software announcements—further proof that the platforms that enable these connected products are just as important as the gadgets themselves.


via Wired Top Stories http://bit.ly/2uc60ci

January 8, 2019 at 01:06PM

The Clever Clumsiness of a Robot Teaching Itself to Walk

https://www.wired.com/story/the-clever-clumsiness-of-a-robot-teaching-itself-to-walk


It’s easy to watch a baby finally learn to walk after hours upon hours of trial and error and think, OK, good work, but do you want a medal or something? Well, maybe only a childless person like me would think that, so credit where credit is due: It’s supremely difficult for animals like ourselves to manage something as everyday as putting one foot in front of the other.

It’s even more difficult to get robots to do the same. It used to be that to make a machine walk, you either had to hard-code every command or build the robot a simulated world in which to learn. But lately, researchers have been experimenting with a novel way to go about things: Make robots teach themselves how to walk through trial and error, like babies, navigating the real world.

Researchers at UC Berkeley and Google Brain just took a big step (sorry) toward that future with a quadrupedal robot that taught itself to walk in a mere two hours. It was a bit ungainly at first, but it essentially invented walking on its own. Not only that, the researchers could then introduce the machine to new environments, like inclines and obstacles, and it adapted with ease. The results are as awkward as they are magical, but they could lead to machines that explore the world without us having to coddle them.

The secret ingredient here is a technique called maximum-entropy reinforcement learning. Entropy in this context means randomness—lots of it. The researchers give the robot a digital reward for doing something random that ends up working well. So in this case, the robot is rewarded for achieving forward velocity, meaning it’s trying new things and inching forward bit by bit. (A motion-capture system in the lab calculated the robot’s progress.)
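For readers who want the gist in code, here is a rough sketch of what a maximum-entropy objective looks like. It is an illustration of the general technique, not the researchers’ actual code, and the reward terms, weights, and function names are invented for the example: the robot is scored on its task reward (forward velocity, with a penalty for falling, as discussed below) plus an entropy bonus that pays it for keeping its actions varied.

```python
import numpy as np

def task_reward(forward_velocity, fell_over, fall_penalty=10.0):
    """Illustrative task reward: forward progress, minus a penalty for
    falling over. The penalty weight is made up for this sketch."""
    return forward_velocity - (fall_penalty if fell_over else 0.0)

def max_entropy_objective(rewards, action_log_probs, alpha=0.2):
    """Maximum-entropy RL objective: expected task reward plus
    alpha * entropy, estimated per sample as -alpha * log pi(a|s).
    A larger alpha pushes the robot to keep trying varied actions."""
    rewards = np.asarray(rewards, dtype=float)
    entropy_bonus = -alpha * np.asarray(action_log_probs, dtype=float)
    return float(np.mean(rewards + entropy_bonus))

# Example: two timesteps with modest forward progress and fairly random actions.
print(max_entropy_objective(rewards=[0.3, 0.5], action_log_probs=[-1.2, -0.8]))
```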

Problem, though: “The best way to maximize this reward initially is just to dive forward,” says UC Berkeley computer scientist Tuomas Haarnoja, lead author on a new preprint paper detailing the system. “So we need to penalize for that kind of behavior, because it would make the robot immediately fall.”

Another problem: When researchers want a robot to learn, they typically run this reinforcement learning process in simulation first. The digital environment approximates the physics and materials of the real world, allowing a robot’s software to rapidly conduct numerous trials using powerful computers.

Researchers use “hyperparameters” to get the algorithm to work with a particular kind of simulated environment. “We just need to try different variations of these hyperparameters and then pick the one that actually works,” says Haarnoja. “But now that we are dealing with the real-world system, we cannot afford testing too many different settings for these hyperparameters.” The advance here is that Haarnoja and his colleagues have developed a way to automatically tune hyperparameters. “That makes experimenting in the real world much more feasible.”
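To make that concrete: in maximum-entropy methods like soft actor-critic, the most sensitive hyperparameter is the entropy “temperature” that trades reward against randomness. One common way to tune it automatically, sketched below as a generic illustration rather than a transcription of the paper’s code, is to adjust it by gradient descent so that the policy’s entropy tracks a chosen target.

```python
import torch

# Hypothetical sketch of automatic temperature tuning (PyTorch). The
# temperature alpha is learned so the policy's entropy stays near a target,
# instead of being hand-picked per robot or per terrain.
log_alpha = torch.zeros(1, requires_grad=True)
alpha_optimizer = torch.optim.Adam([log_alpha], lr=3e-4)
target_entropy = -8.0  # often set to -(action dimension); an assumption here

def update_temperature(action_log_probs):
    """One gradient step on alpha: if actions are less random than the target
    entropy, alpha grows (more exploration); if more random, it shrinks."""
    alpha_loss = -(log_alpha * (action_log_probs + target_entropy).detach()).mean()
    alpha_optimizer.zero_grad()
    alpha_loss.backward()
    alpha_optimizer.step()
    return log_alpha.exp().item()

# Example with a batch of (made-up) log-probabilities from the policy.
print(update_temperature(torch.tensor([-6.5, -7.0, -9.2])))
```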


Learning in the real world instead of in a software simulation is much slower—every time it fell, Haarnoja had to physically pick up the four-legged robot and reset it, perhaps 300 times over the course of the two-hour training session. Annoying, yes, but not as annoying as trying to take what you’ve learned in a simulation—which is an imperfect approximation of the real world—and get it to work nicely on a physical robot.

Also, when researchers train the robot in simulation first, they’re explicit about what that digital environment looks like. The physical world, on the other hand, is much less predictable. So by training the robot in the real, if controlled, setting of a lab, Haarnoja and his colleagues made the machine more robust to variations in the environment.

Plus, this robot had to deal with small perturbations during its training. “We have a cable connected to the batteries, and sometimes the cable goes under the legs, and sometimes when I manually reset the robot I don’t do it properly,” says Haarnoja. “So it learns from those perturbations as well.” Even though training in simulation comes with great speed, it can’t match the randomness of the real world. And if we want our robots to adapt to our homes and streets on their own, they’ll have to be flexible.

“I like this work because it convincingly shows that deep reinforcement learning approaches can be employed on a real robot,” says OpenAI engineer Matthias Plappert, who has designed a robotic hand to teach itself to manipulate objects. “It’s also impressive that their method generalizes so well to previously unseen terrains, even though it was only trained on flat terrain.”

“That being said,” he adds, “learning on the physical robot still comes with many challenges. For more complex problems, two hours of training will likely not be enough.” Another hurdle is that training robots in the real world means they can hurt themselves, so researchers have to proceed cautiously.

Still, training in the real world is a powerful way to get robots to adapt to uncertainty. This is a radical departure from something like a factory robot, a brute that follows a set of commands and works in isolation so as not to fling its human coworkers across the room. Out in the diverse and unpredictable environments beyond the factory, though, the machines will have to find their own way.

“If you want to send a robot to Mars, what will it face?” asks University of Oslo roboticist Tønnes Nygaard, whose own quadrupedal robot learned to walk by “evolving.” “We know some of it, but you can’t really know everything. And even if you did, you don’t want to sit down and hard-code every way to act in response to each.”

So, baby steps … into space!



via Wired Top Stories http://bit.ly/2uc60ci

January 8, 2019 at 02:42PM

Hyundai Elevate EV is a walking electric vehicle concept

https://www.autoblog.com/2019/01/08/hyundai-elevate-ev-walking-car-official/


Last week, Hyundai released teaser material of its walking Elevate concept, designed to negotiate “treacherous” areas that a conventional wheeled vehicle would struggle to cross. At CES 2019, the UMV – for Ultimate Mobility Vehicle – made its official debut.

Hyundai calls the Elevate EV the world’s first vehicle with movable legs. It still has wheels, too, but they are attached to the ends of what can only really be described as legs, so that the vehicle can find footing on collapsed and destroyed ground after an earthquake, a hurricane or a flood. The leg architecture has “five degrees of freedom,” according to Hyundai, as well as hub propulsion motors and actuators.

Crucially, it can also fold up to cruise at highway speeds, after scaling a five-foot wall or reaching over a five-foot gap, or freeing itself from snow. “Imagine a car stranded in a snow ditch just 10 feet off the highway being able to walk or climb over the treacherous terrain, back to the road, potentially saving its injured passengers – this is the future of vehicular mobility,” said designer David Byron from Hyundai’s design partner, Sundberg-Ferar.

“When a tsunami or earthquake hits, current rescue vehicles can only deliver first responders to the edge of the debris field. They have to go the rest of the way by foot. Elevate can drive to the scene and climb right over flood debris or crumbled concrete,” said John Suh, Hyundai vice president and head of Hyundai CRADLE, or Center for Robotic-Augmented Design in Living Experiences.

The symmetrically designed cabin can be swapped out depending on the application: in addition to a rescue vehicle, the Elevate can also be kitted out to function as a taxi, capable of reaching the front door of a brownstone with stairs, as shown in the slideshow. “This technology goes well beyond emergency situations – people living with disabilities worldwide who don’t have access to an ADA ramp could hail an autonomous Hyundai Elevate that could walk up to their front door, level itself, and allow their wheelchair to roll right in – the possibilities are limitless,” continued Suh.


via Autoblog http://bit.ly/1afPJWx

January 8, 2019 at 01:02PM

Hyundai’s Elevate Concept Uses Legs and Wheels to Go Anywhere

https://www.wired.com/story/hyundai-walking-car-elevate-concept-ces


Like the sushirrito and the Rollie Eggmaster, the wheel is the certified product of the human brain, unmatched in ingenuity by eons of evolution.

Now, after 5,000 years of transportation advances built on the ability to roll from A to B, Hyundai has decided it’s time to move on. Today at CES in Las Vegas, the automaker fleshed out the details on an insect-like concept car that isn’t limited by its wheels. This thing also has legs, which allow it to go where there are no roads, by trekking or climbing over difficult terrain, fording rivers, clambering over crumbled concrete, or even climbing stairs.

In this city without restraint, CES is a safe space to showcase outrageous concepts unlikely to make it to production. But Hyundai has thought through a business case for the machine it’s calling Elevate, or the Ultimate Mobility Vehicle (how about the ummmm… v?). It pitches the blend of car, robot, and Mars rover as the ideal machine for first responders. While a car or truck would get stumped at the edge of a debris field of broken buildings, for example, the Elevate can just clamber on over, to the heart of the problem, instead of leaving fire fighters or whoever else to trek in on foot. Hyundai says with a modular platform, the body atop the walking wheels could be swapped out for different applications. It also shows a taxi concept that can climb entrance steps to a building, to allow wheelchair users to roll in and out easily.

The platform itself puts the four wheels on the ends of robot legs with five degrees of freedom (meaning they can move in just about any direction). Propulsion comes from electric motors mounted inside each wheel hub, like on the Mars Curiosity Rover.

When the legs are folded under the vehicle, the Elevate can travel at highway speeds, almost resembling a normal car. But it looks cleverest, and scariest, when it rises up to full height, using the wheels as feet. It can replicate the walking patterns of both mammals and reptiles, so it can stride across most terrains confidently, even snow and ice, with the wheels turned sideways as non-slip pads. “Imagine a car stranded in a snow ditch 10 feet off the highway being able to walk or climb back to the road,” says David Byron, design manager at Sundberg-Ferar, the Detroit-based design studio that worked with Hyundai to develop the concept.


Concept is the key word. Hyundai is vague on whether this thing would be autonomous or require a human at the controls, but it’s worth noting just how hard moving a robot through the world really is. As Boston Dynamics CEO Marc Raibert said at last year’s WIRED25 conference, the robot shop’s viral videos—starring dancing quadrupeds and parkour-ing humanoids—showcase the rare successful attempts, not the many screw-ups along the way. And although the Curiosity Rover has lasted nearly three times its designed lifespan, on a hostile alien planet, it’s not a great model for a commuter machine: It has covered just 12 miles in six years.

If the promised revolution in mobility due to autonomous, electric, connected cars (and scooters) actually arrives, it will bring changes in the way we use vehicles in cities. So although this concept is outlandish, it doesn’t hurt to start thinking about new ways to build vehicles, and maybe even reinvent the wheel.

It’s a better idea, anyway, than trying to cook eggs in a tube.


via Wired Top Stories http://bit.ly/2uc60ci

January 7, 2019 at 05:06PM

One day, 199 tornadoes

https://www.popsci.com/2011-tornado-outbreak?dom=rss-default&src=syn


In 2011, an outbreak—not of disease, but of tornadoes—slammed through a large swath of the Southern and Eastern U.S. The region saw 64 individual twisters on April 25, and another 50 the next day, but the worst still lay ahead. A head-spinning 199 tornadoes touched down on April 27, killing 316 and injuring almost 3,000. To compare this cluster to others, meteorologists use the Destruction Potential Index. DPI is the sum of each individual storm’s destructive capacity, which weather-folk quantify by multiplying each tempest’s wind power by the area it hits. April 27’s outbreak reached 21,980. That’s three times the next-most devastating event in 2010.
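As a back-of-the-envelope illustration of how such an index aggregates a day—following the article’s description of the calculation, with invented numbers rather than real storm data:

```python
# Toy DPI-style calculation: each tornado contributes its wind power
# multiplied by the area it hits, and the day's index is the sum.
# These tracks are hypothetical, not the April 27, 2011 storms.
tornadoes = [
    {"wind_power": 4, "area_sq_miles": 35.0},
    {"wind_power": 2, "area_sq_miles": 12.5},
    {"wind_power": 1, "area_sq_miles": 3.0},
]

dpi = sum(t["wind_power"] * t["area_sq_miles"] for t in tornadoes)
print(f"Outbreak DPI: {dpi}")  # 4*35 + 2*12.5 + 1*3 = 168.0
```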

Disasters like these are getting worse. From 1954 to 1963, the average outbreak—a sequence of six or more F1 or greater tornadoes that begin within six hours of each other—contained 11.4 storms. Half a century later, that number had risen to 16.1. Researchers are working to determine which weather and climate factors drive the clobbering hordes, but they can’t be certain quite yet. What we do know is that it’s increasingly likely we’ll have a day like the one mapped out above again.




This article was originally published in the Winter 2018 Danger issue of Popular Science.

via Popular Science – New Technology, Science News, The Future Now http://bit.ly/2k2uJQn

January 8, 2019 at 08:52AM

Olay’s electromagnetic face wand turns skincare into a mobile game

https://www.engadget.com/2019/01/08/olay-facenavi-smart-wand-beauty-cream-electromagnetic/




Olay’s FaceNavi Smart Wand isn’t a magic solution for perfect skin, but it is pretty fun. The Smart Wand is a handheld device that uses electromagnetic pulses to transform a singular Olay face cream into a multi-use product, able to treat puffiness, wrinkles, sagging, discoloration, uneven skin and other conditions in one fell swoop.


In an ideal situation, I’d use the Smart Wand at home as part of my bedtime skincare routine, after washing away my makeup. However, that wasn’t about to happen in the middle of a crowded convention hall at CES, so I tested the wand without the washing or the cream.

The process starts on the Olay Skin Advisor mobile app — enter your name and birth year, and snap a selfie. The app asks if you have any areas you’re particularly concerned about (dark circles, sagging skin, wrinkles, etc.), and then tells you how old your skin looks. I’m 30, and I’m fairly pleased to report the app read my skin as 28 (with makeup on, that is).

The app itself uses machine learning to read each face and pop out an accurate reading of age and problem areas. An Olay representative assured me the system doesn’t lowball or highball estimated ages to give users a shot of endorphins or send them on a shame-induced spending spree. It’s all in the algorithm.


The Skin Advisor app then opens up in selfie mode, but this time with AR stickers. Spots of color appear on your face, indicating specific issues. Here’s where the cream and the wand come in: Slather your face in the associated Olay cream, pair the wand with your phone, and get to work. Pressing the metal slope of the wand to your face makes it vibrate slightly; as you swipe it over different color zones, the electromagnetic pulses change, making the cream react with your skin in different ways. On-screen, the color blocks fade as the wand hits them, little stars shooting out to indicate a job well done.

Olay’s goal with the Smart Wand is to replace a half dozen creams, balms, scrubs and sprays that some people use at the end of the day. I’m not sure if it actually accomplishes this goal, but using the wand is satisfying, and its subtle vibrations are soothing. In the worst case scenario, the FaceNavi Smart Wand is fairly calming and indulgent, if not downright fun, to use.


via Engadget http://www.engadget.com

January 8, 2019 at 06:54AM

Toyota is developing fighter-jet-inspired safety tech — and plans to share it

https://www.autoblog.com/2019/01/08/toyota-guardian-chauffeur-safety-tech-ces/


Toyota Research Institute had a breakthrough last year in its pursuit to make driving safer. It was so profound that Toyota wants to open it up to other automakers.

The inspiration was modern-day fighter jets, which use a low-level flight control system to translate the intent of the pilot and keep the aircraft stable and tucked neatly inside a specific safety envelope. TRI calls it blended envelope control, an approach that lets its “Guardian” driver assist system combine and coordinate the skills of the human driver and the vehicle they’re driving.
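Toyota hasn’t published the math behind Guardian, but the fighter-jet analogy suggests something like the following sketch: the driver’s command passes through untouched while the car is well inside its safety envelope, and a corrective command is blended in progressively as the margin to the envelope boundary shrinks. The function, its parameters, and the linear blending rule are assumptions for illustration, not Toyota’s published design.

```python
def blended_steering(driver_cmd, safe_cmd, envelope_margin, full_authority_margin=0.2):
    """Hypothetical blended-envelope control: weight the system's corrective
    command more heavily as the remaining safety margin shrinks toward zero."""
    # weight ramps from 0 (large margin, driver alone) to 1 (margin exhausted)
    weight = min(max(1.0 - envelope_margin / full_authority_margin, 0.0), 1.0)
    return (1.0 - weight) * driver_cmd + weight * safe_cmd

# Example: margin nearly gone, so the corrective command dominates.
print(blended_steering(driver_cmd=0.0, safe_cmd=0.3, envelope_margin=0.05))
```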

TRI CEO Gill Pratt revealed Monday during CES 2019 the research arm’s progress, an explanation of its approach, and most important, its intent to share its Guardian driver assist with other automakers. TRI is calling it “Guardian for all.”

To be clear, Toyota Guardian, or the “Guardian for all” system, isn’t in production cars, nor will it be for some time. Pratt said it would be shared “in the 2020s,” but he isn’t even entirely sure how it would be delivered to the rest of the industry. In a roundtable discussion with reporters, Pratt said he wasn’t sure if Toyota would license the software, or a combination of hardware and software, to automakers. He only noted that Toyota has the desire and intent to open it up to the rest of the automotive industry.

TRI, and Toyota as a result, have taken a dual approach to autonomy, called “Guardian” and the more fully autonomous “Chauffeur.” The automaker intends to eventually develop and deploy fully autonomous cars to serve an aging population, the disabled, or whoever might need a robotaxi. But as Pratt noted Monday, there is still much to be done before these types of vehicles will be on the road in any meaningful way.

In the meantime, Pratt says “we have a moral obligation to apply automated vehicle technology to save as many lives as possible as soon as possible.”

That’s where the other part of that dual approach called Guardian comes in. Guardian is technology that operates in the background and steps in when needed. The driver is always driving, but Guardian is watching, sensing and anticipating problems.

Toyota Guardian is designed to amplify human control of the vehicle, not replace it, the company said. TRI showed a video during its CES presentation of a three-car accident that included one of its self-driving research vehicles being driven in manual mode. The vehicle’s sensors were all on and capturing data, however.

TRI contends that this blended envelope approach of Guardian would have anticipated or identified the impending incident and employed a corrective response in coordination with driver input. In this specific case, TRI’s modeling and testing determined that the system would have prompted the vehicle to accelerate out of the way to avoid the accident altogether.

By Kirsten Korosec for TechCrunch


via Autoblog http://bit.ly/1afPJWx

January 8, 2019 at 09:07AM