The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers

By now our future is clear: We are to be cared for, entertained, and monetized by artificial intelligence. Existing industries like healthcare and manufacturing will become much more efficient; new ones, like augmented reality and robot taxis, will become possible.

But as the tech industry busies itself with building out this brave new artificially intelligent, and profit-boosting, world, it's hitting a speed bump: Computers aren't powerful or efficient enough at the specific kind of math needed, the dense matrix arithmetic at the heart of deep learning. While most attention to the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there's a less obvious scramble going on to build a new breed of computer chip needed to power our AI future.

One data point that shows how great that need is: Software companies Google and Microsoft have become entangled in the messy task of creating their own chips. They're being raced by a new crop of startups peddling their own AI-centric silicon, and probably Apple, too. As well as transforming our lives with intelligent machines, the contest could shake up the established chip industry.

Microsoft revealed its AI chip-making project late on Sunday. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft's research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Microsoft wants you to be able to smoothly reach out and interact with the virtual objects overlaid on your vision, and says nothing on the market could run machine learning software efficiently enough for a battery-powered device that sits on your head.

Microsoft's project comes in the wake of Google's own deep learning chip, announced in 2016. The TPU, for tensor processing unit, was created to make deep learning more efficient inside the company's cloud. Google told WIRED earlier this year that the chip saved it from building 15 new datacenters as demand for speech recognition soared. In May, Google announced a more powerful version of the TPU and said it would rent out access to the chips to customers of its cloud computing business.
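For a sense of the workload these chips target, here is a minimal NumPy sketch of the operation that dominates deep learning: multiply-accumulate over large matrices. It is an illustration only, not Google's design, and the layer sizes are hypothetical.

```python
import numpy as np

# Toy illustration of the math deep learning chips accelerate: a neural-net
# layer is essentially one large matrix multiply plus a nonlinearity.
# Sizes below are hypothetical, chosen only to show the scale of the work.

def dense_layer(x, weights, bias):
    """One fully connected layer: relu(x @ W + b)."""
    return np.maximum(x @ weights + bias, 0.0)

# A batch of 128 inputs through a 1024-to-1024 layer costs
# 128 * 1024 * 1024, roughly 134 million multiply-adds, per layer, per step.
x = np.random.randn(128, 1024).astype(np.float32)
w = np.random.randn(1024, 1024).astype(np.float32)
b = np.zeros(1024, dtype=np.float32)

print(dense_layer(x, w, b).shape)  # (128, 1024)
```

A chip specialized for this one pattern can skip much of the machinery a general-purpose processor carries, which is the efficiency argument behind the TPU and its rivals.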

News that Microsoft has built a deep learning processor for the HoloLens suggests Redmond wouldn't need to start from scratch to prep its own server chip to compete with Google's TPUs. Microsoft has spent several years making its cloud more efficient at deep learning using so-called field-programmable gate arrays, a kind of chip that can be reconfigured after it's manufactured to make a particular piece of software or algorithm run faster. It plans to offer those to cloud customers next year. But when asked recently if Microsoft would make a custom server chip like Google's, Doug Burger, the technical mastermind behind Microsoft's rollout of FPGAs, said he wouldn't rule it out. Pieces of the design and supply chain used for the HoloLens deep learning chip could be repurposed for a server chip.

Google and Microsoft’s projects are the most visible part of a new AI-chip industry springing up to challenge established semiconductor giants such as Intel and Nvidia. Apple has for several years designed the processors for its mobile devices, and is widely believed to be working on creating a new chip to make future iPhones better at artificial intelligence. Numerous startups are working on deep learning chips of their own, including Groq, founded by ex-Google engineers who worked on the TPU. “Companies like Intel and Nvidia have been trying to keep on selling what they were already selling,” says Linley Gwennap, founder of semiconductor industry analysts the Linley Group. “We’ve seen these leading cloud companies and startups moving more quickly because they can see the need in their own data centers and the wider market.”

Graphics chip maker Nvidia has seen sales and profits soar in recent years because its chips are better suited than conventional processors to training deep learning software. But the company has mostly chosen to modify and extend its existing chip designs rather than making something tightly specialized to deep learning from scratch, Gwennap says.

You can expect the established chip companies to fight back. Intel, the world's largest chipmaker, bought an AI chip startup called Nervana last summer and is working on a dedicated deep learning chip built on the startup's technology. Intel also has the most sophisticated and expensive chip manufacturing operation on the planet. But representatives of the large and small upstarts taking on the chip industry say they have critical advantages. One is that they don't have to make something that fits within an existing ecosystem of chips and software originally developed for something else.

“We’ve got a simpler task because we’re trying to do one thing and can build things from the ground up,” says Nigel Toon, CEO and co-founder of Graphcore, a UK startup working on a chip for artificial intelligence. Last week the company disclosed $30 million of new funding, including funds from Demis Hassabis, the CEO of Google’s DeepMind AI research division. Also in on the funding round: several leaders from OpenAI, the research institute co-founded by Elon Musk.

At the other end of the scale, the big cloud companies can exploit their considerable experience in running and inventing machine learning services and techniques. “One of the things we really benefited from at Google was we could work directly with the application developers in, say, speech recognition and Street View,” says Norm Jouppi, the engineer who leads Google’s TPU project. “When you’re focused on a few customers and working hand in hand with them it really shortens the turnaround time to build something.”

Google and Microsoft built themselves up by inventing software that did new things with chips designed and built by others. As more is staked on AI, the silicon substrate of the tech industry is changing—and so is where it comes from.

from Wired Top Stories http://ift.tt/2gZsP1n
via IFTTT

MIT-developed plugin makes CAD changes ‘instant’

A new computer-aided design (CAD) plug-in could drastically improve products you use on a daily basis. Researchers from MIT and Columbia University say the tool will allow engineers to develop prototypes in real time. They claim its ease of use will have an immediate impact on objects with complex designs, such as cars, planes, and robots.

Many of the products you use are developed using computer-aided design systems. However, the laborious nature of those systems can also make them a hindrance during the design process. They can prove particularly time-consuming for engineers developing intricate products (like cars) that undergo a range of modifications.

According to its creators, the new InstantCAD plug-in can cut days (and even weeks) from the development period. This is mainly down to its use of a custom algorithm that provides instant feedback on how to improve an item’s design. For example, if you were building a drone, it could tell you how to make it as lightweight as possible while still being able to carry your desired weight.
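To make the drone example concrete, here is a toy sketch of that kind of constrained design search: minimize weight subject to a thrust requirement. It is not InstantCAD's actual algorithm, and the drone model, coefficients, and payload figure are invented for illustration.

```python
from scipy.optimize import minimize

PAYLOAD = 0.5  # kg of payload the drone must carry (assumed figure)

def weight(p):
    """Toy mass model: frame plus four arms plus four motors, in kg."""
    arm_len, motor_size = p
    return 0.2 + 4 * 0.05 * arm_len + 4 * 0.1 * motor_size

def thrust_margin(p):
    """Lift minus total mass; must be >= 0 for the drone to fly (toy model)."""
    arm_len, motor_size = p
    thrust = 4 * 0.8 * motor_size  # kg of lift from four motors
    return thrust - (weight(p) + PAYLOAD)

# Search over arm length (m) and motor size (arbitrary units) for the
# lightest design that still meets the thrust constraint.
result = minimize(
    weight,
    x0=[0.3, 1.0],
    bounds=[(0.1, 0.5), (0.2, 2.0)],
    constraints=[{"type": "ineq", "fun": thrust_margin}],
)
print(result.x, round(weight(result.x), 3))
```

The point of InstantCAD, as described, is that this loop runs fast enough to feel interactive, so a designer sees the trade-offs update as the parameters move.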

"From more ergonomic desks to higher-performance cars, this is really about creating better products in less time," said lead researcher Adriana Schulz. "We think this could be a real game changer for automakers and other companies that want to be able to test and improve complex designs in a matter of seconds."

InstantCAD is detailed in a paper that will be presented at this month's SIGGRAPH computer graphics conference in Los Angeles. Its authors claim the optimization of tricky CAD systems is critical in a world where 3D printing and robotics are becoming more accessible.

Source: MIT News

from Engadget http://ift.tt/2tGcFzU
via IFTTT

Stanford built a ‘4D’ camera for cars, robots and VR

A team of Stanford scientists has created what could be the best "eye" yet for autonomous vehicles and delivery drones. It's a 4D camera that can capture a nearly 140-degree field of view, allowing it to gather more information than conventional cameras in a single image. The researchers call their design the "first-ever single-lens, wide field of view, light field camera." It relies on light field photography for the additional information that makes its results four-dimensional: the camera can observe and record the direction and distance of the light hitting the lens and bundle that with the resulting 2D image.

As a result, the team’s robot eye has the ability to refocus images after they’re taken, which is light field photography’s most popular feature. Remember Lytro? That small device can adjust the focus of an image, because it also uses light field imaging tech. The researchers compare the difference between looking through a normal camera and the one they designed to the difference between looking through a peephole and a window:

"A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering’. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."

[Photo: Assistant professor Gordon Wetzstein and postdoctoral scholar Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields.]

In the future, various types of robots and machines could take advantage of the camera's capabilities. A rugged robot could use the light field data to refocus images as it makes its way through the rain, and the camera could improve close-up imaging for search-and-rescue robots or self-driving cars navigating tight spaces. It could also capture images for augmented and virtual reality, since all the information it packs into one picture could lead to more seamless renderings.

At the moment, the device is still in its proof-of-concept stage and is a bit too big for actual use. The researchers are aiming to develop a smaller and lighter version that they can test on a robot, but for now, you can see some of its sample snapshots in the video below:

Source: Stanford University, Stanford Computational Imaging Lab

from Engadget http://ift.tt/2vVhUs5
via IFTTT

Samsung Releases Arthritis Drug in First Step Into U.S. Pharmaceuticals

Americans suffering from arthritis can now find relief from an unexpected new player in the pharmaceuticals market.

South Korea's Samsung conglomerate, best known for its smartphones and televisions, will on Monday make its lower-priced copy of Johnson & Johnson's blockbuster rheumatoid-arthritis drug Remicade available in the U.S., the second such alternative on the market. The Samsung-developed drug will be marketed to…

from WSJ.com: What’s News US http://ift.tt/2v1P9gP
via IFTTT

Cooool: Video Of A Rocket Launch From Space


This is a video of a Soyuz rocket launch from Kazakhstan's Baikonur Cosmodrome on July 14, as captured by a satellite at the almost futuristic frame rate of one frame per second. The 11-second video actually condenses two and a half minutes of real-time footage. You know, sometimes I wish I was a rocket ship blasting off for the stars. Sometimes I wish I was an eagle. Other times I wish I was a merman. Most of the time though I just wish I was back in bed asleep.

Keep going for the video.


Thanks to Dougie, who doesn't believe in outer space, and now I'm not sure I do either.


from Geekologie – Gadgets, Gizmos, and Awesome http://ift.tt/2eIIscF
via IFTTT

NASA moves ahead with plans to build a quiet supersonic jet

NASA's dreams of a quiet supersonic jet are one step closer to fruition. The agency tells Bloomberg that it'll start taking bids to build a larger (94-foot) real-world demo version of the aircraft design it wind-tunnel-tested in June, and we now have a clearer sense of how well it'll perform in real life. The design is expected to reduce noise to no more than 65 dBA, which is exceptionally quiet for an aircraft; co-designer Lockheed Martin likens it to the inside of a luxury car. That would make it safe to fly just about anywhere. The Concorde, by contrast, was an assault on your ears at 90 dBA and was limited to overseas flights.
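Some rough arithmetic on that gap, using only the two figures quoted above: decibels are logarithmic, so a 25 dBA difference is far larger than the numbers suggest.

```python
# Each 10 dB step is roughly a tenfold jump in acoustic power, so
# Concorde's 90 dBA boom carried about 10^((90-65)/10) times the
# power of the 65 dBA target. (Back-of-envelope, not NASA's analysis.)
gap_db = 90 - 65
power_ratio = 10 ** (gap_db / 10)
print(round(power_ratio))  # ~316
```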

The larger prototype will fly as high as 55,000 feet and reach its supersonic speeds on a single example of the engine the F/A-18 Hornet carries in pairs. In practice, it could cut the flight time from New York to Los Angeles in half, to 3 hours.

NASA has outlined plans to fly the finished aircraft by 2020 (including over populated areas) and already has funding for the first year of its five-year roadmap. This isn't a passenger aircraft, alas, but not to worry: The agency plans to hand the knowledge from its tests to private manufacturers, some of whom (such as Boom Technology) already have plans for supersonic passenger jets. It won't happen for a long while, but there should be a day when a flight across the country no longer chews up most of your day.

Source: Bloomberg

from Engadget http://ift.tt/2v1wEsZ
via IFTTT

India will ban driverless cars in order to protect jobs

As self-driving cars are being tested everywhere from the US to South Korea, Germany to Australia, reports today make it clear that it won't be happening in India. The country's transport and highways minister, Nitin Gadkari, told reporters today, "We won't allow driverless cars in India. I am very clear on this."

The statement wasn’t a reflection of safety concerns. Rather, the minister’s rejection of self-driving vehicles is about the jobs they would take away from drivers in the country. "We won’t allow any technology that takes away jobs. In a country where you have unemployment, you can’t have a technology that ends up taking people’s jobs," said Gadkari. He went on to say that while India was indeed short about 22,000 commercial drivers, the government was working on opening a number of training facilities across the country in order to get 5,000 more professional drivers on the road over the next few years.

However, according to earlier statements by former Uber CEO Travis Kalanick and Google CEO Sundar Pichai, India wasn't likely to get autonomous vehicles anytime soon anyway. The haphazard roads and chaotic traffic in parts of the country make it difficult to safely introduce driverless technology onto the roadways. But Indian company Tata Elxsi has been trying to get around those issues by testing self-driving vehicles on a track designed to resemble the roads and traffic of India. Complete with pedestrians, livestock, unsignaled lane merges and a lack of signage, the testing track is meant to give driverless cars as realistic an experience as possible while keeping them off India's public roads. How today's statement from Gadkari will impact Tata Elxsi's business plans isn't yet clear.

Via: The Times of India

Source: Hindustan Times

from Engadget http://ift.tt/2eJj86j
via IFTTT