How special relativity can help AI predict the future

https://www.technologyreview.com/2020/08/28/1007770/special-relativity-light-cones-ai-predict-future-causality-medicine/

Nobody knows what will happen in the future, but some guesses are a lot better than others. A kicked football will not reverse in midair and return to the kicker’s foot. A half-eaten cheeseburger will not become whole again. A broken arm will not heal overnight.

By drawing on a fundamental description of cause and effect found in Einstein’s theory of special relativity, researchers from Imperial College London have come up with a way to help AIs make better guesses too.

The world progresses step by step, every instant emerging from those that precede it. We can make good guesses about what happens next because we have strong intuitions about cause and effect, honed by observing how the world works from the moment we are born and processing those observations with brains hardwired by millions of years of evolution.

Computers, however, find causal reasoning hard. Machine-learning models excel at spotting correlations but are hard pressed to explain why one event should follow another. That’s a problem, because without a sense of cause and effect, predictions can be wildly off. Why shouldn’t a football reverse in flight? 

This is a particular concern with AI-powered diagnosis. Diseases are often correlated with multiple symptoms. For example, people with type 2 diabetes are often overweight and have shortness of breath. But the shortness of breath is not caused by the diabetes, and treating a patient with insulin will not help with that symptom. 

The AI community is realizing how important causal reasoning could be for machine learning and is scrambling to find ways to bolt it on.

Researchers have tried various ways to help computers predict what might happen next. Existing approaches train a machine-learning model frame by frame to spot patterns in sequences of actions. Show the AI a few frames of a train pulling out of a station and then ask it to generate the next few frames in the sequence, for example.

AIs can do a good job of predicting a few frames into the future, but the accuracy falls off sharply after five or 10 frames, says Athanasios Vlontzos at Imperial College London. Because the AI uses preceding frames to generate the next one in the sequence, small mistakes made early on—a few glitchy pixels, say—get compounded into larger errors as the sequence progresses.

Vlontzos and his colleagues wanted to try a different approach. Instead of getting an AI to learn to predict a specific sequence of future frames by watching millions of video clips, they allowed it to generate a whole range of frames that were roughly similar to the preceding ones and then pick those that were most likely to come next. The AI can make guesses about the future without having to learn anything about the progression of time, says Vlontzos.

To do this, the team developed an algorithm inspired by light cones, a mathematical description of the boundaries of cause and effect in spacetime, which was first proposed in Einstein’s theory of special relativity and later refined by his former professor Hermann Minkowski. Light cones emerge in physics because the speed of light is constant. They show the expanding limits of a ray of light—and everything else—as it emanates from an initial event, such as an explosion.

Take a sheet of paper and mark an event on it with a dot. Now draw a circle with that event at the center. The distance between the dot and the edge of the circle is the distance light has traveled in a period of time—say, one second. Because nothing, not even information, can travel faster than light, the edge of this circle is a hard boundary on the causal influence of the original event. In principle, anything inside the circle could have been affected by the event; anything outside could not.

After two seconds, light has traveled twice the distance and the circle’s radius has doubled: there are now many more possible futures for that original event. Picture these ever larger circles rising second by second out of the sheet of paper, and you have an upside-down cone with the original event at its tip. This is a light cone. A mirror image of the cone can also extend backwards, behind the sheet of paper; it will contain all possible pasts that could have led to the original event.
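The circle-on-paper picture can be written down directly: a point lies inside an event’s future light cone if it comes later in time and is no farther away than light could have traveled in the interval. A minimal sketch in Python (the two-dimensional coordinates and the c = 1 convention here are illustrative choices, not taken from the paper):

```python
# Sketch of a future light cone membership test in two spatial
# dimensions plus time, with the speed of light set to c = 1.
import math

def in_future_light_cone(event, point, c=1.0):
    """Return True if `point` (x, y, t) could have been causally
    influenced by `event`, i.e. it lies inside or on the event's
    future light cone."""
    ex, ey, et = event
    px, py, pt = point
    dt = pt - et
    if dt < 0:                      # point precedes the event
        return False
    distance = math.hypot(px - ex, py - ey)
    return distance <= c * dt       # light could have covered the gap

origin = (0.0, 0.0, 0.0)
print(in_future_light_cone(origin, (0.5, 0.0, 1.0)))  # inside: True
print(in_future_light_cone(origin, (3.0, 0.0, 1.0)))  # outside: False
```

Anything failing this test, however similar it looks, could not have been caused by the original event.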

Vlontzos and his colleagues used this concept to constrain the future frames an AI could pick. They tested the idea on two data sets: Moving MNIST, which consists of short video clips of handwritten digits moving around on a screen, and the KTH human action series, which contains clips of people walking or waving their arms. In both cases, they trained the AI to generate frames that looked similar to those in the data set. But importantly the frames in the training data set were not shown in sequence, and the algorithm was not learning how to complete a series.

They then asked the AI to pick which of the new frames were more likely to follow another. To do this, the AI grouped generated frames by similarity and then used the light-cone algorithm to draw a boundary around those that could be causally related to the given frame. Despite not being trained to continue a sequence, the AI could still make good guesses about which frames came next. If you give the AI a frame in which a short-haired person wearing a shirt is walking, then the AI will reject frames that show a person with long hair or no shirt, says Vlontzos. The work is in the final stages of review at NeurIPS, a major machine-learning conference.
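The selection step described above can be caricatured in a few lines. This toy sketch is not the team’s method: it stands in Euclidean distance between raw pixel values for the learned similarity measure, and uses a hand-set cone radius that grows linearly with the number of time steps.

```python
# Toy sketch of light-cone frame selection. Assumptions (not from
# the paper): frames are small numeric arrays, "similarity" is
# Euclidean distance in pixel space, and the cone's radius grows
# linearly with the number of steps ahead.
import numpy as np

def candidates_in_cone(current_frame, generated_frames, steps_ahead,
                       expansion_rate):
    """Keep only the generated frames close enough to `current_frame`
    to be causally reachable within `steps_ahead` time steps."""
    radius = expansion_rate * steps_ahead            # cone boundary
    current = np.asarray(current_frame, dtype=float).ravel()
    kept = []
    for frame in generated_frames:
        dist = np.linalg.norm(np.asarray(frame, dtype=float).ravel() - current)
        if dist <= radius:                           # inside the cone
            kept.append(frame)
    return kept

current = [0.0, 0.0, 0.0]
pool = [[0.1, 0.0, 0.0],   # small change: a plausible next frame
        [5.0, 5.0, 5.0]]   # drastic change: rejected as acausal
print(len(candidates_in_cone(current, pool, steps_ahead=1,
                             expansion_rate=1.0)))  # prints 1
```

The drastic frame (the long-haired, shirtless person, in the article’s example) falls outside the boundary and is discarded; the plausible one survives as a candidate future.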

An advantage of the approach is that it should work with different types of machine learning, as long as the model can generate new frames that are similar to those in the training set. It could also be used to improve the accuracy of existing AIs trained on video sequences.

To test the approach, the team had the cones expand at a fixed rate. But in practice, this rate will vary. A ball on a football field will have more possible future positions than a ball traveling along rails, for example. This means you would need a cone that expanded at a faster rate for the football.

Working out these speeds involves getting deep into thermodynamics, which isn’t practical. For now, the team plans to set the diameter of the cones by hand. But by watching video of a football game, say, the AI could learn how much and how fast objects moved around, which would enable it to set the diameter of the cone itself. An AI could also learn on the fly, observing how fast a real system changed and adjusting cone size to match it.
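One hypothetical way an AI could set the cone size itself, in the spirit of the learning-on-the-fly idea above, is to measure how much consecutive frames typically change and use that as the expansion rate. This is an assumption for illustration, not the team’s method, which for now sets the diameter by hand:

```python
# Hypothetical sketch: estimate a cone expansion rate from data by
# averaging the per-step change between consecutive frames. A fast-
# moving scene (a football game) yields a larger rate, and hence a
# wider cone, than a slow one (a ball on rails).
import numpy as np

def estimate_expansion_rate(frame_sequence):
    """Average distance between consecutive frames in the sequence."""
    frames = [np.asarray(f, dtype=float).ravel() for f in frame_sequence]
    steps = [np.linalg.norm(b - a) for a, b in zip(frames, frames[1:])]
    return sum(steps) / len(steps)

slow = [[0.0], [0.1], [0.2]]   # small frame-to-frame changes
fast = [[0.0], [1.0], [2.0]]   # large frame-to-frame changes
print(estimate_expansion_rate(slow))  # small rate, narrow cone
print(estimate_expansion_rate(fast))  # large rate, wide cone
```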

Predicting the future is important for many applications. Autonomous vehicles need to be able to predict whether a child is about to run into the road or whether a wobbling cyclist presents a hazard. Robots that need to interact with physical objects need to be able to predict how those objects will behave when moved around. Predictive systems in general will be more accurate if they can reason about cause and effect rather than just correlation.

But Vlontzos and his colleagues are particularly interested in medicine. An AI could be used to simulate how a patient might respond to a certain treatment—for example, spooling out how that treatment might run its course, step by step. “By creating all these possible outcomes, you can see how a drug will affect a disease,” says Vlontzos. The approach could also be used with medical images. Given an MRI scan of a brain, an AI could identify the likely ways a disease could progress.

“It’s very cool to see ideas from fundamental physics being borrowed to do this,” says Ciaran Lee, a researcher at University College London who works on causal inference at Babylon Health, a UK-based digital health-care provider, but wasn’t involved in this research. “A grasp of causality is really important if you want to take actions or decisions in the real world,” he says. It goes to the heart of how things come to be the way they are: “If you ever want to ask the question ‘Why?’ then you need to understand cause and effect.” 

via Technology Review Feed – Tech Review Top Stories https://ift.tt/1XdUwhl

August 28, 2020 at 11:20AM

Beyond Meat starts direct sales of its plant-based patties and sausages

https://www.engadget.com/beyond-meat-web-store-080538312.html

Beyond Meat’s plant-based products have expanded through various restaurants and grocery chains, but now fans of its product can order directly. Its competitor Impossible Foods launched direct-to-consumer sales back in June, as both companies pitch their faux-meat to customers who may be avoiding those same restaurants and grocery stores as much as possible.

Their offerings are fairly similar, with several “family size” bundles that start around $50 — enough to defray the cost of free two-day shipping in the continental US and likely high enough that they don’t directly compete with local grocery stores.

Packaging the products that way makes them less appealing as samplers if you haven’t tried them out yet, although Beyond’s “trial pack” puts one of each of its various products in a box for $50. Still, if you’re either all-in on the post-animal meat lifestyle or just want to mix it up, then you can get hold of either one without stepping outside.

via Engadget http://www.engadget.com

August 28, 2020 at 03:12AM

Facebook says Apple blocked message noting 30 percent App Store fee

https://www.engadget.com/facebook-says-apple-blocked-30-percent-app-store-fee-notice-100501984.html

Apple wouldn’t allow Facebook to tell its users that the tech giant is getting a cut from the sales of paid online events, the social network told Reuters. Earlier this month, Facebook launched a new feature that gives businesses and creators a way to charge for the online events they host on the platform. Since the company rolled it out to help small businesses during the pandemic, it vowed not to collect fees from paid events “for at least the next year.”

Facebook said it also asked Apple to reduce its 30 percent App Store tax or to at least allow it to use Facebook Pay to collect users’ payments directly. By doing the latter, it can absorb the costs for businesses, which will then get 100 percent of the revenue they generate. Apple refused on both counts, prompting the social network to join other developers like Epic Games in putting Apple on blast for its App Store policies.

In an effort to let its users know that not everything they pay will go to the hosts, Facebook added a notice on the purchase screen for iOS clearly stating that Apple will take a 30 percent cut. Apparently, though, Apple blocked it from showing that notice to users, citing an App Store rule that prohibits developers from showing “irrelevant” information. The company told Reuters in a statement:

“Now more than ever, we should have the option to help people understand where money they intend for small businesses actually goes. Unfortunately Apple rejected our transparency notice around their 30% tax but we are still working to make that information available inside the app experience.”

While Google also takes a 30 percent cut from in-app purchases, hosts will still get 100 percent of their revenue from Android purchases where Facebook Pay is available. Reuters notes, however, that the line Facebook added for Android stating that it doesn’t charge fees for purchases isn’t showing up either.

via Engadget http://www.engadget.com

August 28, 2020 at 05:12AM

Watch a Japanese ‘flying car’ take a piloted test flight

https://www.autoblog.com/2020/08/28/skydrive-evtol-piloted-test-flight/


TOKYO — The decades-old dream of zipping around in the sky as simply as driving on highways may be getting closer to reality.

Japan’s SkyDrive Inc., one of many “flying car” projects around the world, has carried out a successful, though modest, test flight with one person aboard.

In a video shown to reporters on Friday, a contraption that looked like a slick motorcycle with propellers lifted several feet (1-2 meters) off the ground, and hovered in a netted area for four minutes.

Tomohiro Fukuzawa, who heads the SkyDrive effort, said he hopes “the flying car” can be made into a real-life product by 2023, but he acknowledged that making it safe was critical.

“Of the world’s more than 100 flying car projects, only a handful have succeeded with a person on board,” he told The Associated Press.

“I hope many people will want to ride it and feel safe.”

The machine so far can fly for just five to 10 minutes, but if that can become 30 minutes, it will have more potential, including exports to places like China, Fukuzawa said.

Unlike airplanes and helicopters, eVTOL, or “electric vertical takeoff and landing,” vehicles offer quick point-to-point personal travel, at least in principle.

They could do away with the hassle of airports and traffic jams, as well as the cost of hiring pilots, since they could fly automatically.

Battery sizes, air traffic control and other infrastructure issues are among the many potential challenges to commercializing them.

“Many things have to happen,” said Sanjiv Singh, professor at the Robotics Institute at Carnegie Mellon University, who co-founded Near Earth Autonomy, near Pittsburgh, which is also working on an eVTOL aircraft.

“If they cost $10 million, no one is going to buy them. If they fly for 5 minutes, no one is going to buy them. If they fall out of the sky every so often, no one is going to buy them,” Singh said in a telephone interview.

The SkyDrive project began humbly as a volunteer project called Cartivator in 2012, with funding by top Japanese companies including automaker Toyota Motor Corp., electronics company Panasonic Corp. and video-game developer Bandai Namco.

A demonstration flight three years ago went poorly. But it has improved and the project recently received another round of funding, of 3.9 billion yen ($37 million), including from the Development Bank of Japan.

The Japanese government is bullish on “the Jetsons” vision, with a “road map” for business services by 2023, and expanded commercial use by the 2030s, stressing its potential for connecting remote areas and providing lifelines in disasters.

Experts compare the buzz over flying cars to the days when the aviation industry got started with the Wright Brothers and the auto industry with the Ford Model T.

Lilium of Germany, Joby Aviation in California and Wisk, a joint venture between Boeing Co. and Kitty Hawk Corp., are also working on eVTOL projects.

Sebastian Thrun, chief executive of Kitty Hawk, said it took time for airplanes, cell phones and self-driving cars to win acceptance.

“But the time between technology and social adoption might be more compressed for eVTOL vehicles,” he said.

via Autoblog https://ift.tt/1afPJWx

August 28, 2020 at 08:37AM