Google ‘Play Pass’ is a $5 monthly Android app subscription

https://www.engadget.com/2019/08/01/google-play-pass-monthly-android-app-game-subscription/

Google is testing a service called Play Pass that would offer users "hundreds of premium apps and games" for a monthly fee, according to Android Police. The idea of the offering cropped up last year on XDA Developers after users spotted code references in Google Play. It could be similar to Apple’s Arcade, but offer a selection of both apps and games (rather than just games) for $4.99 per month, "with no ads or in-app purchases," according to the screenshots.

The service is fully fleshed out and appears to be in the final stages of testing. Google is promising "puzzle games to premium music apps … [with] access to hundreds of premium apps and games" — though the emphasis appears to be on games. Titles spotted so far include the $7.99 Stardew Valley, Marvel Pinball and Risk, among others. The test service was originally spotted by a user, but Google confirmed to Android Police that it is indeed real and in testing.

Apple’s Arcade is launching this fall with an impressive slate of developers including LEGO, Annapurna Interactive, Cartoon Network and Sega. Apple is also chipping in development costs and relying on exclusive titles, which doesn’t seem to be the case for Play Pass. In any case, Google could be planning to counter it by launching its service around the same time.

Such a service makes some sense, given the popularity of subscriptions for music and movies. However, it remains to be seen whether the model works as well for apps, and particularly for games.

Source: Android Police

via Engadget http://www.engadget.com

August 1, 2019 at 02:48AM

LightSail 2 Becomes First Spacecraft to Change Orbit Using Sunlight

http://blogs.discovermagazine.com/d-brief/?p=37372

LightSail 2 took this picture of itself, with Earth in the background, while deploying its sail on July 23. (Credit: The Planetary Society)
The Planetary Society, a non-profit organization focused on space exploration, has successfully transferred its LightSail 2 spacecraft from one orbit to another using only the power of sunlight, a first. The recent success not only proves the effectiveness of solar sailing technology, but also opens up a new, more cost-effective way to propel small spacecraft.

via Discover Main Feed https://ift.tt/1dqgCKa

July 31, 2019 at 05:14PM

One chip to rule them all: It natively runs all types of AI software

https://arstechnica.com/?p=1544141

Skynet light? The Tianjic-controlled bike stalks one of its creators.

We tend to think of AI as a monolithic entity, but it has actually developed along multiple branches. One of the main branches involves performing traditional calculations but feeding the results into another layer that takes input from multiple calculations and weighs them before performing its calculations and forwarding those on. Another branch involves mimicking the behavior of traditional neurons: many small units communicating in bursts of activity called spikes, and keeping track of the history of past activity.

Each of these, in turn, has different branches based on the structure of its layers and communications networks, types of calculations performed, and so on. Rather than being able to act in a manner we would recognize as intelligent, many of these are very good at specialized problems, like pattern recognition or playing poker. And processors that are meant to accelerate the performance of the software can typically only improve a subset of them.

That last division may have come to an end with the development of Tianjic by a large team of researchers primarily based in China. Tianjic is engineered so that its individual processing units can switch from spiking communications back to binary and perform a large range of calculations, in almost all cases faster and more efficiently than a GPU can. To demonstrate the chip’s abilities, the researchers threw together a self-driving bicycle that ran three different AI algorithms on a single chip simultaneously.

Divided in two

While there are many types of AI software, the key division the researchers identify is between what can be termed layered calculations and spiking communications. The former (which includes things like convolutional neural networks and deep-learning algorithms) uses layers of calculating units, each of which feeds the results of its calculations into the next layer as standard binary data. Each of these units has to keep track of which other units it communicates with and how much weight to give each of its inputs.

On the other side of the divide are approaches inspired more directly by biology. These communicate in analog “spikes” of activity, rather than data. Individual units have to keep track of not only their present state, but their past history. That’s because their probability of sending a spike depends on how often they’ve received spikes in the past. They’re also arranged in large networks, but they don’t necessarily have a clean layered structure or perform the same sort of detailed computations within any unit.
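To make the contrast concrete, here is a minimal sketch (in Python, and not anything taken from the Tianjic paper) of the two styles of unit described above: a layered unit that weighs its inputs and passes a number onward, and a leaky integrate-and-fire style neuron whose output depends on its history of incoming spikes.

```python
# A minimal, illustrative sketch of the two compute styles; names and
# constants here are assumptions for the example, not the chip's design.
import numpy as np

def layered_unit(inputs: np.ndarray, weights: np.ndarray) -> float:
    """One unit in a conventional layered network: weigh the inputs, apply a
    nonlinearity, and hand a numeric value to the next layer."""
    return float(np.maximum(0.0, inputs @ weights))  # ReLU activation

class SpikingUnit:
    """One neuron-like unit: it accumulates incoming spikes over time and
    fires only when its internal state crosses a threshold."""
    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        self.potential = 0.0       # internal state, shaped by past activity
        self.threshold = threshold
        self.leak = leak           # how quickly the history fades

    def step(self, incoming_spikes: int) -> bool:
        self.potential = self.potential * self.leak + 0.3 * incoming_spikes
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return True            # emit a spike
        return False

# A layered unit produces a number every time it is evaluated; a spiking unit
# produces an all-or-nothing event whose timing carries the information.
```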

Both of these approaches have benefitted from dedicated hardware, which tends to be at least as good as implementing the software on GPUs and far more energy-efficient. (One example of this is IBM’s TrueNorth processor.) But the vast difference in communications and calculations between the classes has meant that a processor is only good for one or the other type.

That’s what the Tianjic team has changed with what it’s calling the FCore architecture. FCore is designed so that the two different classes of AI can either be represented by a common underlying compute architecture or easily reconfigured on the fly to handle one or the other.

To enable communications among its compute units, FCore uses the native language of traditional neural networks: binary. But FCore is also able to output spikes in a binary format, allowing it to communicate in terms that a neuron-based algorithm can understand. Local memory at each processing unit can be used either for tracking the history of spikes or as a buffer for input and output data. Some of the calculation hardware needed for neural networks is shut down and bypassed when in artificial neuron mode.

In the chip

With these and a few additional features implemented, each individual compute unit in an FCore can be switched between the two modes, performing either type of calculation and communication as needed. More critically, a single unit can be set into a sort of hybrid mode. That means taking input from one type of AI algorithm but formatting its output so that it’s understood by another—reading spikes and outputting data, or the opposite. That also means any unit on the chip can act as a translator between two types of algorithms, allowing them to communicate with each other when they’re run on the same chip.
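The article doesn't reproduce FCore's circuitry at this level of detail, but the hybrid-mode idea can be sketched in a few lines. The rate-coding scheme and class names below are assumptions made purely for illustration, not the chip's actual mechanism.

```python
# Illustrative sketch of a "hybrid" unit that translates between the two
# representations: spikes in / numbers out, or numbers in / spikes out.
import numpy as np

class HybridUnit:
    def spikes_to_value(self, spike_train: np.ndarray) -> float:
        # Interpret a window of binary spikes as a firing rate, so a
        # value-based (layered) network downstream can consume it.
        return float(spike_train.mean())

    def value_to_spikes(self, value: float, window: int = 16, rng=None) -> np.ndarray:
        # Encode a numeric activation as a probabilistic spike train, so a
        # spiking network downstream can consume it.
        rng = rng if rng is not None else np.random.default_rng(0)
        return (rng.random(window) < np.clip(value, 0.0, 1.0)).astype(np.uint8)

# With units like this sitting between two sub-networks, a convolutional
# network and a spiking network on the same chip can exchange results.
unit = HybridUnit()
spikes = unit.value_to_spikes(0.75)
print(unit.spikes_to_value(spikes))  # roughly 0.75, recovered from the spike train
```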

The FCore architecture was also designed to scale. The map of connections among its compute units is held in a bit of memory that’s separate from the compute units themselves, and it’s made large enough to allow connections to be made external to an individual chip. Thus, a single neural network could potentially be spread across multiple cores in a processor or even multiple processors.

In fact, the Tianjic chip is made up of multiple FCores (156 of them) arranged in a 2D mesh. In total, there are about 40,000 individual compute units on the chip, which implies each FCore has 256 of them. It’s fabricated on a 28-nanometer process, several generations behind the cutting-edge processes used by desktop and mobile chipmakers. Despite that, it can shift over 600GB per second internally and perform nearly 1.3 tera-ops per second when run at 300MHz.
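A quick back-of-envelope check of those reported figures, using only the numbers quoted above:

```python
# Reported figures from the article, not independent measurements.
fcores = 156
total_units = 40_000
units_per_fcore = total_units / fcores   # ~256 compute units per FCore
tera_ops = 1.3e12                         # ~1.3 tera-ops per second (reported)
clock_hz = 300e6                          # 300 MHz clock
ops_per_cycle = tera_ops / clock_hz       # a few thousand ops per cycle, chip-wide
print(round(units_per_fcore), round(ops_per_cycle))
```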

Despite the low clock speed, the Tianjic put up some impressive numbers when run against the same algorithms implemented on an NVIDIA Titan Xp. Performance ranged from 1.6 to 100 times that of the GPU, depending on the algorithm. And when energy use was considered, the performance per Watt was almost comical, ranging from 12x all the way up to over 10,000x. Other dedicated AI processors have had strong performance per Watt, but they haven’t been able to run all the different types of algorithms demonstrated here.

Like riding a… well, you know

On its own, this would have been an interesting paper. But the research team went beyond by showing that Tianjic’s abilities could be put to use even in its experimental form. “To demonstrate the utility of building a brain-like cross-paradigm system,” the researchers write, “we designed an unmanned bicycle experiment by deploying multiple specialized networks in parallel within one Tianjic chip.”

The bike did object detection via a convolutional neural network, and a continuous attractor neural network provided target tracking to allow the bike to follow a researcher around. Meanwhile, a spiking neural network allowed the bike to follow voice commands. Something called a multilayer perceptron tracked the bike’s balance. And all of these inputs were coordinated by a neural state machine based on a spiking neural network.
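The paper's actual network code isn't shown here, but structurally the demo amounts to several specialized models running side by side and feeding a single arbiter. A rough sketch, with hypothetical names standing in for the real networks:

```python
# Structural sketch of the bicycle demo's control loop; the model names and
# interfaces are assumptions for illustration, not the authors' code.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    image: object   # camera frame, used for detection and target tracking
    audio: object   # microphone window, used for voice commands
    imu: object     # inertial readings, used for balance control

def control_step(frame: SensorFrame, models: dict, state_machine) -> dict:
    # Each specialized network handles one modality, running in parallel on the chip.
    detections = models["cnn_detector"](frame.image)   # convolutional net: obstacles
    target = models["cann_tracker"](frame.image)       # attractor net: person to follow
    command = models["snn_voice"](frame.audio)         # spiking net: voice commands
    balance = models["mlp_balance"](frame.imu)         # perceptron: keep the bike upright
    # A neural state machine arbitrates among the outputs and produces a
    # single steering/speed decision for the bike.
    return state_machine(detections, target, command, balance)
```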

Somewhere on that bicycle is a Tianjic chip.

And it worked. While the bike wasn’t self-driving in the sense that it was ready to take someone through the bike lanes of a major city, it was certainly good enough to be a researcher’s faithful companion during a walk around a test track that included obstacles.

Overall, this is an impressive bit of work. Either the processor alone or the automated bicycle would have made a solid paper on its own. And the idea of getting a single chip to natively host two radically different software architectures was a bold one.

But there is one caution worth pointing out: the researchers posit this as a route to artificial general intelligence. In a lot of ways, Tianjic does resemble a brain: the brain uses a single architecture (the neuron) to host a variety of different processes that, collectively, make sense of the world and plan actions that respond to it. To an extent, the researchers are right that being able to run and integrate multiple algorithms at once is a path toward something like that.

But this is still not necessarily a route to general intelligence. In our brain, specialized regions—the algorithm equivalent—can perform a collection of poorly defined and only vaguely related activities. And a single task (like deciding where to focus our attention) takes myriad inputs, which range from our recent history to our emotional state to what we’re holding in temporary memory to biases built up through millions of years of evolution. So just being able to run multiple algorithms is still a long way off from anything we’d recognize as intelligence.

Nature, 2019. DOI: 10.1038/s41586-019-1424-8

via Ars Technica https://arstechnica.com

July 31, 2019 at 05:15PM

Apple’s AirDrop and password sharing features can leak iPhone numbers

https://arstechnica.com/?p=1544215

Apple makes it easy for people to locate lost iPhones, share Wi-Fi passwords, and use AirDrop to send files to other nearby devices. A recently published report demonstrates how snoops can capitalize on these features to scoop up a wealth of potentially sensitive data that in some cases includes phone numbers.

Simply having Bluetooth turned on broadcasts a host of device details, including its name, whether it’s in use, if Wi-Fi is turned on, the OS version it’s running, and information about the battery. More concerning: using AirDrop or Wi-Fi password sharing broadcasts a partial cryptographic hash that can easily be converted into an iPhone’s complete phone number. The information—which in the case of a Mac also includes a static MAC address that can be used as a unique identifier—is sent in Bluetooth Low Energy packets.

The information disclosed may not be a big deal in many settings, such as work places where everyone knows everyone anyway. The exposure may be creepier in public places, such as a subway, a bar, or a department store, where anyone with some low-cost hardware and a little know-how can collect the details of all Apple devices that have BLE turned on. The data could also be a boon to companies that track customers as they move through retail outlets.

As noted above, in the event someone is using AirDrop to share a file or image, they’re broadcasting a partial SHA256 hash of their phone number. In the event Wi-Fi password sharing is in use, the device is sending partial SHA256 hashes of its phone number, the user’s email address, and the user’s Apple ID. While only the first three bytes of the hash are broadcast, researchers with security firm Hexway (which published the research) say those bytes provide enough information to recover the full phone number.
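To see why a three-byte prefix offers so little protection, note that phone numbers come from a small, enumerable space: an observer can hash every candidate number and keep the ones whose prefix matches what was broadcast. The sketch below illustrates the idea; the exact string format Apple hashes is an assumption here, and Hexway's real tooling is more refined (it also exploits the structure of phone numbering plans to shrink the search space).

```python
# Illustrative brute-force recovery of a phone number from a 3-byte SHA256
# prefix. The "+1<area code><7 digits>" format is an assumption for the example.
import hashlib

def prefix3(number: str) -> bytes:
    return hashlib.sha256(number.encode()).digest()[:3]

def candidates_for(broadcast_prefix: bytes, area_code: str = "415") -> list:
    # Enumerate one hypothetical US area code: only 10 million local numbers.
    matches = []
    for local in range(10_000_000):
        number = f"+1{area_code}{local:07d}"
        if prefix3(number) == broadcast_prefix:
            matches.append(number)
    return matches

# A 3-byte prefix (about 16.7 million possible values) eliminates the vast
# majority of candidates; the other broadcast fields help pin down the rest.
```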

(Video from the report: "Apple AirDrop mobile phone catcher," a demonstration of the attack.)

Hexway’s report includes proof-of-concept software that demonstrates the information broadcast. Errata Security CEO Rob Graham installed the proof-of-concept on a laptop that was equipped with a wireless packet sniffer dongle, and within a minute or two he captured details of more than a dozen iPhones and Apple Watches that were within radio range of the bar where he was working. The highlighted device in the middle of the picture below is his iPhone.

(Screenshot: Rob Graham)

“It’s not too bad, but it’s still kind of creepy that people can get the status information, and getting the phone number is bad,” he said. It’s not likely, he added, that Apple can prevent phone numbers and other information from leaking, since they’re required—in some form, anyway—for devices to seamlessly connect with other devices a user trusts.

The MAC addresses shown in the image above aren’t the actual device numbers, but rather temporary MAC addresses that rotate regularly. But Graham said that, unlike iPhone and Apple Watch addresses, MAC addresses for Macintosh computers aren’t obfuscated this way. By broadcasting only partial hashes of phone numbers, email addresses, and Apple IDs, Apple is clearly making an effort to make data collection hard. But the reality of rainbow tables, automated word or number lists, and lightning-fast hardware means it’s often trivial to crack those hashes.

“This is the classic trade-off that companies like Apple try to make when balancing ease of use vs privacy/security,” independent privacy and security researcher Ashkan Soltani told Ars. “In general, automatic discovery protocols often require the exchange of personal information in order to make them work—and as such—can reveal things that could be considered sensitive. Most security and privacy minded folks I know disable automatic discovery protocols like AirDrop, etc just out of principle.”

via Ars Technica https://arstechnica.com

August 1, 2019 at 06:14AM