Adobe Lightroom mobile now captures RAW images in HDR mode

If you enjoy capturing high dynamic range (HDR) images with your phone, Adobe just added a new feature to Lightroom mobile that might come in handy. Starting today on both Android and iOS versions of the app, you can capture those HDR scenes as RAW files. The software automatically scans your subject to determine the ideal exposure range before snapping three photos in Adobe’s DNG RAW format. Lightroom mobile will then employ algorithms to do all the aligning, merging, tone mapping and more to build the final 32-bit RAW image.
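Adobe hasn’t published the details of its merge pipeline, but the core idea behind bracketed-exposure merging can be sketched in a few lines. In this purely illustrative example (the hat-shaped weighting and the exposure times are assumptions, not Adobe’s actual algorithm), well-exposed pixels from each frame are weighted most heavily and scaled back to a common radiance:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed frames into a single high-dynamic-range radiance map.

    frames: arrays of pixel values in [0, 1], one per exposure.
    exposure_times: relative shutter time for each frame.
    A hat-shaped weight favors mid-tones and down-weights clipped
    shadows and highlights before averaging in radiance space.
    """
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at 0.5, zero at 0 and 1
        num += weight * img / t                  # divide by time -> radiance
        den += weight
    return num / np.maximum(den, 1e-6)

# Three simulated exposures of the same scene (times are illustrative).
scene = np.linspace(0.0, 4.0, 8)                 # "true" radiance values
times = [0.25, 1.0, 4.0]
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(frames, times)             # recovers unclipped radiance
```

For unclipped pixels, the weighted merge recovers the scene radiance from whichever frame captured it without saturation; a real pipeline like Adobe’s would also align the frames and tone-map the result.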

Adobe says the tech at work in Lightroom mobile is the same quality as what you’d encounter when using Adobe Camera Raw and Lightroom on the desktop. HDR photography has certainly come a long way from the days of manually editing together a few photos taken at different exposures to produce the desired effect. The company isn’t the first to offer an HDR tool on a mobile device, but it does offer the convenience of being able to sync those RAW snapshots across devices if you’re a Creative Cloud subscriber.

Unfortunately, there are some device restrictions on the new RAW HDR capture tool. On iOS, you’ll need an iPhone 7/7 Plus, iPhone 6s/6s Plus, iPhone SE or 9.7-inch iPad Pro, as those are the Apple mobile devices capable of capturing DNG photos. For Android users, the update only supports the Samsung Galaxy S7/S7 Edge and the Google Pixel and Pixel XL. Adobe says it restricted the list because it needed to ensure stability and high-quality output from those algorithms, and the Galaxy S7/S7 Edge and Pixel handsets have the processing power under the hood to make that happen. The company is working on adding more devices to the fold "as soon as possible."

In terms of other updates to Lightroom mobile, iOS users can now export original files imported through Lightroom mobile and Lightroom on the web. Yes, that includes those DNG RAW images. You can also now use swipe gestures to rate and review photos, and there’s a new Notification Center widget that offers quick access to the in-app camera. On Android, Lightroom mobile’s linear and radial selection tools, which debuted on the iOS version last year, are now available.

Source: Adobe

from Engadget http://ift.tt/2mesUPt
via IFTTT

Nintendo Switch controllers can steer games on your computer

Ever since gamers discovered that the Nintendo Switch’s Pro Controller works with computers, there’s been a lingering question: what about the Joy-Cons you get with the system itself? Thankfully, they work too. Both Nintendo Actu and Sam Williams have verified that the peripherals work as Bluetooth controllers on Macs and Windows PCs, so long as you use an app that binds buttons to mouse and keyboard controls. They should work with Android, too, although Nintendo Actu warns that it saw serious lag; your experience may vary depending on the mobile device you’re using.

This will only be of limited use given that you’re only getting a relatively basic gamepad with each Joy-Con, and you may have to change your configurations with each game. With that said, it’s still a treat for Switch owners who like to play on their PCs and would rather not buy another gamepad unless it’s absolutely necessary.

Via: The Verge, TabTimes

Source: Nintendo Actu (Twitter), Sam Williams (Twitter)

Facebook now flags fake news

After taking heat for months in the run-up to the presidential election, Facebook has been cracking down on fake news spreading through its social network. The company recently began using third-party fact-checkers and gave its users the ability to manually report fake news posts. Late last week, the company announced that it will soon attach "Disputed" labels to these false stories as well.

Facebook originally promised to do this back in December (along with the fact-checkers, curated articles and manual flagging). Under this system, bogus posts from disreputable sites will still show up in your timeline, but they’ll be accompanied by a small warning banner. These banners are applied after a lengthy vetting process. First, the post has to be flagged either by a certain number of users or by the company’s automated software. It is then sent to a fact-checking website like Snopes or Politifact for review. If two or more fact-checkers also deem it false, Facebook applies the banner.
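Facebook hasn’t published its exact thresholds, but the vetting steps described above reduce to a simple decision rule. Here’s a toy sketch (the user-flag threshold and the function name are invented for illustration; only the two-or-more-fact-checkers rule comes from the article):

```python
USER_FLAG_THRESHOLD = 5  # invented; Facebook hasn't published the real number

def should_apply_disputed_banner(user_flags, flagged_by_software, checker_verdicts):
    """Decide whether a post gets the 'Disputed' banner.

    user_flags: how many users reported the post as fake.
    flagged_by_software: whether automated detection caught it.
    checker_verdicts: one boolean per fact-checking site, True meaning
        the site judged the story to be false.
    """
    # Step 1: flagged by enough users OR by the automated software.
    if user_flags < USER_FLAG_THRESHOLD and not flagged_by_software:
        return False
    # Step 2: at least two fact-checkers must independently call it false.
    return sum(checker_verdicts) >= 2

# A post reported by many users and judged false by two checkers.
print(should_apply_disputed_banner(12, False, [True, True]))   # -> True
# One dissenting checker keeps the banner off.
print(should_apply_disputed_banner(12, False, [True, False]))  # -> False
```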

This process is time-intensive, if nothing else. In a recent case reported by Gizmodo, a fake news post from a satirical entertainment website, which claimed that Trump’s own Android phone was responsible for the recent spate of leaks, remained unlabeled for nearly five days after it was initially posted.

And even when accurately labeled, noting that something is "disputed" rather than "false" does little to change the minds of people who already reject reporting by mainstream news sources in favor of fringe conspiracy sites like Infowars or Breitbart. So we’ll have to wait and see whether this new labeling scheme makes a difference in the tenor of discourse on Facebook’s network or whether it will be another bust like the site’s crackdown on private gun sales.

Via: Recode

Source: Gizmodo

MIT finds an easy way to control robots with your brain

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted robots to be a more natural extension of our bodies. See, you’d usually have to issue vocal or very specific mental commands to control machines. But the method the CSAIL team developed works simply by reading your brain and detecting if you’ve noticed an error as the robot performs its tasks.

You’d have to wear an EEG cap for the technique to work, since CSAIL’s system needs to be able to read and record your brain activity. The machine-learning algorithms the team created then classify brain waves within 10 to 30 milliseconds, focusing on detecting "error-related potentials," or ErrPs. These are signals your brain generates when you spot a mistake. If you disagree with a robot’s decision to, say, place a can of paint in a basket marked "wire," the system picks up on the ErrPs in your thoughts to correct the machine’s course of action.
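CSAIL’s actual classifiers are trained on real EEG recordings; purely as an illustrative sketch (the waveform template, threshold and signal shapes here are all invented), detecting an ErrP in a short window of samples can be thought of as template matching:

```python
import numpy as np

def detect_errp(window, template, threshold=0.7):
    """Flag a possible error-related potential in one EEG window.

    Computes the normalized correlation between the window and a known
    ErrP waveform template; a high score suggests the operator just
    noticed the robot making a mistake.
    """
    w = (window - window.mean()) / (window.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float(np.dot(w, t) / len(w)) > threshold

# Invented ErrP-like template: a damped oscillation over ~50 samples.
x = np.linspace(0.0, 1.0, 50)
template = np.sin(2 * np.pi * x) * np.exp(-3 * x)

noisy_errp = template + 0.1 * np.random.default_rng(0).normal(size=50)
background = 0.1 * np.random.default_rng(1).normal(size=50)
print(detect_errp(noisy_errp, template))   # template buried in noise: detected
print(detect_errp(background, template))   # noise alone: not detected
```

A production system would use a classifier trained on the wearer’s own signals in place of this fixed template, which is consistent with Rus’ point that the machine adapts to you.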

CSAIL Director Daniela Rus explains:

"As you watch the robot, all you have to do is mentally agree or disagree with what it is doing. You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around."

The team can also continue enhancing the system until it’s able to handle more complex multiple-choice tasks, since ErrPs get stronger the bigger the error is. Rus and her team believe the method would give us a greater ability to "supervise factory robots, driverless cars and other technologies we haven’t even invented yet." To test their method, the scientists used "Baxter," a machine with two hands and a tablet face from Rethink Robotics. You can watch them demo their system in the video below:

Source: MIT CSAIL (1), (2)

The tech that makes MMO development easy for indies

SpatialOS is the technical foundation that makes massive, persistent, online world-building possible, even for small video game studios. Think of large, mainstream games like Destiny or Elder Scrolls Online: These are huge universes that support thousands of players at once. It typically takes millions of dollars and hundreds of people multiple years to make one of these games (let alone support it post-launch), which is one reason it’s notoriously difficult to secure funding for the development of massively multiplayer online games.

However, SpatialOS puts a spin on this standard. Improbable’s computational platform offers cloud-based server and engine support for MMO games, allowing developers to easily create and host online, multiplayer experiences with persistent features. SpatialOS first made a splash at GDC 2015, when it promised to power MMO games with a swarm-like system of servers that switch on as they’re needed in locations around the world.

Since then, Improbable has secured a deal with Google and launched SpatialOS in alpha. As a testament to the platform’s staying power, development on one of the first titles to use SpatialOS, Worlds Adrift, is still chugging along nicely.

Worlds Adrift comes from Bossa Studios, the home of Surgeon Simulator, I am Bread and a handful of other ridiculous, popular games. Worlds Adrift is bigger than anything in Bossa’s repertoire: It’s a gigantic sandbox-style experience that places players in a shared universe filled with unique floating islands, flying airships and Spider-Man-like grappling hooks.

Worlds Adrift allows players to explore an ecosystem spanning hundreds of kilometers and thousands of individual islands. The islands are a game all their own: the Worlds Adrift Island Creator hit Steam in April, allowing any player to dive into the developer toolbox and design their own unique landscapes. Thousands of player-created islands are live in Worlds Adrift right now, with more arriving every day.

For Bossa, this custom approach to island design replaces procedural generation, a more common development practice that uses algorithms to create varied, yet limited, landscapes.

"We’ve now actually gotten to the point where the entire world is all hand-crafted," designer Luke Williams says.

One aspect of Worlds Adrift that sets it apart from other online games is its persistent features. Cut down a tree and it stays down for all players in the game, until someone comes along to use the wood in an airship or fortress. The log doesn’t disappear into the ground or suddenly re-form into a tree again, as would happen in many modern games. The animals and giant bugs in Worlds Adrift are persistent as well — even when no players are in the area, these creatures still carry out lives of their own, flying around the map, aging, mating and dying. Just like they would in reality.
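Bossa hasn’t shared its simulation code, but "persistence without players" boils down to the server ticking every entity whether or not anyone is nearby. A minimal sketch (class names, rates and lifespans are invented):

```python
import random

class Creature:
    """A world entity that keeps being simulated with no players around."""

    def __init__(self, species, lifespan_ticks):
        self.species = species
        self.lifespan = lifespan_ticks
        self.age = 0
        self.alive = True

    def tick(self, world):
        """Advance one server tick: age, maybe die, maybe reproduce."""
        if not self.alive:
            return
        self.age += 1
        if self.age >= self.lifespan:
            self.alive = False                 # death persists for everyone
        elif random.random() < 0.01:           # occasional mating
            world.append(Creature(self.species, self.lifespan))

random.seed(42)                                # reproducible run
world = [Creature("sky-whale", lifespan_ticks=200) for _ in range(10)]
for _ in range(500):                           # server ticks, players or not
    for creature in list(world):
        creature.tick(world)

survivors = [c for c in world if c.alive]      # only later generations remain
```

In SpatialOS terms this per-entity simulation is spread across a swarm of server workers rather than run in a single loop, but the persistence guarantee is the same: state changes outlive any player’s session.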

SpatialOS is a promising platform that’s already opening up MMO development for studios of all sizes. Worlds Adrift is just one of the first games to use Improbable’s swarm-like server technology — another is Vanishing Stars — and it certainly won’t be the last.

Click here to catch up on the latest news from GDC 2017!

A San Francisco startup 3D printed a whole house in 24 hours

San Francisco-based startup Apis Cor built a whole house in a Russian town within 24 hours. It didn’t repair an existing home or use prefabricated parts to make that happen: its technology’s secret lies in 3D printing. The company used a mobile 3D printer to print out the house’s concrete walls, partitions and building envelope. Workers had to manually paint it and install the roofing materials, wiring and hydro-acoustic and thermal insulation, but even those tasks didn’t take much time.

The result is a 400-square-foot house, around the size of a standard hotel room. It’s no mansion, but it could be a good choice for people who prefer tiny homes. Apis Cor says the whole house set it back $10,134, with the door and windows eating up the largest part of the budget. That sounds about right for a tiny home, though that amount doesn’t seem to include the cost of the land itself.

The company has uploaded a video of the process, which you can watch below. It even shows what the interior looks like with appliances, including a curved Samsung TV that fits the house’s curved wall. If Apis Cor does start 3D printing houses for customers, owners could pick any shape they want and even opt for something larger than this compact abode.

Via: The Daily Dot

Source: Apis Cor
