Vertical Forest Towers In China To Produce Over 132 Pounds Of Oxygen Daily

vertical-forest-buildings-1.jpg
These are the Nanjing Towers being constructed in Nanjing, China, and scheduled for completion sometime next year. They will be Asia’s first vertical forests (previously: a similar building in Switzerland, designed by the same architect, Stefano Boeri, who clearly loves greenery), feature over 1,000 trees and 2,500 shrubs, and produce over 132 pounds of oxygen per day. For reference, that’s almost enough for me provided I don’t panic and start breathing heavy, which is unlikely. I am a constant panic attack. I’m pretty sure the last time I got a good night’s sleep I was still in utero. Since then life’s pretty much been a constant fear of monsters, ghosts, school, work, relationships, health, and not getting stabbed by a stranger on public transportation. And my parents act like I’m the one that’s crazy for still wanting to sleep in their room at night.
Keep going for two more shots.
vertical-forest-buildings-2.jpg
vertical-forest-buildings-3.jpg
Thanks to Stephanie B, who’s just happy to keep her one plant alive.

from Geekologie – Gadgets, Gizmos, and Awesome http://ift.tt/2knMmYV
via IFTTT

NASA expands emerging space economy with a commercial airlock

The first commercially funded airlock is coming to the International Space Station, and luckily it’s not being built by Weyland-Yutani.

Demand for deployments of CubeSats — miniaturized satellites used for research — and other small payloads from both commercial customers and NASA has increased in recent years. To meet this demand, NASA has accepted a proposal from spaceflight company NanoRacks to build the new commercial airlock. NanoRacks already has two research platforms permanently installed on the U.S. National Laboratory aboard the ISS and markets its services to space programs, biopharmaceutical firms, high schools and universities.

"We want to utilize the space station to expose the commercial sector to new and novel uses of space, ultimately creating a new economy in low-Earth orbit for scientific research, technology development and human and cargo transportation," said Sam Scimemi, director, ISS Division at NASA Headquarters in Washington. "We hope this new airlock will allow a diverse community to experiment and develop opportunities in space for the commercial sector."

NanoRacks is teaming up with Boeing (a long-time collaborator on ISS projects) to build the airlock. Once it’s completed, NASA plans to launch it on a commercial resupply mission and integrate it in 2019. It will be located on a port in the space station’s Tranquility module, which provides additional room for crew members and many of the station’s life support and environmental control systems.

Tranquility is also home to the Bigelow Expandable Activity Module. BEAM is the first expandable habitat tested in space, and early data suggests it’s performing well (after initially deflating like a leaky bicycle tire, that is). NanoRacks’ Airlock is ostensibly part of NASA’s larger efforts to commercialize the International Space Station. Or, you know, to potentially jettison Xenomorphs.

Via: The Verge

Source: NASA

from Engadget http://ift.tt/2kkv1QB
via IFTTT

Google Brain super-resolution image tech makes “zoom, enhance!” real

Google Brain

Google Brain has devised some new software that can create detailed images from tiny, pixelated source images. Google’s software, in short, means the "zoom in… now enhance!" TV trope is actually possible.

Google Brain

First, take a look at the image on the right. The left column contains the pixelated 8×8 source images, and the centre column shows the images that Google Brain’s software was able to create from those source images. For comparison, the real images are shown in the right column. As you can see, the software seemingly extracts an amazing amount of detail from just 64 source pixels.

Of course, as we all know, it’s impossible to create more detail than there is in the source image—so how does Google Brain do it? With a clever combination of two neural networks.

The first part, the conditioning network, tries to map the 8×8 source image against other high-resolution images. It downsizes other high-res images to 8×8 and tries to make a match.
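To make the downsample-and-match idea concrete, here is a minimal numpy sketch. Note the assumptions: this is the literal nearest-neighbour baseline the article mentions later (brute-force lookup over a dataset), not the actual conditioning network, which is a convolutional neural network; the function names and the average-pooling downsampler are illustrative, not from the paper.

```python
import numpy as np

def downsample_to_8x8(img):
    """Average-pool a square grayscale image down to 8x8.
    Assumes the side length is a multiple of 8."""
    h, w = img.shape
    bh, bw = h // 8, w // 8
    # Split into an 8x8 grid of (bh x bw) blocks and average each block.
    return img.reshape(8, bh, 8, bw).mean(axis=(1, 3))

def nearest_neighbour_match(source_8x8, dataset_highres):
    """Return the high-res dataset image whose 8x8 downsample is
    closest (squared L2 distance) to the 8x8 source image."""
    best, best_dist = None, float("inf")
    for img in dataset_highres:
        d = np.sum((downsample_to_8x8(img) - source_8x8) ** 2)
        if d < best_dist:
            best, best_dist = img, d
    return best

# Toy usage: three random 32x32 "high-res" images; a source derived
# from one of them should match that exact image (distance zero).
rng = np.random.default_rng(0)
dataset = [rng.random((32, 32)) for _ in range(3)]
source = downsample_to_8x8(dataset[1])
match = nearest_neighbour_match(source, dataset)
assert match is dataset[1]
```

At 8×8, many very different faces downsample to nearly identical pixels, which is exactly why a learned network beats this lookup: it can blend structure from many candidates instead of committing to one.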

  • Left column: source image. Other columns: various outputs produced by the neural networks. There’s a bit of variation. Fourth celebrity from the bottom is particularly scary. (Image: Google Brain)

  • Input: left column. Fourth column: the original image. Other columns: various super-resolution techniques. NN = nearest neighbour (looking for a high-res image in the dataset that closely matches the 8×8 image). (Image: Google Brain)

  • Various different super-resolution techniques. The three right-most columns are the Google Brain method. (Image: Google Brain)

The second part, the prior network, uses an implementation of PixelCNN to try and add realistic high-resolution details to the 8×8 source image. Basically, the prior network ingests a large number of high-res real images—of celebrities and bedrooms in this case. Then, when the source image is upscaled, it tries to add new pixels that match what it "knows" about that class of image. For example, if there’s a brown pixel towards the top of the image, the prior network might identify that as an eyebrow: so, when the image is scaled up, it might fill in the gaps with an eyebrow-shaped collection of brown pixels.

To create the final super-resolution image, the outputs from the two neural networks are mashed together. The end result usually contains the plausible addition of new details.
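Per the paper, the "mashing together" happens at the logit level: each network produces, for every pixel, a vector of logits over the 256 possible values, the two logit vectors are summed, and the final pixel is sampled from the resulting softmax. A minimal sketch of that fusion step, with random arrays standing in for the two networks’ outputs:

```python
import numpy as np

def fuse_and_sample(prior_logits, conditioning_logits, rng):
    """Sum per-pixel logits from the prior and conditioning networks,
    then sample each pixel value from the softmax over the 256 values.
    Both inputs have shape (H, W, 256)."""
    logits = prior_logits + conditioning_logits
    # Numerically stable softmax over the value dimension.
    logits -= logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    h, w, v = probs.shape
    out = np.empty((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y, x] = rng.choice(v, p=probs[y, x])
    return out

# Stand-in logits: in the real system these come from the two networks.
rng = np.random.default_rng(0)
img = fuse_and_sample(rng.normal(size=(8, 8, 256)),
                      rng.normal(size=(8, 8, 256)), rng)
```

Because the output is sampled rather than averaged, running the system twice on the same 8×8 input yields different plausible faces, which is the variation visible across the columns in the images above.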

Google Brain’s super-resolution technique was reasonably successful in real-world testing. When human observers were shown a real high-resolution celebrity face vs. the upscaled computed image, they were fooled 10 percent of the time (50 percent would be a perfect score). For the bedroom images, 28 percent of humans were fooled by the computed image. Both scores are much more impressive than normal bicubic scaling, which fooled no human observers.


It’s important to note that the computed super-resolution image is not real. The added details—known as "hallucinations" in image processing jargon—are a best guess and nothing more. This raises some intriguing issues, especially in the realms of surveillance and forensics. This technique could take a blurry image of a suspect and add more detail—zoom! enhance!—but it wouldn’t actually be a real photo of the suspect. It might very well help the police find the suspect, though.

Google Brain and DeepMind are two of Alphabet’s deep learning research arms. The former has published some interesting research recently, such as two AIs creating their own cryptographic algorithm; the latter, of course, was thrust into the limelight last year when its AlphaGo AI defeated the world’s best Go players.

DOI: arXiv:1702.00783 (About DOIs).

This post originated on Ars Technica UK

from Ars Technica http://ift.tt/2kIFpTe
via IFTTT