With Arrival, Ted Chiang Becomes Hollywood’s New Philip K. Dick

The new movie Arrival stars Amy Adams as a linguistics professor who gets recruited by the military to help them communicate with mysterious, octopus-like aliens. Critic Andrew Liptak thinks it’s one of the best films of the year.

“It’s not your typical first-contact movie,” Liptak says in Episode 230 of the Geek’s Guide to the Galaxy podcast. “You’ve got War of the Worlds, Battle: Los Angeles, Edge of Tomorrow, and this year we had [the new] Independence Day. These are all stories where aliens come and blow shit up, and this is not that type of film.”

Writer Carol Pinchefsky agrees that Arrival is exceptional. She gives much of the credit for that to its source material, the novella “Story of Your Life” by science fiction author Ted Chiang.

“I loved ‘Story of Your Life,’” she says. “I thought it was just such a poignant, heartbreaking story, and I wanted to see it brought to film, and it was translated so beautifully.”

In the science fiction world, Ted Chiang is one of the most widely admired authors working today. His writing process is famously slow and painstaking, resulting in an output of less than one piece of short fiction per year, but everything he writes is of unusually high quality. Writer and editor Christopher Cevasco is one of Chiang’s many admirers.

“I’ve actually had these moments where, as a writer myself, I’ll suddenly sit back and almost feel myself getting choked up just being so in awe of how he’s managed to accomplish something he’s been able to accomplish in a story,” Cevasco says. “It’ll be something that’s so elegant and artistically done that sometimes I just have to sit back and marvel at it.”

Geek’s Guide to the Galaxy host David Barr Kirtley is thrilled that Hollywood has finally discovered Ted Chiang, and he hopes to see more of Chiang’s stories adapted in the future.

“After this you kind of think he’s got to become the next Philip K. Dick of Hollywood,” Kirtley says.

Listen to our complete interview with Andrew Liptak, Carol Pinchefsky, and Christopher Cevasco in Episode 230 of Geek’s Guide to the Galaxy (above). And check out some highlights from the discussion below.

David Barr Kirtley on fighting aliens:

“This is another Hollywood cliche, that 21st-century human armies could fight interstellar aliens, which is completely ludicrous beyond all imagination. Not only is their technology probably at least a thousand years in advance of ours, but they’re in orbit—all they have to do is take a medium-size asteroid and drop it on the Earth and destroy the whole planet. You have no chance of victory in this sort of situation, and I’m a little bit concerned that no movie that I can think of has ever acknowledged this reality, so that if aliens ever do come, our military leaders are not actually going to understand this reality, because they’ve just seen all these movies and will be like, ‘Well, we beat the aliens in every movie. Let’s go for it.’”

Christopher Cevasco on the politics of Arrival:

“This is a science fiction movie, and so you always expect a little bit of hand-wavy-ness, but the funny thing is that all of the science fictional elements in the movie didn’t bother me at all. They felt, for the most part, as grounded in reality as those sorts of things could be. The stuff that felt hand-wavy to me was the global politics and the diplomacy going on. The stakes kept getting escalated for reasons that were never particularly clear to me. It was like, ‘Oh, this guy’s going to do this, this guy’s going to do that. Pakistan’s going along with Sudan.’ I never quite understood what all the conflict was about. … It didn’t ring true to me, which is weird, because that was the least science fictional element in the film.”

Andrew Liptak on cinematography:

“It was a really gorgeously shot film. There was a heavy emphasis on the camera panning over some very blank surfaces, and it looked very minimal at points, from her house to the landscape, even just the texture on the alien ship. I’m really attracted to that type of cinematography, where you really just take it very slowly and you have these very wide shots with the people right in the middle. The director also did a lot of shots right on her face—she was centered right in the middle of the shot as things were going on around her. There’s this one gorgeous shot where she sort of looks to the side, and in front of her is the alien sentence, in an arc around her head. I thought it was gorgeous.”

David Barr Kirtley on Ted Chiang’s story “Liking What You See: A Documentary”:

“There’s apparently a real condition you can have, as a result of brain damage or something like that, where you can see people’s faces and recognize them, but you’ve lost the ability to gauge whether they’re beautiful or not. So in this story, there’s a community of people who have all done this to themselves through technology, intentionally, and have done it to their kids as well. So these kids have grown up not being able to judge who among them is attractive. And then the community breaks apart, and they have to go out into the world and deal with what it means to be judged for your attractiveness for the first time.”


from Wired Top Stories http://ift.tt/2fcNnR9
via IFTTT

Physicists Uncover Strange Numbers in Particle Collisions

At the Large Hadron Collider in Geneva, physicists shoot protons around a 17-mile track and smash them together at nearly the speed of light. It’s one of the most finely tuned scientific experiments in the world, but when trying to make sense of the quantum debris, physicists begin with a strikingly simple tool called a Feynman diagram that’s not that different from how a child would depict the situation.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


Feynman diagrams were devised by Richard Feynman in the 1940s. They feature lines representing elementary particles that converge at a vertex (which represents a collision) and then diverge from there to represent the pieces that emerge from the crash. Those lines either shoot off alone or converge again. The chain of collisions can be as long as a physicist dares to consider.

To that schematic physicists then add numbers, for the mass, momentum and direction of the particles involved. Then they begin a laborious accounting procedure—integrate these, add that, square this. The final result is a single number, called a Feynman probability, which quantifies the chance that the particle collision will play out as sketched.

“In some sense Feynman invented this diagram to encode complicated math as a bookkeeping device,” said Sergei Gukov, a theoretical physicist and mathematician at the California Institute of Technology.

Feynman diagrams have served physics well over the years, but they have limitations. One is strictly procedural. Physicists are pursuing increasingly high-energy particle collisions that require greater precision of measurement—and as the precision goes up, so does the intricacy of the Feynman diagrams that need to be calculated to generate a prediction.

The second limitation is of a more fundamental nature. Feynman diagrams are based on the assumption that the more potential collisions and sub-collisions physicists account for, the more accurate their numerical predictions will be. This process of calculation, known as perturbative expansion, works very well for particle collisions of electrons, where the weak and electromagnetic forces dominate. It works less well for high-energy collisions, like collisions between protons, where the strong nuclear force prevails. In these cases, accounting for a wider range of collisions—by drawing ever more elaborate Feynman diagrams—can actually lead physicists astray.

“We know for a fact that at some point it begins to diverge” from real-world physics, said Francis Brown, a mathematician at the University of Oxford. “What’s not known is how to estimate at what point one should stop calculating diagrams.”

Yet there is reason for optimism. Over the last decade physicists and mathematicians have been exploring a surprising correspondence that has the potential to breathe new life into the venerable Feynman diagram and generate far-reaching insights in both fields. It has to do with the strange fact that the values calculated from Feynman diagrams seem to exactly match some of the most important numbers that crop up in a branch of mathematics known as algebraic geometry. These values are called “periods of motives,” and there’s no obvious reason why the same numbers should appear in both settings. Indeed, it’s as strange as it would be if every time you measured a cup of rice, you observed that the number of grains was prime.

“There is a connection from nature to algebraic geometry and periods, and with hindsight, it’s not a coincidence,” said Dirk Kreimer, a physicist at Humboldt University in Berlin.

Now mathematicians and physicists are working together to unravel the coincidence. For mathematicians, physics has called to their attention a special class of numbers that they’d like to understand: Is there a hidden structure to these periods that occur in physics? What special properties might this class of numbers have? For physicists, the reward of that kind of mathematical understanding would be a new degree of foresight when it comes to anticipating how events will play out in the messy quantum world.

A Recurring Theme

Today periods are one of the most abstract subjects of mathematics, but they started out as a more concrete concern. In the early 17th century scientists such as Galileo Galilei were interested in figuring out how to calculate the length of time a pendulum takes to complete a swing. They realized that the calculation boiled down to taking the integral—a kind of infinite sum—of a function that combined information about the pendulum’s length and angle of release. Around the same time, Johannes Kepler used similar calculations to establish the time that a planet takes to travel around the sun. They called these measurements “periods,” and established them as one of the most important measurements that can be made about motion.
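The pendulum calculation described above, reduced to its integral, fits in a few lines. A minimal numerical sketch in Python (the length, release angle, and step count are arbitrary choices for illustration, not values from the article):

```python
import math

def pendulum_period(length_m, theta0_rad, g=9.81, steps=100_000):
    """Period of a pendulum released from angle theta0_rad, via the
    classical integral T = 4*sqrt(L/g) * K(k) with k = sin(theta0/2),
    evaluated here with simple midpoint quadrature."""
    k = math.sin(theta0_rad / 2)
    h = (math.pi / 2) / steps
    total = sum(
        h / math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
        for i in range(steps)
    )
    return 4 * math.sqrt(length_m / g) * total

# Small swings reproduce the textbook 2*pi*sqrt(L/g); large swings take longer.
print(pendulum_period(1.0, math.radians(5)))   # ~2.007 s
print(pendulum_period(1.0, math.radians(90)))  # ~2.37 s
```

For small release angles this converges to the familiar small-angle formula, which is the “period” Galileo’s contemporaries were after.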

A POTENTIAL SHORTCUT TO PREDICTIONS
Lucy Reading-Ikkanda/Quanta Magazine

Over the course of the 18th and 19th centuries, mathematicians became interested in studying periods generally—not just as they related to pendulums or planets, but as a class of numbers generated by integrating polynomial functions like x² + 2x – 6 and 3x³ – 4x² – 2x + 6. For more than a century, luminaries like Carl Friedrich Gauss and Leonhard Euler explored the universe of periods and found that it contained many features that pointed to some underlying order. In a sense, the field of algebraic geometry—which studies the geometric forms of polynomial equations—developed in the 20th century as a means for pursuing that hidden structure.
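For a sense of how unassuming a period can be, here is a classical example (not from the article): log 2 arises as the integral of the rational function 1/x over the interval [1, 2], approximated below with midpoint quadrature.

```python
# log(2) as a period: the integral of 1/x over [1, 2],
# approximated with simple midpoint quadrature.
steps = 100_000
h = 1.0 / steps
log2 = sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))
print(log2)  # ~0.693147
```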

This effort advanced rapidly in the 1960s. By that time mathematicians had done what they often do: They translated relatively concrete objects like equations into more abstract ones, which they hoped would allow them to identify relationships that were not initially apparent.

This process first involved looking at the geometric objects (known as algebraic varieties) defined by the solutions to classes of polynomial functions, rather than looking at the functions themselves. Next, mathematicians tried to understand the basic properties of those geometric objects. To do that they developed what are known as cohomology theories—ways of identifying structural aspects of the geometric objects that were the same regardless of the particular polynomial equation used to generate the objects.

By the 1960s, cohomology theories had proliferated to the point of distraction—singular cohomology, de Rham cohomology, étale cohomology and so on. Everyone, it seemed, had a different view of the most important features of algebraic varieties.

It was in this cluttered landscape that the pioneering mathematician Alexander Grothendieck, who died in 2014, realized that all cohomology theories were different versions of the same thing.

“What Grothendieck observed is that, in the case of an algebraic variety, no matter how you compute these different cohomology theories, you always somehow find the same answer,” Brown said.

That same answer—the unique thing at the center of all these cohomology theories—was what Grothendieck called a “motive.” “In music it means a recurring theme. For Grothendieck a motive was something which is coming again and again in different forms, but it’s really the same,” said Pierre Cartier, a mathematician at the Institute of Advanced Scientific Studies outside Paris and a former colleague of Grothendieck’s.

Motives are in a sense the fundamental building blocks of polynomial equations, in the same way that prime factors are the elemental pieces of larger numbers. Motives also have their own data associated with them. Just as you can break matter into elements and specify characteristics of each element — its atomic number and atomic weight and so forth — mathematicians ascribe essential measurements to a motive. The most important of these measurements are the motive’s periods. And if the period of a motive arising in one system of polynomial equations is the same as the period of a motive arising in a different system, you know the motives are the same.

“Once you know the periods, which are specific numbers, that’s almost the same as knowing the motive itself,” said Minhyong Kim, a mathematician at Oxford.

One direct way to see how the same period can show up in unexpected contexts is with pi, “the most famous example of getting a period,” Cartier said. Pi shows up in many guises in geometry: in the integral of the function that defines the one-dimensional circle, in the integral of the function that defines the two-dimensional circle, and in the integral of the function that defines the sphere. That this same value would recur in such seemingly different-looking integrals was likely mysterious to ancient thinkers. “The modern explanation is that the sphere and the solid circle have the same motive and therefore have to have essentially the same period,” Brown wrote in an email.
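Pi’s appearance as a period can be checked numerically: the area of the unit disk is the integral of 2√(1 − x²) over [−1, 1]. A small Python sketch (the step count is an arbitrary choice):

```python
import math

# pi as a period: the area of the unit disk is the integral of
# 2*sqrt(1 - x^2) over [-1, 1], approximated by midpoint quadrature.
steps = 200_000
h = 2.0 / steps
pi_estimate = 0.0
for i in range(steps):
    x = -1.0 + (i + 0.5) * h  # midpoint of the i-th subinterval
    pi_estimate += h * 2.0 * math.sqrt(1.0 - x * x)
print(pi_estimate)  # ~3.141593
```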

Feynman’s Arduous Path

If curious minds long ago wanted to know why values like pi crop up in calculations on the circle and the sphere, today mathematicians and physicists would like to know why those values arise out of a different kind of geometric object: Feynman diagrams.

Feynman diagrams have a basic geometric aspect to them, formed as they are from line segments, rays and vertices. To see how they’re constructed, and why they’re useful in physics, imagine a simple experimental setup in which an electron and a positron collide to produce a muon and an antimuon. To calculate the probability of that result taking place, a physicist would need to know the mass and momentum of each of the incoming particles and also something about the path the particles followed. In quantum mechanics, the path a particle takes can be thought of as the average of all the possible paths it might take. Computing that path becomes a matter of taking an integral, known as a Feynman path integral, over the set of all paths.

Every route a particle collision could follow from beginning to end can be represented by a Feynman diagram, and each diagram has its own associated integral. (The diagram and its integral are one and the same.) To calculate the probability of a specific outcome from a specific set of starting conditions, you consider all possible diagrams that could describe what happens, take each integral, and add those integrals together. That sum is the amplitude. Physicists then square the magnitude of this number to get the probability.
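The bookkeeping just described (sum the integrals from all contributing diagrams, then square the magnitude of the total) can be sketched in Python. The complex values below are invented purely for illustration and are not real QED results:

```python
# Each Feynman diagram contributes one complex number (the value of its
# integral). These particular values are made up to illustrate the procedure.
diagram_integrals = [
    0.62 + 0.10j,    # tree-level diagram
    -0.07 + 0.03j,   # a one-loop correction
    0.004 - 0.002j,  # a two-loop correction
]

amplitude = sum(diagram_integrals)  # add all the integrals together
probability = abs(amplitude) ** 2   # square the magnitude of the sum
print(probability)
```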

This procedure is easy to execute for an electron and a positron going in and a muon and an antimuon coming out. But that’s boring physics. The experiments that physicists really care about involve Feynman diagrams with loops. Loops represent situations in which particles emit and then reabsorb additional particles. When an electron collides with a positron, there’s an infinite number of intermediate collisions that can take place before the final muon and antimuon pair emerges. In these intermediate collisions, new particles like photons are created and annihilated before they can be observed. The entering and exiting particles are the same as previously described, but the fact that those unobservable collisions happen can still have subtle effects on the outcome.

“It’s like Tinkertoys. Once you draw a diagram you can connect more lines according to the rules of the theory,” said Flip Tanedo, a physicist at the University of California, Riverside. “You can connect more sticks, more nodes, to make it more complicated.”

By considering loops, physicists increase the precision of their experiments. (Adding a loop is like calculating a value to a greater number of significant digits). But each time they add a loop, the number of Feynman diagrams that need to be considered—and the difficulty of the corresponding integrals—goes up dramatically. For example, a one-loop version of a simple system might require just one diagram. A two-loop version of the same system needs seven diagrams. Three loops demand 72 diagrams. Increase it to five loops, and the calculation requires around 12,000 integrals—a computational load that can literally take years to resolve.

Rather than chugging through so many tedious integrals, physicists would love to gain a sense of the final amplitude just by looking at the structure of a given Feynman diagram—just as mathematicians can associate periods with motives.

“This procedure is so complex and the integrals are so hard, so what we’d like to do is gain insight about the final answer, the final integral or period, just by staring at the graph,” Brown said.

A Surprising Connection

Periods and amplitudes were presented together for the first time in 1994 by Kreimer and David Broadhurst, a physicist at the Open University in England, with a paper following in 1995. The work led mathematicians to speculate that all amplitudes were periods of mixed Tate motives—a special kind of motive named after John Tate, emeritus professor at Harvard University, in which all the periods are multiple values of one of the most influential constructions in number theory, the Riemann zeta function. In the situation with an electron-positron pair going in and a muon-antimuon pair coming out, the main part of the amplitude comes out as six times the Riemann zeta function evaluated at three.
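The number quoted at the end of this paragraph is concrete enough to evaluate directly; a quick sketch (the series truncation is an arbitrary choice):

```python
def zeta(s, terms=200_000):
    """Riemann zeta via its Dirichlet series; adequate for real s >= 2."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

# The leading part of the electron-positron -> muon-antimuon amplitude
# described above: six times the zeta function evaluated at three.
print(6 * zeta(3))  # ~7.212341 (zeta(3) is Apery's constant, ~1.202057)
```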

Thrown for a loop
Lucy Reading-Ikkanda/Quanta Magazine

If all amplitudes were multiple zeta values, it would give physicists a well-defined class of numbers to work with. But in 2012 Brown and his collaborator Oliver Schnetz proved that’s not the case. While all the amplitudes physicists come across today may be periods of mixed Tate motives, “there are monsters lurking out there that throw a spanner into the works,” Brown said. Those monsters are “certainly periods, but they’re not the nice and simple periods people had hoped for.”

What physicists and mathematicians do know is that there seems to be a connection between the number of loops in a Feynman diagram and a notion in mathematics called “weight.” Weight is a number related to the dimension of the space being integrated over: A period integral over a one-dimensional space can have a weight of 0, 1 or 2; a period integral over a two-dimensional space can have weight up to 4, and so on. Weight can also be used to sort periods into different types: All periods of weight 0 are conjectured to be algebraic numbers, which can be the solutions to polynomial equations (this has not been proved); the period of a pendulum always has a weight of 1; pi is a period of weight 2; and the weights of values of the Riemann zeta function are always twice the input (so the zeta function evaluated at 3 has a weight of 6).

This classification of periods by weights carries over to Feynman diagrams, where the number of loops in a diagram is somehow related to the weight of its amplitude. Diagrams with no loops have amplitudes of weight 0; the amplitudes of diagrams with one loop are all periods of mixed Tate motives and have, at most, a weight of 4. For graphs with additional loops, mathematicians suspect the relationship continues, even if they can’t see it yet.

“We go to higher loops and we see periods of a more general type,” Kreimer said. “There mathematicians get really interested because they don’t understand much about motives that are not mixed Tate motives.”

Mathematicians and physicists are currently going back and forth trying to establish the scope of the problem and craft solutions. Mathematicians suggest functions (and their integrals) to physicists that can be used to describe Feynman diagrams. Physicists produce configurations of particle collisions that outstrip the functions mathematicians have to offer. “It’s quite amazing to see how fast they’ve assimilated quite technical mathematical ideas,” Brown said. “We’ve run out of classical numbers and functions to give to physicists.”

Nature’s Groups

Since the development of calculus in the 17th century, numbers arising in the physical world have informed mathematical progress. Such is the case today. The fact that the periods that come from physics are “somehow God-given and come from physical theories means they have a lot of structure and it’s structure a mathematician wouldn’t necessarily think of or try to invent,” said Brown.

Adds Kreimer, “It seems so that the periods which nature wants are a smaller set than the periods mathematics can define, but we cannot define very cleanly what this subset really is.”

Brown is looking to prove that there’s a kind of mathematical group—a Galois group—acting on the set of periods that come from Feynman diagrams. “The answer seems to be yes in every single case that’s ever been computed,” he said, but proof that the relationship holds categorically is still in the distance. “If it were true that there were a group acting on the numbers coming from physics, that means you’re finding a huge class of symmetries,” Brown said. “If that’s true, then the next step is to ask why there’s this big symmetry group and what possible physics meaning could it have.”

Among other things, it would deepen the already provocative relationship between fundamental geometric constructions from two very different contexts: motives, the objects that mathematicians devised 50 years ago to understand the solutions to polynomial equations, and Feynman diagrams, the schematic representation of how particle collisions play out. Every Feynman diagram has a motive attached to it, but what exactly the structure of a motive is saying about the structure of its related diagram remains anyone’s guess.



from Wired Top Stories http://ift.tt/2gs5xjI
via IFTTT

Top 10 DIY Projects That Will Teach You A Ton About Tech

DIY isn’t always the easiest way to do something, but it’s usually the most informative and educational one. This week, let’s check out some great DIY tech projects that’ll teach you a ton about the tools you probably use every day—and protect your privacy and give you control over your own data in the process.

10. Build Your Own DIY Amazon Echo

It may not be the most cost-effective option, but building your own Amazon Echo using a Raspberry Pi will not only teach you a lot about how the Echo works, but also how the Raspberry Pi works and how much of its potential you can unlock.

We’ve shown you how to do it before, and even shared Amazon’s official guide for doing so, and how to set a wake word for it. The beauty of this project, though, is that unlike an Echo, which you buy and just let work, a Pi can be repurposed and reused for whatever you want in addition to being an Echo. The whole project is something you can do in an afternoon, and you’ll learn a lot in the process.

9. Add Wi-Fi to Your DSLR Camera

If you have a DSLR, or even a mirrorless camera, you probably love using it but have to remember to transfer the photos or video from your SD card to your computer manually for processing or sharing. You can cut out the middleman by giving your DSLR a Wi-Fi upgrade. All you need is the right SD card and a little setup time.

Even if that model isn’t right for you, there are plenty of others in the guide that work just as well, and in the process will teach you a good bit about backing up your photos and streamlining the editing process while you make sure you never miss a shot. Alternatively, you can always pair your camera with your smartphone instead, both on iOS and over on Android.

8. Build Your Own Private, Syncing Cloud Storage Service

Dropbox is simple and easy, and everyone has an account, but whether you prefer to have complete control over your own data at all times, or you just want to understand how cloud services like Dropbox work, it’s not hard to roll your own Dropbox clone. If you have a web host, you could do it there, or you could roll your own with a Raspberry Pi and learn about the apps that make file sharing possible and about the Pi in one fell swoop.

You’ll just need to expand your knowledge of networking and file sharing a bit, and try out a couple of apps to help get the job done. You could also use Resilio (formerly BitTorrent Sync) to do the job too, and there are tons of other options. The important thing is that you’ll learn a lot in the process, do your own thing, and protect your own data.

7. Roll Your Own Image Hosting Service

Services like Google Photos and even Facebook are certainly the most popular places to host your images, but like everything else, they put your precious memories beyond your true control and at the mercy of someone else’s terms of service, privacy policy, and so on. So consider rolling your own! We have a bunch of great tools to help you do it, including the Gallery Project, JAlbum, and more—assuming you still want to be able to share those photos on the web when you want.

If you’d rather control your own galleries and keep them organized on your own, you could always auto-sync with the app or tool of your choice and then manage everything locally, but keeping a solid backup offsite is a good idea just in case you lose your own data, even if that backup is something you set up and control yourself.

6. Build Your Own Streaming Dash Cam

Dash cameras aren’t the near-necessity in North America that they are in some other countries, but they’re fun to watch and can capture some amazing moments. Best of all, they’re not difficult to make on your own, and you can learn a ton about electronics, mobile recording, and more in the process. You could just grab an old smartphone you’ve upgraded from to do the job.

If you’re willing to put a Raspberry Pi to good use, grab one of those and give your dash cam live streaming capabilities. Then, when you’re ready, you could make your setup even more elaborate with a pair of cameras, GPS capabilities, and even status LEDs. Start small and work your way up.

5. Roll Your Own Home Theater PC

Building your own home theater PC is one of the ultimate tech projects you can tackle. It combines an understanding of software, storage, online and streaming media, downloadable music and movies, and the sources for all of that stuff with the nuts and bolts of building a system that’ll fit in your living room, connect to your TV, and serve it all up when you want to sit down and watch.

Sure, HTPCs have fallen by the wayside now that tiny, cheap set-top boxes are available, but for the ultimate in control—and for learning everything you could possibly want to know about your media collection, home network, and PC gear—there’s nothing like doing it yourself. We have a ton of guides on how to do it, including our recommended (if somewhat dated) setup, and some tips to help you even after you’re all set up. This all reminds me that we should update our own HTPC recommendations, and do it ourselves, too.

4. Roll Your Own VPN

We talk a lot about how valuable a good VPN is, and how you can find a trustworthy one that actually meets all of your needs—but sometimes the best option is the DIY option. In this case, you’ll still need to trust your ISP, but no more than you do now, since you’ll run your VPN from home, connect to it when you’re out and about, and hide your surfing from prying eyes wherever you go.

If you have an old Mac, it’s super easy to do with macOS Server, and if you have a Raspberry Pi, you can use one of those too. You can even combine your Pi VPN with Tor for added anonymity on top of the security. If you have neither, grab an old PC or laptop and try Amahi; it’ll get the job done too.

3. Build A Fully-Functional Arcade Table or Cabinet

If you’re interested in learning not just a lot about tech, but about carpentry and DIY woodworking, an arcade cabinet is your best bet. We have a couple of starter guides on how to make one out of your coffee table, and some tips to inspire you to tackle it, but if you don’t want yours in the center of the living room, this IKEA hack will get you a good-looking one off to the side of your office or game room, and this one is even two-player.

If that’s not your style, you can always turn an old PC into your personal arcade, even if you don’t stand it up in a cabinet. And as always, if you have a Raspberry Pi, it’s perfect for that too. Just make sure you pick the right software for your needs, and you’ll have fun making it, and then even more fun playing it.

2. Host Your Own RSS Feed Reader

Feedly is great, don’t get us wrong, but many of us still lament the death of the simple, elegant, just-right-feature-wise Google Reader. For those folks, there’s no getting around it: You just have to do it yourself to have the kind of control you want.

Of course, we showed you how to roll your own using Tiny Tiny RSS, and it even has a mobile app so you can read on the go. It’s not alone, though, and you’ll learn a lot about how the web works and keep up with your favorite blogs (like this one!) if you try it out.

1. Build Your Own Computer

There’s nothing like building your own PC to teach you not just about how computers work, but also how to troubleshoot your own computer problems and be self-sufficient if you have issues.

You’ll learn all about the internals of your PC by building your own rig, and if you’re a gamer, video producer, YouTuber, or audio enthusiast, you can customize your computer to suit your needs specifically. It’s way better than buying off the shelf—and depending on your needs, may even be more cost-effective too. Stick close to our PC build guides (we’re updating them right now), and keep these first-timer tips in mind to make the most of the experience.

Illustration by Angelica Alzona. Additional photos by Shinichi Haramizu and ayaita.


Lifehacker’s Weekend Roundup gathers our best guides, explainers, and other posts on a certain subject so you can tackle big projects with ease. For more, check out our Weekend Roundup and Top 10 tags.

from Lifehacker http://ift.tt/2gqgq5j
via IFTTT

Finally, A Little Drone Made Specifically For Taking Selfies


This is the AirSelfie, a little $190 drone made specifically for taking selfies. It has a 5-megapixel camera, shoots both still shots and video, is controlled via an iOS/Android smartphone app, sends photos directly to your phone through Wi-Fi, and can fly for about five minutes before needing a charge. Alternatively, use the selfie stick you’ve got. It’s way cheaper and simpler. Plus you won’t get mad at yourself if you break it or lose it because it cost like five dollars. Maybe less. I got one in a gift bag at a birthday party I wasn’t even invited to.

Keep going for their Kickstarter video.


Thanks to Jenness, who agrees it’s never not AirSelfie hunting season. You don’t even need a license.

Related

The city of Sugar Land, Texas recently installed this statue of two girls taking a selfie outside City Hall. And now residents are pissed because 1. it’s…

June 3, 2016

This is a video of the Half-Life 2 city scanner drone built-from-scratch by Russian Valentin Demchenko. It’s really impressive and it’s only a matter of time until…

June 1, 2016

This is the Bridal Selfie Stick introduced by fashion designer Reem Acra at her recent 2016 fall bridal show (I’m just trying to ignore the headphones). It’s…

October 14, 2015


from Geekologie – Gadgets, Gizmos, and Awesome http://ift.tt/2fkkoLk
via IFTTT

Apple replacing a small number of iPhone 6s batteries

Apple has let a cat out of its bag, the cat in this case being that there’s a problem with some iPhone 6s models. According to the company, a battery fault is causing a "very small number" of handsets to shut down unexpectedly. If you’re rocking a device that was manufactured between September and October 2015, you’re eligible for a replacement. Simply head down to your local Apple Store or authorized service provider to have your serial number checked and, if you qualify, you’ll get a replacement battery.

Also, if you’ve already encountered this problem and paid for a battery replacement out of your own pocket, Apple will refund you. It’s not the first issue the firm has had to address in recent weeks; the other is the iPhone 6 Plus’ "Touch Disease," where a flickering gray bar appears across the top of the screen and multitouch issues render the device unusable until it’s repaired. In that case, though, the company still wants $149 for its trouble: you don’t get $231.5 billion in your back pocket by being generous.

Via: TechCrunch

Source: Apple

from Engadget http://ift.tt/2gukhOY
via IFTTT

Something really crazy is happening in the Arctic

At a time when sea ice should be expanding, it’s actually shrinking

A crazy decrease in sea ice

Changes in the concentration of Arctic sea ice between Nov. 12 and 19, 2016 are seen in this animation of satellite data. The North Pole is at the center. Areas with 100 percent coverage of ice are depicted in white. Lighter to darker blue tones are indicative of decreasing concentrations. And areas with no ice are in gray. Ice actually decreased within the area circled in red in the first frame of the animation. (Data: University of Bremen. Images: Polar View. Animation: Tom Yulsman)

Sea ice in the Arctic has been trending at record low levels since the third week of October — and now, something really crazy is happening up there.

The Arctic is heading into the dead of winter, and across a vast swath of territory, the polar night has descended, with 24 hours of darkness each day. This is when temperatures should be plunging, and sea ice should be expanding rapidly.

Instead, temperatures are soaring, and sea ice is actually shrinking. 

This shouldn’t be happening.

To be clear, sea ice is growing in some areas. But since Nov. 16th, the overall trend has been downward.

That’s largely because sea ice has been contracting significantly in the Arctic Ocean adjacent to the island archipelagos of Svalbard and Franz Josef Land. You can watch it happen in the Tweeted animation above, and also in the animation at the top of this story. Both are based on data acquired by the AMSR2 instrument aboard Japan’s GCOM-W satellite.

Crazy decrease in sea ice

Arctic sea ice extent is charted here for each year since 1979. This year’s extent was already trending at record-low levels. Now, ice coverage has stopped expanding completely — and has begun to shrink. (Source: NSIDC)

What the heck is going on?

Against the backdrop of human-caused global warming — a phenomenon that has caused the Arctic to warm twice as fast as any other region on Earth — the region is having a particularly difficult time cooling down this winter. As Bob Henson puts it in an in-depth story titled “Crazy Cryosphere” at Weather Underground’s Category-6 blog:

Temperatures north of 80°N smashed records for warmth throughout the winter of 2015-16. Now they’re on an even more torrid pace.

Here’s one way to think of what’s happening: Natural variation has long brought both cooler and warmer than average temperatures to the Arctic in all seasons. But now, we humans are putting our finger on the scale, tipping the balance in favor of excess warmth.

And right now, we’re talking crazy warm:

Crazy high temperatures in the Arctic

Temperature departures from normal, as forecast by the GFS weather model for the five-day period beginning Sunday, Nov. 20. (Source: Climate Reanalyzer/University of Maine)

In the map above, the brightest red tones indicate temperatures more than 35 degrees Fahrenheit above normal. For areas of the Barents Sea near Svalbard and Franz Josef Land, that has meant above-freezing air temperatures near the surface.

A complex set of factors seems to be at play.

Back in August, two large storms in the Arctic helped break up sea ice and may have stirred up warm water from the depths, according to Ted Scambos, a senior research scientist at the National Snow and Ice Data Center. This helped drive Arctic sea ice to a near-record-low in September.

SEE ALSO: Arctic sea ice trends near a record low for this time of year

In fact, record lows have been set for Arctic sea ice on 160 days so far this year, according to Zack Labe, a PhD student studying Arctic climate at the University of California, Irvine.

This means vast swaths of the ocean surface were exposed to sunlight for a longer period than normal. Ice is very effective at reflecting solar energy. But dark surface waters absorb it readily. So water already warmed by the stirring action of the storms was able to get warmer still, thereby hindering formation of sea ice.
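The ice-versus-water difference comes down to albedo, the fraction of incoming sunlight a surface reflects. A quick back-of-the-envelope calculation shows why losing ice cover matters so much; the albedo values below are rough textbook figures I’m assuming for illustration (roughly 0.6 for bare sea ice, 0.06 for open ocean), not numbers from this article.

```python
# Assumed, approximate albedo values for illustration only.
ALBEDO_ICE = 0.6     # bare sea ice reflects ~60% of sunlight
ALBEDO_WATER = 0.06  # dark open ocean reflects only ~6%

def absorbed_fraction(albedo):
    """Fraction of incoming solar energy a surface absorbs."""
    return 1.0 - albedo

ratio = absorbed_fraction(ALBEDO_WATER) / absorbed_fraction(ALBEDO_ICE)
print(f"Open water absorbs roughly {ratio:.2f}x the solar energy of sea ice")
# 0.94 / 0.40 = 2.35
```

Under these assumptions, open water soaks up more than twice the solar energy that ice would, which is the feedback loop at work here: less ice means warmer water, which in turn means less ice.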

Now, those waters are giving off some of that energy to the atmosphere, probably accounting in part for the unusual air temperatures at the surface right now. At the same time, atmospheric circulation patterns have been pumping warm air into the region from the south, apparently slowing the cool-down of ocean waters.

The result: a truly bizarre situation in the Arctic.

Meanwhile, a large portion of Eurasia has been crazy cold and snowy. You can see this in the blue and purple colors in the temperature anomaly map above. As Bob Henson of Category-6 explains:

It’s as if the hemisphere’s entire allotment of chilly, snowy weather has been rounded up and consigned to one area, albeit a big one. For this, we can credit or blame what’s called a “wave one” pattern, where the upper-level circulation around the North Pole is dominated by a single elongated loop, shunted in this case toward the Eurasian side.

Meanwhile, at the other pole, Antarctic sea ice is also trending at record-low levels. Head over to Henson’s post for more details about that. Here’s the link again: http://bit.ly/2eUJrG1

from Discover Main Feed http://ift.tt/2fSMHOU
via IFTTT