Computer Scientists Close In on Perfect, Hack-Proof Code

In the summer of 2015 a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. The helicopter, which is similar to the piloted version long-favored for US special operations missions, was stationed at a Boeing facility in Arizona. The hackers had a head start: At the time they began the operation, they already had access to one part of the drone’s computer system. From there, all they needed to do was hack into Little Bird’s onboard flight-control computer, and the drone was theirs.

When the project started, a “Red Team” of hackers could have taken over the helicopter almost as easily as it could break into your home Wi-Fi. But in the intervening months, engineers from the Defense Advanced Research Projects Agency had implemented a new kind of security mechanism—a software system that couldn’t be commandeered. Key parts of Little Bird’s computer system were unhackable with existing technology, its code as trustworthy as a mathematical proof. Even though the Red Team was given six weeks with the drone and more access to its computing network than genuine bad actors could ever expect to attain, they failed to crack Little Bird’s defenses.

“They were not able to break out and disrupt the operation in any way,” said Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “That result made all of Darpa stand up and say, oh my goodness, we can actually use this technology in systems we care about.”

The technology that repelled the hackers was a style of software programming known as formal verification. Unlike most computer code, which is written informally and evaluated based mainly on whether it works, formally verified software reads like a mathematical proof: Each statement follows logically from the preceding one. An entire program can be tested with the same certainty that mathematicians prove theorems.

“You’re writing down a mathematical formula that describes the program’s behavior and using some sort of proof checker that’s going to check the correctness of that statement,” said Bryan Parno, who does research on formal verification and security at Microsoft Research.

The aspiration to create formally verified software has existed nearly as long as the field of computer science. For a long time it seemed hopelessly out of reach, but advances over the past decade in so-called “formal methods” have inched the approach closer to mainstream practice. Today formal software verification is being explored in well-funded academic collaborations, the US military and technology companies such as Microsoft and Amazon.

The renewed interest comes as an increasing number of vital social tasks are transacted online. Previously, when computers were isolated in homes and offices, programming bugs were merely inconvenient. Now those same small coding errors open massive security vulnerabilities on networked machines, giving anyone with the know-how free rein inside a computer system.

“Back in the 20th century, if a program had a bug, that was bad, the program might crash, so be it,” said Andrew Appel, professor of computer science at Princeton University and a leader in the program verification field. But in the 21st century, a bug could create “an avenue for hackers to take control of the program and steal all your data. It’s gone from being a bug that’s bad but tolerable to a vulnerability, which is much worse,” he said.

The Dream of Perfect Programs

In October 1973 Edsger Dijkstra came up with an idea for creating error-free code. While staying in a hotel at a conference, he found himself seized in the middle of the night by the idea of making programming more mathematical. As he explained in a later reflection, “With my brain burning, I left my bed at 2:30 a.m. and wrote for more than an hour.” That material served as the starting point for his seminal 1976 book, “A Discipline of Programming,” which, together with work by Tony Hoare (who, like Dijkstra, received the Turing Award, computer science’s highest honor), established a vision for incorporating proofs of correctness into how computer programs are written.

Kathleen Fisher, a computer scientist at Tufts University, leads the High-Assurance Cyber Military Systems (HACMS) project. Courtesy of Kelvin Ma/Tufts University

It’s not a vision that computer science followed, largely because for many years afterward it seemed impractical—if not impossible—to specify a program’s function using the rules of formal logic.

A formal specification is a way of defining what, exactly, a computer program does. And a formal verification is a way of proving beyond a doubt that a program’s code perfectly achieves that specification. To see how this works, imagine writing a computer program for a robot car that drives you to the grocery store. At the operational level, you’d define the moves the car has at its disposal to achieve the trip—it can turn left or right, brake or accelerate, turn on or off at either end of the trip. Your program, as it were, would be a compilation of those basic operations arranged in the appropriate order so that at the end, you arrived at the grocery store and not the airport.

The traditional, simple way to see if a program works is to test it. Coders submit their programs to a wide range of inputs (or unit tests) to ensure they behave as designed. If your program were an algorithm that routed a robot car, for example, you might test it between many different sets of points. This testing approach produces software that works correctly, most of the time, which is all we really need for most applications. But unit testing can’t guarantee that software will always work correctly because there’s no way to run a program through every conceivable input. Even if your driving algorithm works for every destination you test it against, there’s always the possibility that it will malfunction under some rare conditions—or “corner cases,” as they’re called—and open a security gap. In actual programs, these malfunctions could be as simple as a buffer overflow error, where a program copies a little more data than it should and overwrites a small piece of the computer’s memory. It’s a seemingly innocuous error that’s hard to eliminate and provides an opening for hackers to attack a system—a weak hinge that becomes the gateway to the castle.

“One flaw anywhere in your software, and that’s the security vulnerability. It’s hard to test every possible path of every possible input,” Parno said.
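
A contrived sketch makes the point concrete (Python; the function, its planted bug, and the trigger value are all invented for illustration): a program that is wrong on exactly one input will sail through enormous numbers of random tests.

```python
import random

def buggy_abs(x):
    """Absolute value with one deliberately planted corner case."""
    if x == 1_234_567_891:       # one bad input out of ~4.3 billion 32-bit values
        return -x
    return abs(x)

# Random testing: 100,000 draws from the 32-bit range have only about a
# 0.002 percent chance of ever hitting the bad value, so the test suite
# passes and the bug survives.
random.seed(0)
failures = sum(
    1 for _ in range(100_000)
    if buggy_abs(random.randrange(-2**31, 2**31)) < 0
)

# Formal verification would instead demand a proof that buggy_abs(x) >= 0
# for *every* x, a proof this function cannot have.
```

Exhaustively checking all 2^32 inputs is still feasible here, but for a function of two 64-bit arguments the input space is 2^128, far beyond any test suite.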

Actual specifications are subtler than a trip to the grocery store. Programmers may want to write a program that notarizes and time-stamps documents in the order in which they’re received (a useful tool in, say, a patent office). In this case the specification would need to explain that the counter always increases (so that a document received later always has a higher number than a document received earlier) and that the program will never leak the key it uses to sign the documents.

This is easy enough to state in plain English. Translating the specification into a formal language that a computer can apply is much harder, and doing so is one of the main challenges of writing any piece of software this way.

“Coming up with a formal machine-readable specification or goal is conceptually tricky,” Parno said. “It’s easy to say at a high level ‘don’t leak my password,’ but turning that into a mathematical definition takes some thinking.”
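
A rough sketch of those two notarization properties in executable form (Python; the `Notary` class, its method names, and its key handling are hypothetical, invented here purely for illustration):

```python
import hashlib

class Notary:
    """Hypothetical notarization service capturing the two spec properties."""

    def __init__(self, key: bytes):
        self._key = key        # signing key: must never appear in any output
        self._counter = 0      # time-stamp counter: must strictly increase

    def stamp(self, document: bytes) -> tuple[int, str]:
        self._counter += 1     # property 1: a later document always gets a higher number
        # property 2: the signature depends on the key without revealing it
        signature = hashlib.sha256(self._key + document).hexdigest()
        return self._counter, signature

notary = Notary(b"secret-key")
n1, _ = notary.stamp(b"first filing")
n2, _ = notary.stamp(b"second filing")
assert n1 < n2   # holds for this pair; a formal proof would cover every pair
```

A real implementation would use a proper keyed MAC (Python's `hmac` module) rather than a bare hash, and formal verification would demand a machine-checked proof that both properties hold on every execution path, not just the spot check shown here.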

To take another example, consider a program for sorting a list of numbers. A programmer trying to formalize a specification for a sort program might come up with something like this:

For every index j in the list, ensure that element[j] ≤ element[j + 1]

Yet this formal specification—ensure that every element in a list is less than or equal to the element that follows it—contains a bug: The programmer assumes that the output will be a permutation of the input. That is, given the list [7, 3, 5], she expects that the program will return [3, 5, 7] and satisfy the definition. Yet the list [1, 2] also satisfies the definition since “it is a sorted list, just not the sorted list we were probably hoping for,” Parno said.

In other words, it’s hard to translate an idea you have for what a program should do into a formal specification that forecloses every possible (but incorrect) interpretation of what you want the program to do. And the example above is for something as simple as a sort program. Now imagine taking something much more abstract than sorting, such as protecting a password. “What does that mean mathematically? Defining it may involve writing down a mathematical description of what it means to keep a secret, or what it means for an encryption algorithm to be secure,” Parno said. “These are all questions we, and many others, have looked at and made progress on, but they can be quite subtle to get right.”
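
The gap between the weak specification and the intended one can be made concrete in a few lines (Python, purely as illustration; the checker functions are invented here):

```python
def weak_spec(output):
    """The buggy specification: every element <= its successor."""
    return all(output[j] <= output[j + 1] for j in range(len(output) - 1))

def full_spec(inp, output):
    """Sortedness plus the missing clause: output is a permutation of input."""
    return weak_spec(output) and sorted(inp) == sorted(output)

inp = [7, 3, 5]
print(weak_spec([3, 5, 7]))        # True: the answer we wanted
print(weak_spec([1, 2]))           # True: sorted, but not our list
print(full_spec(inp, [1, 2]))      # False: the permutation clause rejects it
print(full_spec(inp, [3, 5, 7]))   # True
```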

Block-Based Security

Counting the lines needed for the specification plus the extra annotations that help the proof software reason about the code, a formally verified program can be five times as long as a traditional program written to achieve the same end.

This burden can be alleviated somewhat with the right tools—programming languages and proof-assistant programs designed to help software engineers construct bombproof code. But those didn’t exist in the 1970s. “There were many parts of science and technology that just weren’t mature enough to make that work, and so around 1980, many parts of the computer science field lost interest in it,” said Appel, who is the lead principal investigator of a research group called DeepSpec that’s developing formally verified computer systems.

Even as the tools improved, another hurdle stood in the way of program verification: No one was sure whether it was even necessary. While formal methods enthusiasts talked of small coding errors manifesting as catastrophic bugs, everyone else looked around and saw computer programs that pretty much worked fine. Sure, they crashed sometimes, but losing a little unsaved work or having to restart occasionally seemed like a small price to pay for not having to tediously spell out every little piece of a program in the language of a formal logical system. In time, even program verification’s earliest champions began to doubt its usefulness. In the 1990s Hoare — whose “Hoare logic” was one of the first formal systems for reasoning about the correctness of a computer program — acknowledged that maybe specification was a labor-intensive solution to a problem that didn’t exist. As he wrote in 1995:

Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalization…. It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.

Then came the Internet, which did for coding errors what air travel did for the spread of infectious diseases: When every computer is connected to every other one, inconvenient but tolerable software bugs can lead to a cascade of security failures.

“Here’s the thing we didn’t quite fully understand,” Appel said. “It’s that there are certain kinds of software that are outward-facing to all hackers in the Internet, so that if there is a bug in that software, it might well be a security vulnerability.”

Jeannette Wing, a computer scientist at Microsoft Research, is developing a formally verified protocol for the Internet. Courtesy of Jeannette M. Wing

By the time researchers began to understand the critical threats to computer security posed by the Internet, program verification was ready for a comeback. To start, researchers had made big advances in the technology that undergirds formal methods: improvements in proof-assistant programs like Coq and Isabelle that support formal methods; the development of new logical systems (called dependent-type theories) that provide a framework for computers to reason about code; and improvements in what’s called “operational semantics”—in essence, a language that has the right words to express what a program is supposed to do.

“If you start with an English-language specification, you’re inherently starting with an ambiguous specification,” said Jeannette Wing, corporate vice president at Microsoft Research. “Any natural language is inherently ambiguous. In a formal specification you’re writing down a precise specification based on mathematics to explain what it is you want the program to do.”

Researchers in formal methods have also moderated their goals. In the 1970s and early 1980s, they envisioned creating entire fully verified computer systems, from the circuit all the way up to the programs. Today most formal methods researchers focus instead on verifying smaller but especially vulnerable or critical pieces of a system, like operating systems or cryptographic protocols.

“We’re not claiming we’re going to prove an entire system is correct, 100 percent reliable in every bit, down to the circuit level,” Wing said. “That’s ridiculous to make those claims. We are much more clear about what we can and cannot do.”

The HACMS project illustrates how it’s possible to generate big security guarantees by specifying one small part of a computer system. The project’s first goal was to create an unhackable recreational quadcopter. The off-the-shelf software that ran the quadcopter was monolithic, meaning that if an attacker broke into one piece of it, he had access to all of it. So, over the next two years, the HACMS team set about dividing the code on the quadcopter’s mission-control computer into partitions.

The team also rewrote the software architecture, using what Fisher, the HACMS founding project manager, calls “high-assurance building blocks”—tools that allow programmers to prove the fidelity of their code. One of those verified building blocks comes with a proof guaranteeing that someone with access inside one partition won’t be able to escalate their privileges and get inside other partitions.

Later the HACMS programmers installed this partitioned software on Little Bird. In the test against the Red Team hackers, they provided the Red Team access inside a partition that controlled aspects of the drone helicopter, like the camera, but not essential functions. The hackers were mathematically guaranteed to get stuck. “They proved in a machine-checked way that the Red Team would not be able to break out of the partition, so it’s not surprising” that they couldn’t, Fisher said. “It’s consistent with the theorem, but it’s good to check.”

In the year since the Little Bird test, Darpa has been applying the tools and techniques from the HACMS project to other areas of military technology, like satellites and self-driving convoy trucks. The new initiatives are consistent with the way formal verification has spread over the last decade: Each successful project emboldens the next. “People can’t really have the excuse anymore that it’s too hard,” Fisher said.

Verifying the Internet

Security and reliability are the two main goals that motivate formal methods. And with each passing day the need for improvements in both is more apparent. In 2014 a small coding error that would have been caught by formal specification opened the way for the Heartbleed bug, which threatened to bring down the Internet. A year later a pair of white-hat hackers confirmed perhaps the biggest fears we have about Internet-connected cars when they successfully took control of someone else’s Jeep Cherokee.

As the stakes rise, researchers in formal methods are pushing into more ambitious places. In a return to the spirit that animated early verification efforts in the 1970s, the DeepSpec collaboration led by Appel (who also worked on HACMS) is attempting to build a fully verified end-to-end system like a web server. If successful, the effort, which is funded by a $10 million grant from the National Science Foundation, would stitch together many of the smaller-scale verification successes of the last decade. Researchers have built a number of provably secure components, such as the core, or kernel, of an operating system. “What hadn’t been done, and is the challenge DeepSpec is focusing on, is how to connect those components together at specification interfaces,” Appel said.

Over at Microsoft Research, software engineers have two ambitious formal verification projects underway. The first, named Everest, is to create a verified version of HTTPS, the protocol that secures web browsers and that Wing refers to as the “Achilles heel of the Internet.”

The second is to create verified specifications for complex cyber-physical systems such as drones. Here the challenge is considerable. Where typical software follows discrete, unambiguous steps, the programs that tell a drone how to move use machine learning to make probabilistic decisions based on a continuous stream of environmental data. It’s far from obvious how to reason about that kind of uncertainty or pin it down in a formal specification. But formal methods have advanced a lot even in the last decade, and Wing, who oversees this work, is optimistic formal methods researchers are going to figure it out.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


from Wired Top Stories http://ift.tt/2dhdPum
via IFTTT

Why Snap’s Spectacles Are Going to (Finally) Make Life Logging Cool

The human desire to log the realities of everyday life is something that technology companies have been trying to turn into a successful product for years. With a new pair of smart glasses on offer, the company behind Snapchat hopes that the answer lies in simplicity.

Some of the strongest proponents of the life-logging movement have recently retired from the pursuit. Those early adopters sought to record everything, from sleep and steps to calorie intake and mood, and ultimately found the process difficult and unrewarding.

Google’s attempt to capitalize on the phenomenon with its Glass project struggled to take off. First announced in April 2013, the device later became available as part of the Explorer Program for the princely sum of $1,500. It was ultimately scrapped as a commercial product last year, sidelined instead to research and workplace use.

Glass had many problems. Certainly, its price made it an exclusive item. Privacy advocates worried ceaselessly about people snapping images without permission, coining the fabulous term “Glasshole” along the way. But perhaps its biggest failing was Google’s attempt to shoehorn a small computer, display, camera, microphone, and more into a diminutive frame. This was Google trying to invent the future, and failing.

Now Snap—the new name of Snapchat—thinks it can do better.

Where Glass was an exercise in speculative future-gazing, Snap’s new Spectacles are a study in pragmatism. The new sunglasses, which are styled like Ray-Bans, allow the wearer to record first-person video at the press of a button, shooting up to 30 seconds of circular video with a 115-degree field of view. The clips can then be transferred to a smartphone via Wi-Fi or Bluetooth, to be added to Snapchat.

That’s it. Nothing else to it.

Their simplicity—the decision not even to attempt to break new technological ground—could be the very thing that cements their place in the future. Perhaps their most compelling feature is that they have a clear and distinct purpose: you can wear them, capture first-person images and video of what you have done, and then quickly upload that material to the Web.

Snap does give Spectacles one particularly innovative touch: the circular video that the device shoots. In a stroke, that’s solved—for Snapchat, at least—the annoying problem of video appearing in the wrong orientation if people hold their phone in landscape view.

At $130, the glasses are also easy on the wallet. Snap’s CEO, Evan Spiegel, referred to them as a “toy” in the Wall Street Journal interview in which they were announced, and he’s right. At that price, it’s a piece of hardware that can be purchased for good old-fashioned fun, or given to a teenager as a treat. Like any other toy, they’ll become their own advertising if they do sell well: while smartphones are inherently private devices, wearing a pair of Snap’s glasses alerts the world to what a person is doing, which may appeal to people looking to make a statement.

But that brings us to one of the major questions facing the device: who, exactly, is going to buy them?

The styling is undeniably youthful, and it’s easy enough to imagine a teenager lusting after a pair. But earlier this year, Snapchat’s vice president of content, Nick Bell, pointed out that two-thirds of its users are over 18, and 50 percent of new daily users are over 25. Those folks are more attractive to advertise to, because unlike teens, they tend to have money. But it’s not clear if Spectacles are the product to drive that adoption.

There’s also the pesky question of privacy. Glass suffered an onslaught of criticism for the ways in which it enabled snooping, and Snap will undoubtedly be seeking to avoid the same. It’s clearly at least considered the issue: the glasses light up when they’re recording, which may help. A little. But that feature is sure to be hacked within days of the device’s release.

What could act in Snap’s favor is volume. Where Glass was the preserve of a small number of wealthy, middle-aged Silicon Valley types who were easy to resent, cities awash with Spectacle-wearing Millennials could help normalize public video recording.

Not that that will happen overnight. Spiegel told the Wall Street Journal that the company was “going to take a slow approach to rolling them out,” so that it can work through a process of “figuring out if it fits into people’s lives and seeing how they like it.” If it does, and they do, Spectacles could help Snap turn an idea that’s always proved unsuccessful into something that allows people to document their lives easily.

It could, perhaps, make life logging cool again.

(Read more: Wall Street Journal, “Google Glass Is Dead; Long Live Smart Glasses,” “Google Glass Finds a Second Act at Work,” “Life Logging Is Dead. Long Live Life Logging?”)

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2cXBahF
via IFTTT

Most of India Still Isn’t Online—Here’s Google’s Plan to Fix That

Google has made a series of announcements that make one thing very clear: it plans to take India’s Internet by storm.

The company is no stranger to working in India, with its railroad station Wi-Fi project already proving a notable success. But today it announced a series of new products and projects squarely aimed at the Indian market, signaling its intent to tap a market of 1.25 billion people, of whom only some 200 million to 250 million are currently online.

Most notable among the announcements is a new YouTube app called Go, specially designed for users in developing countries with poor—and expensive—data connections. While the app does allow video downloads for later viewing, that’s not its real party trick. Instead, as Wired explains, it allows people to share video with others in their community using ad-hoc wireless networks, so users don’t have to chew through cellular data allowances. That feature was inspired by trips to India, where Google saw people sharing videos not as links, but via SD cards that were passed around.

In partnership with Google, Indian Railways’ RailTel offers free Wi-Fi Internet service in Mumbai’s central railway station. The service is due to be rolled out in 100 of the country’s busiest stations by the end of 2016.

The app isn’t the only data-saving scheme Google’s been working on. It has also announced that Chrome for Android will now provide a Data Saver mode that compresses audio and video more aggressively and downloads specially streamlined versions of Web pages. Google claims that MP4 files can be streamed using 67 percent less data, while a compressed Web page ends up just 10 percent of its original size. There will also be an option to download Web pages for later viewing. Clearly, there will be a loss of quality—but that probably doesn’t matter too much when you’re using a slow 2G connection.

Elsewhere, Google has also announced that it’s expanding its rail station Wi-Fi initiative, known as Google Station, into other locations such as cafes and malls, with plans to take it to other countries. And the company will be adding Hindi support to its new AI-powered Assistant.

You may, of course, be reading this and thinking that these features seem useful anywhere, and not just in underconnected countries like India. Google would agree. Writing in the Economic Times, Google CEO Sundar Pichai explains that the company is learning a lot from its work in the country, finding that many of its solutions to connectivity issues—such as Maps Offline—are also popular in the West. “In an increasingly mobile-first world, India gives us early insights into the future of the Internet,” he explains.

But make no mistake: India is a huge opportunity for Google. If Google can be the one to provide the services that new Internet users adopt, it will be able to recoup its investments using its favorite trick of all—advertising. Facebook has already tried as much in India with its Free Basics scheme, but that was banned because it breached the principles of net neutrality.

Google, though, looks set to do everything it can to succeed.

(Read more: Wired, Economic Times, The Verge, “Facebook and Google Are Racing to Supply India with Internet Access,” “How Assistant Could End Up Eating Google’s Lunch,” “India’s Blow Against Facebook Sets Up a Grand Experiment in Net Neutrality”)

from Technology Review Feed – Tech Review Top Stories http://ift.tt/2dhCae9
via IFTTT

Used Coffee Grounds Can Clean Contaminated Water

The earthy aroma of coffee is great for waking you up in the morning. But people who brew a daily cup of joe also generate, and toss, a helping of spent grounds every day. Add restaurants and coffee shops into the mix and you get tons of the brown dust. Now scientists have figured out an innovative way to reduce this waste—and simultaneously help remove toxic chemicals from the environment.

The researchers molded coffee grounds into a spongy filter that absorbs heavy metals from water. The design and initial tests are described in the journal ACS Sustainable Chemistry & Engineering. “This product can help reduce the waste we generate,” says lead scientist Despina Fragouli, a materials scientist at the Istituto Italiano di Tecnologia.

It’s a pretty convenient solution. And though people have previously recycled coffee waste for fertilizer, animal feed, and even biodiesel, there wasn’t an efficient way of using it for water remediation until now. The chemicals in coffee, such as fatty acids, cellulose, and polyphenols, bind to heavy metals the same way activated charcoal does, but afterward the loose powder had to be removed from the water somehow. So Fragouli and her team set about simplifying this process.

By adding sugar and silicone to coffee, the researchers were able to make a foamy brick that held together on its own and was easy to use as a filter. Soaking the brick in water caused the sugar to leach out and leave the coffee grounds open to bind to metal ions in the water.

In still water, a small 200-milligram piece of foam the size of an almond removed 99 percent of lead and mercury ions within 30 hours. In flowing water, or in water with a concentration of metal ions exceeding 200 parts per billion, the foam removed about 50 to 60 percent of the lead ions. That’s about the same efficiency as most commercially available filters, according to Fragouli.

Fragouli says her team is working on improving the composition of the coffee and silicone mix so that it can eventually be the only filter you need. The scientists are also experimenting with other waste materials from agricultural and food industries that may have different chemical properties and could help filter out other chemicals.

“It’s very important that we find new resources to make polymers—materials that are reusable and that can replace the petroleum-based materials we currently use,” says Fragouli.

from Popular Science – New Technology, Science News, The Future Now http://ift.tt/2d2wO8k
via IFTTT

Riding A Roller Coaster Could Help You Pass Small Kidney Stones

Every year around spring break there is a mass migration in which flocks of families from mid-Michigan flee the winter cold and head south to Orlando, Florida, and its many theme parks. They return a week later, satiated with fun and sun, cleansed of their worries by the vacation and of their kidney stones by the roller coasters.

Yes, riding a moderately intense roller coaster for just a few minutes might be enough to dislodge those pesky kidney stones before they turn into an even bigger problem. In a paper published today in The Journal of the American Osteopathic Association, researchers showed that riding the Big Thunder Mountain Railroad at Disney World’s Magic Kingdom helped kidney stones dislodge.

Kidney stones are small mineral deposits that form in the kidney and can be very painful to pass through the urinary tract. They account for about 300,000 emergency room visits in the United States each year, and treating kidney stone patients costs about $2.1 billion annually.

Urologist David Wartinger told Popular Science that the idea for the study came about eight years ago, after a patient came to see him following a spring break trip to Disney World. The patient had ridden the Big Thunder Mountain Railroad in Disney World’s Magic Kingdom three times in a row. That, in itself, was not unusual. What caught Wartinger’s attention was that after each ride the patient passed a small kidney stone: three stones in less than an hour.

“That really caught my attention. There was something going on here with roller coasters and renal calculi [kidney stones],” Wartinger says. “The problem was, how do you study something like that?”

Wartinger and his colleague, Marc Mitchell, tested the idea by using a silicone model kidney filled with urine and real kidney stones, then carrying it onto the ride in a backpack, positioned to align with the actual location of the kidneys. The silicone mold was 3D printed using a CT scan of an actual kidney. The silicone was chosen after thought was given to other options, including ballistic gel (which fell apart after use) and cow and pig kidneys, which the authors note in the paper would have been poor choices "owing to ambient temperature and the inappropriate display of such material in a family-friendly amusement park."

For the purposes of this study, they rode the Big Thunder Mountain Railroad 60 times with the fake kidney, standing in line with everyone else at Disney (which determined where on the ride they sat), noting the position of the kidney stones after each ride and repositioning any that had become dislodged. They found that sitting in the front of the ride resulted in a passage rate of 16.67 percent, while sitting in the back resulted in passing a kidney stone 63.89 percent of the time.

The exact mechanism by which roller coasters help dislodge a kidney stone is still unknown, but Wartinger has some ideas about what’s going on. “We know that the moderate-intensity coaster worked. You don’t need 70-mile-an-hour coasters, you don’t need precipitous drops, you don’t need inversions, you don’t need high-speed turns. What I think is happening is we’re vibrating the stones loose,” Wartinger says.

But before everyone rushes to Disney World with doctors’ notes, it’s worth remembering that this was only one model with one set of kidney stones. The interior of each kidney is unique, and kidney stones grow in many different shapes and sizes, so this isn’t a one-size-fits-all roller coaster solution.

“The recommendation isn’t go to Disney World and ride this one coaster,” Wartinger says. “Amusement theme park rides of moderate intensity have the potential to dislodge stones. So whatever coaster or park catches your fancy, you can probably benefit from attending.”

Wartinger says that three types of people could probably benefit most from a roller coaster prescription: people who have small kidney stones, people who have had an operation to break a larger kidney stone into smaller pieces, and women who have small kidney stones and are trying to get pregnant (vitamin supplements taken during pregnancy can cause kidney stones to grow larger).

If he can get more cooperation from other theme parks, Wartinger hopes to expand the study to look at how other roller coasters affect kidney stones. And it’s not just people here on the ground that might benefit.

Kidney stones are a huge problem for astronauts on long-term missions in micro-gravity environments. While building a roller-coaster in space might be impossible, Wartinger says that it might be possible to build a device that could spin people around, jarring the stones loose, no roller-coaster required.

from Popular Science – New Technology, Science News, The Future Now http://ift.tt/2cwC6q6
via IFTTT