New measurement confirms: The ozone is coming back

Each year’s ozone hole is a little bit different.

The Montreal Protocol, which went into effect in 1989, is a rare instance of a global agreement to solve a global problem: the release of vast quantities of ozone-destroying chemicals into the atmosphere. In the decades since, however, changes in ozone have been small and variable, making it hard to tell whether the protocol is making any difference.

But evidence has been building that the ozone layer is recovering, and a new paper claims to have directly measured the ozone hole gradually filling back in.

CFCs and ozone

During the 1970s and ’80s, evidence had been building that a class of industrial chemicals, the chlorofluorocarbons (CFCs), were damaging the ozone layer, a region of the stratosphere rich in this reactive form of oxygen. Ozone is able to absorb UV light that would otherwise reach the Earth’s surface, where it’s capable of damaging DNA. But the levels of ozone had been dropping, which ultimately resulted in a nearly ozone-free “hole” above the Antarctic.

The ozone hole spurred countries and companies into action. As companies developed replacements for CFCs, countries negotiated an international agreement that would limit and phase out their use. The Montreal Protocol codified that agreement, and it is widely credited with reducing (though not eliminating) the CFCs in our atmosphere.

But determining whether the protocol is having the desired effect on the ozone layer has been challenging. Ozone is naturally generated in the stratosphere at a very slow rate, and the amount of destruction that takes place over the Antarctic varies from year to year. Hints of a recovery have often been followed by years in which ozone levels drop again. Recovery has been so slow, in fact, that it’s possible to find people who claim the whole thing was a scam—and even a conspiracy designed to test whether it was possible to create a similar agreement for greenhouse gases.

The challenges of getting definitive measurements are legion. To begin with, CFCs aren’t the only chemicals that can react with ozone, so tying things to them is tricky. In addition, the weather has a big influence on ozone’s destruction. The atmosphere over Antarctica picks up the CFCs during the Southern Hemisphere summer as they’re brought down by winds from the mid-latitudes. Over the winter, strong winds block off a variable-sized region above the pole where the hole develops.

In addition to winds, temperature also has a large effect on the rate of the ozone-destroying reactions, which take place on ice crystals.

As a result of all these factors, the amount of ozone destroyed and the physical size of the hole vary from year to year. While there have been some strong hints of the ozone hole’s recovery, scientists have ended up arguing over whether they’re statistically significant.

Tracking chemicals over the Antarctic

Which brings us to the new research. Early last decade, NASA launched the Aura satellite, which was specifically designed to track the chemical composition of the atmosphere. Two researchers from NASA’s Goddard Space Flight Center, Susan Strahan and Anne Douglass, have now analyzed a dozen years of Aura data to directly tie ozone recovery to the drop in CFCs.

To begin with, the team found a proxy for the amount of chemicals that arrived over Antarctica during the Southern Hemisphere summer. Nitrous oxide is also brought in on the winds, and unlike CFCs, it doesn’t undergo reactions over the course of the winter. Thus, it can serve as a tracer for the amount of CFCs that are available each winter.

Strahan and Douglass use some additional chemistry to demonstrate that nitrous oxide levels correlate with the amount of CFC present. They focus on measuring chlorine levels at the end of the winter season. By this point, in addition to destroying ozone, the chlorine in the CFCs would have reacted with methane and produced hydrochloric acid. Thus, the amount of hydrochloric acid present should also relate to the starting concentration of CFCs.

This data showed that the overall levels of CFCs and nitrous oxide were highly correlated but that the ratio of the two was slowly shifting as the total amount of chlorine was trending downward. That’s a key piece of data, as it shows the Montreal Protocol is working as intended—fewer CFCs are being delivered to Antarctica each year.

The next step is to compare those numbers to ozone concentrations within the hole. This was done simply by comparing the concentrations of ozone over 10-day windows in the early and late winter. Rather than providing an absolute measure of the amount of ozone present, the comparison gives an indication of what percentage of ozone is being destroyed each winter.
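As a rough illustration of that comparison (a hedged sketch, not the authors’ code; the function name and interface are invented for the example), the seasonal loss boils down to a simple ratio of the two window averages:

/* Hedged sketch: given mean ozone from a 10-day window at the start of
   winter and another at the end, return the fraction destroyed over the
   season. Names and interface are illustrative, not from the paper. */
double winter_ozone_loss(double mean_early_winter, double mean_late_winter)
{
    return 1.0 - mean_late_winter / mean_early_winter;
}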

Overall, the researchers show that the amount of chlorine over the Antarctic is declining at a rate of 25 parts-per-trillion each year (that’s 0.8 percent), although the weather-driven variability means that there are some years it increases. This resulted in a 20-percent drop in ozone destruction during this period.

The good news is that the variability in the chlorine present paralleled that of the annual ozone destruction, validating the overall science behind ozone destruction and nicely matching detailed chemical models of ozone’s sensitivity to CFCs.

It’s hard to summarize these findings any better than the authors do: “All of this is evidence that the Montreal Protocol is working—the chlorine is decreasing in the Antarctic stratosphere, and the ozone destruction is decreasing along with it.”

Geophysical Research Letters, 2017. DOI: 10.1002/2017GL074830

from Ars Technica http://ift.tt/2CCO42K
via IFTTT

Meltdown and Spectre: Here’s what Intel, Apple, Microsoft, others are doing about it

The Meltdown and Spectre flaws—two related vulnerabilities that enable a wide range of information disclosure from every mainstream processor, with particularly severe flaws for Intel and some ARM chips—were originally revealed privately to chip companies, operating system developers, and cloud computing providers. That private disclosure was scheduled to become public some time next week, enabling these companies to develop (and, in the case of the cloud companies, deploy) suitable patches, workarounds, and mitigations.

With researchers figuring out one of the flaws ahead of that planned reveal, that schedule was abruptly brought forward, and the pair of vulnerabilities was publicly disclosed on Wednesday, prompting a rather disorderly set of responses from the companies involved.

There are three main groups of companies responding to the Meltdown and Spectre pair: processor companies, operating system companies, and cloud providers. Their reactions have been quite varied.

What Meltdown and Spectre do

A brief recap of the problem: modern processors perform speculative execution. To maximize performance, they try to execute instructions even before it is certain that those instructions need to be executed. For example, the processors will guess at which way a branch will be taken and execute instructions on the basis of that guess. If the guess is correct, great; the processor got some work done without having to wait to see if the branch was taken or not. If the guess is wrong, no big deal; the results are discarded and the processor resumes executing the correct side of the branch.

While this speculative execution does not alter program behavior at all, the Spectre and Meltdown research demonstrates that it perturbs the processor’s state in detectable ways. This perturbation can be detected by carefully measuring how long it takes to perform certain operations. Using these timings, it’s possible for one process to infer properties of data belonging to another process—or even the operating system kernel or virtual machine hypervisor.
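Conceptually, the timing side of the attack can be sketched in a few lines of C (a rough illustration only, assuming an x86 machine; the function name and the cycle threshold are invented for the example):

#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp() */

/* Minimal sketch of the timing measurement: a load that completes
   quickly suggests the cache line was already touched, for instance by
   earlier speculative execution. The cutoff of 100 cycles is an
   assumed, machine-dependent value. */
static int likely_cached(const volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);   /* read the timestamp counter */
    (void)*addr;                       /* the timed memory access */
    uint64_t elapsed = __rdtscp(&aux) - start;
    return elapsed < 100;              /* assumed cache-hit threshold */
}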

This information leakage can be used directly; for example, malicious JavaScript in a browser could steal passwords stored in the browser. It can also be used in tandem with other security flaws to increase their impact. Information leakage tends to undermine protections such as ASLR (address space layout randomization), so these flaws may enable effective exploitation of buffer overflows.

Meltdown, applicable to virtually every Intel chip made for many years, along with certain high-performance ARM designs, is the easier to exploit and enables any user program to read vast tracts of kernel data. The good news, such as it is, is that Meltdown also appears easier to robustly guard against. The flaw depends on the way that operating systems share memory between user programs and the kernel, and the solution—albeit a solution that carries some performance penalty—is to put an end to that sharing.

Spectre, applicable to chips from Intel, AMD, and ARM, and probably every other processor on the market that offers speculative execution, too, is more subtle. It encompasses a trick that abuses array bounds testing to read memory within a single process, which can be used to attack the integrity of virtual machines and sandboxes, and cross-process attacks using the processor’s branch predictors (the hardware that guesses which side of a branch is taken and hence controls the speculative execution). Systemic fixes for some aspects of Spectre appear to have been developed, but protecting against the whole range of attacks will require modification (or at least recompilation) of at-risk programs.
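The array bounds trick is easiest to see in code. The following is a hedged C sketch of the widely published variant-1 pattern, not code taken from the paper; the array names and sizes are made up:

#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre variant-1 gadget. If the branch predictor
   guesses that the bounds check will pass, the two dependent loads can
   execute speculatively even when x is out of bounds, leaving a
   secret-dependent cache footprint that a timing probe can observe. */
uint8_t array1[16];
uint8_t array2[256 * 512];
size_t  array1_size = 16;

uint8_t victim_function(size_t x)
{
    if (x < array1_size) {
        return array2[array1[x] * 512];   /* touches one cache line per secret value */
    }
    return 0;
}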

Intel

So, onto the responses. Intel is the company most significantly affected by these problems. Spectre hits everyone, but Meltdown only hits Intel and ARM. Moreover, it only hits the highest performance ARM designs. For Intel, virtually every chip made for the last five, ten, and possibly even 20 years is vulnerable to Meltdown.

The company’s initial statement, produced on Wednesday, was a masterpiece of obfuscation. It contains many statements that are technically true—for example, “these exploits do not have the potential to corrupt, modify, or delete data”—but utterly beside the point. Nobody claimed otherwise! The statement doesn’t distinguish between Meltdown—a flaw that Intel’s biggest competitor, AMD, appears to have dodged—and Spectre and, hence, fails to demonstrate the unequal impact on the different companies’ products.

Follow-up material from Intel has been rather better. In particular, this whitepaper describing mitigation techniques and future processor changes to introduce anti-Spectre features appears sensible and accurate.

For the Spectre array bounds problem, Intel recommends inserting a serializing instruction (lfence is Intel’s choice, though there are others) in code between testing array bounds and accessing the array. Serializing instructions prevent speculation: every instruction that appears before the serializing instruction must be completed before the serializing instruction can begin to execute. In this case, it means that the test of the array bounds must have been definitively calculated before the array is ever accessed; no speculative access to the array that assumes the test succeeds is allowed.
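Applied to a gadget like the one sketched earlier, the recommended change is a single barrier between the bounds check and the access (again a hedged sketch, here using the _mm_lfence() compiler intrinsic; the surrounding names are illustrative):

#include <emmintrin.h>   /* _mm_lfence() */
#include <stddef.h>
#include <stdint.h>

extern uint8_t array1[], array2[];
extern size_t  array1_size;

uint8_t victim_function_hardened(size_t x)
{
    if (x < array1_size) {
        /* Serializing barrier: the bounds check above must resolve
           before the loads below are allowed to execute. */
        _mm_lfence();
        return array2[array1[x] * 512];
    }
    return 0;
}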

Less clear is where these serializing instructions should be added. Intel says that heuristics can be developed to figure out the best places in a program to include them but warns that they probably shouldn’t be used with every single array bounds test; the loss of speculative execution imposes too high a penalty. One imagines that perhaps bounds tests on array indexes that come from user data should be serialized and others left unaltered. This difficulty underscores the complexity of Spectre.

For the Spectre branch prediction attack, Intel is going to add new capabilities to its processors to alter the behavior of branch prediction. Interestingly, some existing processors that are already in customer systems are going to have these capabilities retrofitted via a microcode update. Future generation processors will also include the capabilities, with Intel promising a lower performance impact. There are three new capabilities in total: one to “restrict” certain kinds of branch prediction, one to prevent one HyperThread from influencing the branch predictor of the other HyperThread on the same core, and one to act as a kind of branch prediction “barrier” that prevents branches before the “barrier” from influencing branches after the barrier.

These new restrictions will need to be supported and used by operating systems; they won’t be available to individual applications. Some systems appear to already have the microcode update; everyone else will have to wait for their system vendors to get their act together.

The ability to add this capability with a microcode update is interesting, and it suggests that the processors already had the ability to restrict or invalidate the branch predictor in some way—it was just never publicly documented or enabled. The capability likely exists for testing purposes.

Intel also suggests a way of replacing certain branches in code with “return” instructions. Patches to enable this have already been contributed to the gcc compiler. Return instructions don’t get branch predicted in the same way, so they aren’t susceptible to the same information leak. However, it appears that they’re not completely immune to branch predictor influence; a microcode update for Broadwell processors or newer is required to make this transformation a robust protection.

This approach would require every vulnerable application, operating system, and hypervisor to be recompiled.

For Meltdown, Intel is recommending the operating system level fix that first sparked interest and intrigue late last year. The company also says that future processors will contain some unspecified mitigation for the problem.

AMD

AMD’s response has a lot less detail. AMD’s chips aren’t believed susceptible to the Meltdown flaw at all. The company also says (vaguely) that it should be less susceptible to the branch prediction attack.

The array bounds problem has, however, been demonstrated on AMD systems, and for that, AMD is suggesting a very different solution from Intel’s: specifically, operating system patches. It’s not clear what these might be (while Intel released awful PR, it also produced a good whitepaper, whereas AMD so far has only offered PR), and the fact that this contradicts both Intel’s and, as we’ll see later, ARM’s responses is very peculiar.

AMD’s behavior before this all went public was also rather suspect. AMD, like the other important companies in this field, was contacted privately by the researchers, and the intent was to keep all the details private until a coordinated release next week, in a bid to maximize the deployment of patches before revealing the problems. Generally that private contact is made on the condition that any embargo or non-disclosure agreement is honored.

It’s true that AMD didn’t actually reveal the details of the flaw before the embargo was up, but one of the company’s developers came very close. Just after Christmas, an AMD developer contributed a Linux patch that excluded AMD chips from the Meltdown mitigation. In the note with that patch, the developer wrote, “The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.”

It was this specific information—that the flaw involved speculative attempts to access kernel data from user programs—that arguably led to researchers figuring out what the problem was. The message narrowed the search considerably, outlining the precise conditions required to trigger the flaw.

For a company operating under an embargo, with many different players attempting to synchronize and coordinate their updates, patches, whitepapers, and other information, this was a deeply unhelpful act. While there are certainly those in the security community who oppose this kind of information embargo and prefer to reveal any and all information at the earliest opportunity, given the rest of the industry’s approach to these flaws, AMD’s action seems, at the least, reckless.

ARM

The inside of the ExoKey, with its Atmel ARM-based CPU.

ARM’s response was the gold standard. Lots of technical detail in a whitepaper, but ARM chose to let that stand alone, without the misleading PR of Intel or the vague imprecision of AMD.

For the array bounds attack, ARM is introducing a new instruction that provides a speculation barrier; similar to Intel’s serializing instructions, the new ARM instruction should be inserted between the test of array bounds and the array access itself. ARM even provides sample code to show this.

ARM doesn’t have a generic approach for solving the branch prediction attack, and, unlike Intel, it doesn’t appear to be developing any immediate solution. However, the company notes that many of its chips already have mechanisms for invalidating or temporarily disabling the branch predictor and that operating systems should make use of them.

ARM’s very latest high-performance design, the Cortex-A75, is also vulnerable to Meltdown attacks. The solution proposed is the same as Intel suggests, and the same that Linux, Windows, and macOS are known to have implemented: change the memory mapping so that kernel memory mappings are no longer shared with user processes. ARM engineers have contributed patches to Linux to implement this for ARM chips.

from Ars Technica http://ift.tt/2CZjuy0
via IFTTT

Netflix Is Apple’s Most Likely Target, Analysts Claim

Apple has a 40 percent chance of buying Netflix in 2018, according to one pair of market analysts. While the number is somewhat arbitrary, by the analysts’ logic it might now be a better-than-even chance.

The prediction comes from Asiya Merchant and Jim Suva of Citi, who listed the chances of several possible takeovers. Others included Activision, Electronic Arts, and Take-Two at around 10 percent each, with Tesla and Hulu as longer shots.

The figures effectively suggest it’s a guaranteed certainty that Apple will buy one (and exactly one) major company this year. While that clearly makes no statistical sense, there is something behind it. The analysts note that the recently-passed US tax law changes mean Apple has a one-off chance to bring money held overseas back to the US with a heavily reduced tax bill. That means it’s in Apple’s interest to spend the money by striking a takeover deal.

Although only just made public, the paper behind the forecast was clearly written several weeks ago, as it shows Disney as a 20-30 percent chance. That’s now out of the window, meaning Netflix is effectively rated as having as much chance of being taken over as all the other listed companies put together, or more.

The logic appears to be that while streaming video is a complementary product to Apple’s devices, it’s an area where Apple doesn’t have any particular expertise, so it’s likely a case of if you can’t beat them, buy them.

from [Geeks Are Sexy] Technology News http://ift.tt/2qfZdRY
via IFTTT

Roombas Could Find Wi-Fi Dead Zones

A Roomba robot vacuum will soon be able to map the Wi-Fi coverage in your home. It’s an official feature that makes an existing hack a little easier to use.

The idea itself isn’t particularly new, as a few folks had already figured out that a Roomba offers a slightly more efficient way to track coverage around a home without the hassle of running a signal-measuring app and painstakingly walking around every corner of the home yourself. Roomba is simply taking away the need to create a custom app (combining location and signal monitoring) and to fix a phone to the cleaner.

To start with, the feature will only be available on the Roomba 900 series as part of an application-only beta program.

There is a catch, however: Roomba’s chief executive has previously said the company is looking at selling data collected from the cleaners to major tech firms. This would apply to all models, not just those running the Wi-Fi tracker.

It’s definitely a case of the technology being neutral and the usage anything but. On the upside, Roomba is stressing the potential benefits: for example, smart home speakers could tell you the best place to put them, or air conditioning and lighting could adjust to specific room layouts.

On the downside, there’s clearly room for targeted marketing if a company has a rough idea of, for example, what furniture you do and don’t already have. To its credit, Roomba insists any sale of customer data would happen only on an opt-in basis.

from [Geeks Are Sexy] Technology News http://ift.tt/2CRciDT
via IFTTT

Shellfish Industry, Scientists Wrestle With Potentially Deadly Toxic Algae Bloom

Bangs Island Mussels worker Jon Gorman sets juvenile mussels onto a rope that will be their home for the next year as they grow to market size. (Fred Bever/Maine Public Radio)

A new threat to New England’s shellfish industry seems to be establishing itself more firmly, and regulators are trying to stay ahead of potentially deadly blooms of toxic algae that may be driven by climate change.

Thirty years ago, four people died from amnesic shellfish poisoning after eating cultured mussels from Canada’s Prince Edward Island. The mussels contained domoic acid, a neurotoxin produced by a class of algae called pseudo-nitzschia. The toxin turned up in PEI mussels the next year, but for decades after that it wasn’t heard from again on the Eastern Seaboard.

Then, in the fall of 2016, toxin-bearing pseudo-nitzschia bloomed off Down East Maine in areas that had previously never seen an algae bloom, as well as off Massachusetts and Rhode Island. Regulators in Maine have closed Down East shellfish harvests twice since then.

Now for the first time, a pseudo-nitzschia bloom is plaguing a large swathe of Casco Bay, from south and east of Portland to South Harpswell. Much of Casco Bay has been closed to shellfish harvesting for weeks.

That’s a worry here on an aquaculture raft anchored off windy Falmouth, where Matt Moretti shovels bushels of juvenile mussels to be returned to the sea until they grow to market size.

Moretti, who co-owns Bangs Island Mussels, is not so happy about the ongoing algae bloom, and he’s keeping his fingers crossed that it won’t affect his harvest. The company already suffered from an extended closure in the spring, when a different bloom — the annual red tide that Maine has long been familiar with — shut down the operation for 10 weeks.

“It’s difficult. If we were shut down now it would be a double whammy for a really tough year, and it would be bad,” he says.

Bangs Island continues to bring mussels to market right now, because it’s able to give samples from each shipment to a state lab for testing. And so far, so good. But only the larger aquaculture lease operations like this one have that capacity.

Early this month, the state instituted a broad closure of shellfish harvesting in the bay — mussels, scallops, oysters, quahogs, clams. For smaller operators who can’t do their own testing, such as Jared Lavers, who digs for wild clams in Freeport, it’s a matter of finding other means until the bloom subsides.

“I have since hopped on a couple different lobster boats to try to fill in the gap, but I’m not able to make as much money as I normally would. I’m doing everything I can to keep my head above water,” he says.

“It’s unprecedented to have a major biotoxin closure like this in December,” says Kohl Kanwit, who directs the public health bureau of Maine’s Department of Marine Resources. “[It’s] particularly terrible for families who depend on clamming or shellfish livelihoods.”

But given the stakes for public health, there’s not much choice, she says. In the earlier blooms of pseudo-nitzschia, she says, the state had to play catch-up after recalls of several shipments of shellfish that might have been exposed before the threat was detected.

Now, Kanwit says more frequent testing for the algae and for toxin buildup in shellfish is becoming the norm. When this bloom first surfaced, regulators moved quickly to impose a precautionary closure. Now, it’s all hands on deck, she says, for state regulators and scientists.

“It’s not like red tide where we have decades and decades of experience managing this. We don’t have any historic data here. So we are trying to gather as much information as we can while this bloom is going on,” she says. “We are archiving phytoplankton samples, we archive all kinds of shellfish samples, because one of the other things we want to look at is how do the different species respond to this particular toxin.”

The biological characteristics of the particular strain of pseudo-nitzschia in question — pseudo-nitzschia australis — are not well understood.

“The reason they produce the toxin we just don’t know,” says Mark Wells, a marine science professor at the University of Maine in Orono who has been studying toxic blooms on the West Coast for years.

Wells says the recent East Coast blooms may be associated with temperatures in the Gulf of Maine’s waters, which are warming faster than most water bodies worldwide.

“We’re wondering whether the warming in the surface may actually be selecting more for pseudo-nitzschia, so that in the fall, when the bloom happens, there’s more of a chance that pseudo-nitzschia will be the ones that are blooming,” he says.

The uncertainty about the wheres and whens of toxic blooms has Moretti and others in the industry strategizing for the future.

“We’re trying to build it into our business, some redundancy, some geographic separation to get out of a high-risk area or at least into a lesser risk area that has some different sort of time scale of blooms,” he says.

Redundancy will be costly but worth it, Moretti says, to protect the business against the unpredictability of a changing ecosystem.

Regulators, meanwhile, continue testing samples at some 300 sites around the state, including the Damariscotta River, the heart of Maine’s growing oyster harvest. So far the toxic bloom has not turned up there.

This story comes from the New England News Collaborative: Eight public media companies coming together to tell the story of a changing region, with support from the Corporation for Public Broadcasting.

from NPR Topics: News http://ift.tt/2EXyexG
via IFTTT

Amazon’s smart mirror patent teases fashion’s future

Amazon’s latest patent hints at what it might be like to get dressed in the future.

On Tuesday, the tech giant was granted a patent for a blended reality mirror, which could superimpose virtual clothing onto your reflection. It would also be capable of placing you in a virtual scene, like a beach or a restaurant, to match an outfit with the occasion.

The mirror would use a combination of mirrors, lights, projectors, displays and cameras to project the image of a certain setting onto the display, according to the patent.

“These visual displays can be used to alter scenes as perceived by users, for example, by adding objects to the scene that do not actually exist,” reads the patent.

While it’s unclear where the virtual clothes come from, this could potentially be a new way for Amazon to sell more apparel. In recent years, the company has rolled out several of its own in-house fashion labels, such as James & Erin women’s clothing and Franklin & Freeman men’s dress shoes.

It’s also not clear whether the mirror is currently in development at Amazon, but it hints at where the company’s mind is at. Amazon declined to comment.

While the smart mirror may seem like an unusual move for Amazon, it could serve as an extension for its newly launched product Echo Look ($200). The device serves as a style assistant to help you decide what to wear. It has a voice-controlled camera that snaps pictures of you in different outfits and works alongside an app.

After taking photos of you in two outfits, the Echo Look’s built-in Style Check tool decides which one looks best on you. It uses a combination of machine learning technology and human fashion specialists.

Other retailers are also featuring smart mirrors in stores. For example, Rebecca Minkoff’s connected store concept has mirrors that double as touchscreens, allowing customers to browse looks, colors and sizes. Nordstrom has also tested smart mirrors in fitting rooms.

from Business and financial news – CNNMoney.com http://ift.tt/2qm3yTP
via IFTTT