After “swatting” death in Kansas, 25-year-old arrested in Los Angeles

A still from the Wichita Police footage of the shooting.

The alleged “swatter” behind Thursday’s police killing of a Wichita, Kansas, man has been arrested.


After beating cable lobby, Colorado city moves ahead with muni broadband

Still from an industry-funded ad warning against municipal broadband in Fort Collins, Colorado.

The city council in Fort Collins, Colorado, last night voted to move ahead with a municipal fiber broadband network providing gigabit speeds, two months after the cable industry failed to stop the project.

Last night’s city council vote came after residents of Fort Collins approved a ballot question that authorized the city to build a broadband network. The ballot question, passed in November, didn’t guarantee that the network would be built because city council approval was still required, but that hurdle is now cleared. Residents approved the ballot question despite an anti-municipal broadband lobbying campaign backed by groups funded by Comcast and CenturyLink.

The Fort Collins City Council voted 7-0 to approve the broadband-related measures, a city government spokesperson confirmed to Ars today.

“Last night’s three unanimous votes begin the process of building our city’s own broadband network,” Glen Akins, a resident who helped lead the pro-municipal broadband campaign, told Ars today. “We’re extremely pleased the entire city council voted to support the network after the voters’ hard fought election victory late last year. The municipal broadband network will make Fort Collins an even more incredible place to live.”

Net neutrality and privacy

While the Federal Communications Commission has voted to eliminate the nation’s net neutrality rules, the municipal broadband network will adhere to net neutrality principles and impose no data caps.

“The network will deliver a ‘net-neutral’ competitive unfettered data offering that does not impose caps or usage limits on one use of data over another (i.e., does not limit streaming or charge rates based on type of use),” a new planning document says. “All application providers (data, voice, video, cloud services) are equally able to provide their services, and consumers’ access to advanced data opens up the marketplace.”

The city will also be developing policies to protect consumers’ privacy. FCC privacy rules that would have protected all Americans were eliminated by the Republican-controlled Congress last year.

The items approved last night (detailed here and here) provide a $1.8 million loan from the city’s general fund to the electric utility for first-year start-up costs related to building telecommunications facilities and services. Later, bonds will be “issued to support the total broadband build out,” the measure says.

The city intends to offer gigabit service for $70 a month or less, along with a cheaper Internet tier. Underground wiring for improved reliability and “universal coverage” are two of the key goals listed in the measure.


As of today, no US airlines operate the mighty Boeing 747

On Wednesday, Delta Air Lines flight 9771 flew from Atlanta to Pinal Airpark in Arizona. It wasn’t a full flight—just 48 people on board. But it was a milestone—and not just for the two people who got married mid-flight—for it marked the very last flight of a Boeing 747 operated by a US airline. Delta’s last scheduled passenger service with the jumbo was actually late in December, at which point it conducted a farewell tour and then some charter flights. But as of today, after nearly five decades in service, if you want to ride a 747 you’ll need to be traveling abroad.

Way back in the 1960s, when the white heat of technological progress was burning bright, it looked for a while as if supersonic air travel was going to be the next big thing. France and Britain were collaborating on a new kind of airliner that would fly at twice the speed of sound and shrink the globe. But there was just one thing they hadn’t counted on: Boeing and its gargantuan 747 jumbo jet. The double-decker airliner wouldn’t break the sound barrier, but its vast size compared to anything else in the skies helped drop the cost of long-haul air travel, opening it up to the people in a way Concorde could never hope to do.

Boeing was already having a pretty good time selling its 707 jetliner, but Pan American World Airways boss Juan Trippe wanted something special for his passengers, and he approached the aircraft manufacturer with a request for a plane that could carry twice as many passengers as its bread-and-butter long-haul model. In 1966, Trippe signed an order for 25 of the new passenger airliners. The first of these entered service in 1970, and the world would never be the same again.

Since then, more than 1,500 747s have left Boeing’s factory in Everett, Washington. Most spent their lives carrying passengers for airlines or carrying freight around the world. But some special variants have lived more exciting lives, fighting forest fires, carrying presidents—even ferrying space shuttles. The US Air Force uses a small fleet of E-4Bs as airborne doomsday control centers, and it even tried using one for ballistic missile defense, complete with a giant laser poking out its nose. More outrageous (stillborn) proposals even wanted to use 747s as mobile cruise missile launchers or as airborne aircraft carriers for little jet fighters.

The 747’s long career has seen it fly billions of miles, carrying billions of passengers, but it has also had its share of tragedies. In 1977, a pair of 747s (one KLM, one Pan Am) crashed into each other on the runway at Tenerife’s airport. In 1983, the USSR shot down a Korean Air Lines 747 after mistaking it for a US spy plane. Terrorist bombs destroyed two 747s mid-flight—an Air India 747 in 1985 and a Pan Am 747 in 1988—and several more had been hijacked in the 1970s. Other disasters resulted from poor maintenance or human error. Terrible as those incidents were, they should be seen in context: 61 747s (out of 1,540) have been lost since 1970, more than half of which came without any loss of life—and jumbos are estimated to have carried more than 3.5 billion passengers since 1970.

On a personal note, the 747 has been a pretty important aircraft in my life. When my family moved from South Africa to the UK in the late 1970s, it was onboard a jumbo jet. And I’m pretty sure the same is true for my move to the US back in 2002. This past summer I crossed the Atlantic in 747s twice, most memorably sitting in seat 1A on one occasion.

If this post has you hankering to spend some time airborne in a jumbo, fret not; although no US passenger carriers still operate the big bird, several hundred remain in service with other airlines, most notably British Airways and Lufthansa. And if you happen to be an oligarch or Saudi prince, Boeing will happily build you your own 747-8—but don’t expect it to be cheap!

Listing image by Mike Kane/Bloomberg/Getty Images


“Vote out” congresspeople who won’t back net neutrality, advocates say

Democrats vs. Republicans.

Some supporters of net neutrality are focusing their attention on Congress and vowing to vote out lawmakers who won’t join a legislative effort to reinstate net neutrality rules.

“If they don’t vote for net neutrality, let’s vote them out,” says the website launched yesterday by advocacy group Fight for the Future, which also organized recent protests.

The website lists which senators have and haven’t supported a plan to use the Congressional Review Act (CRA) to stop the repeal of net neutrality rules. The rules, repealed by the Federal Communications Commission last month, prohibit Internet service providers from blocking or throttling Internet content or prioritizing content in exchange for payment.

Sen. Ed Markey (D-Mass.) announced the CRA resolution shortly after the FCC vote, and 29 senators including Markey have pledged to support it. All of the 29 are members of the Democratic caucus.

The resolution needs only a simple majority in the Senate, which could happen if every member of the Democratic caucus signs on and some Republicans join them.

“House and Senate leaders cannot block a CRA with majority support from coming to the floor,” the “Vote for Net Neutrality” website explains. “Net neutrality is not a partisan issue, but many Republicans in Congress have been on the wrong side of it recently. That’s changing. In the Senate, we may only need one more Republican to vote for the CRA to get it passed, given that Susan Collins (R-Maine) opposed the FCC plan and signaled openness to a CRA.”

Messages to lawmakers

Fight for the Future urges net neutrality supporters to send tweets to lawmakers that say, “I will not be voting for anyone who doesn’t vote for the CRA to save #NetNeutrality.” The group is also gathering phone numbers from people who want text message updates about their representatives’ voting records on net neutrality before this year’s congressional elections.

US Senate Minority Leader Chuck Schumer (D-N.Y.) has pledged to force a vote on reinstating net neutrality rules.

The House of Representatives has a bigger Republican majority, so about 20 Republicans would have to join Democrats for the CRA to be successful.

“That’s harder, but several Republican representatives have already criticized the FCC’s vote, and given that more than 75 percent of Republican voters support net neutrality, it’s doable,” Fight for the Future said.

The activists also want to stop Congress from passing legislation that only nominally protects net neutrality. Rep. Marsha Blackburn (R-Tenn.) is pushing an “Open Internet Preservation Act” that would ban blocking and throttling but would allow ISPs to create paid fast lanes and would prohibit state governments from enacting their own net neutrality laws. Blackburn’s bill would also prohibit the FCC from imposing any type of common carrier regulations on broadband providers.

“Lobbyists are foaming at the mouth at the chance to ram through bad legislation that permanently undermines net neutrality,” Fight for the Future co-founder Tiffiniy Cheng said in an announcement yesterday.


Comcast fired 500 despite claiming tax cut would create thousands of jobs

Comcast reportedly fired about 500 salespeople shortly before Christmas, despite claiming that the company would create thousands of new jobs in exchange for a big tax cut.

Comcast apparently tried to keep the firings secret while it lobbied for the tax cut that was eventually passed into law by the Republican-controlled Congress and signed by President Trump in late December. The Philadelphia Inquirer revealed the Comcast firings this week in an article based on information from an anonymous former employee, Comcast documents, and other sources in the company.

The former employee who talked to the Inquirer “could not be identified because of a nondisclosure agreement as part of a severance package,” the article said. The Inquirer headline notes that Comcast was able to implement the firings “quietly,” avoiding any press coverage until this week.

Ars asked Comcast today if all 500 fired employees had to sign those nondisclosure agreements, but we didn’t receive an answer. We also asked why the firings were necessary given that the tax cut was supposed to create more Comcast jobs, and we asked if Comcast has specific plans to create jobs in other areas.

Comcast gave us this statement but offered no further details: “Periodically, we reorganize groups of employees and adjust our sales tactics and talent. This change in the Central Division is an example of this practice and occurred in the context of our adding hundreds of frontline and sales employees. All these employees were offered generous severance and an opportunity to apply for other jobs at Comcast.”

A Comcast spokesperson also confirmed the firings to the Inquirer.

“Thousands of new direct and indirect jobs”

The firings happened around December 15. On December 20, Comcast announced that, because of the pending tax cut and recent repeal of net neutrality rules, it would give “special bonuses” of $1,000 to more than 100,000 employees and invest more than $50 billion in infrastructure over the next five years.

“With these investments, we expect to add thousands of new direct and indirect jobs,” Comcast said at the time.

We examined Comcast’s investment claims in an article on December 21. As it turns out, Comcast’s annual investments already soared during the two-plus years that net neutrality rules were on the books, and the $50 billion amount could be achieved if those investments simply continued increasing by a modest amount.
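To get a rough sense of why, consider some purely illustrative figures (ours, not Comcast’s disclosed numbers): if annual capital spending were, hypothetically, about $9.5 billion and kept growing at a modest 3 percent per year, five years of it would already top $50 billion:

$$ 9.5 \times \left(1 + 1.03 + 1.03^2 + 1.03^3 + 1.03^4\right) \approx 9.5 \times 5.31 \approx \$50.4\ \text{billion}. $$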


New measurement confirms: The ozone layer is coming back

Each year’s ozone hole is a little bit different.

The Montreal Protocol, which went into effect in 1989, is a rare instance of a global agreement to solve a global problem: the release of vast quantities of ozone-destroying chemicals into the atmosphere. In the decades since, however, changes in ozone have been small and variable, making it hard to tell whether the protocol is making any difference.

But evidence has been building that the ozone layer is recovering, and a new paper claims to have directly measured the ozone hole gradually filling back in.

CFCs and ozone

During the 1970s and ’80s, evidence had been building that a class of industrial chemicals, the chlorofluorocarbons (CFCs), was damaging the ozone layer, a region of the stratosphere rich in this reactive form of oxygen. Ozone is able to absorb UV light that would otherwise reach the Earth’s surface, where it’s capable of damaging DNA. But the levels of ozone had been dropping, which ultimately resulted in a nearly ozone-free “hole” above the Antarctic.

The ozone hole spurred countries and companies into action. As companies developed replacements for CFCs, countries negotiated an international agreement that would limit and phase out their use. The Montreal Protocol codified that agreement, and it is widely credited with reducing (though not eliminating) the CFCs in our atmosphere.

But determining whether the protocol is having the desired effect on the ozone layer has been challenging. Ozone is naturally generated in the stratosphere at a very slow rate, and the amount of destruction that takes place over the Antarctic varies from year to year. Hints of a recovery have often been followed by years in which ozone levels drop again. Recovery has been so slow, in fact, that it’s possible to find people who claim the whole thing was a scam—and even a conspiracy designed to test whether it was possible to create a similar agreement for greenhouse gases.

The challenges of getting definitive measurements are legion. To begin with, CFCs aren’t the only chemicals that can react with ozone, so tying the changes to them is tricky. In addition, the weather has a big influence on ozone’s destruction. The atmosphere over Antarctica picks up the CFCs during the Southern Hemisphere summer as they’re brought down by winds from the mid-latitudes. Over the winter, strong winds block off a variable-sized region above the pole where the hole develops.

In addition to winds, temperature also has a large effect on the rate of the ozone-destroying reactions, which take place on ice crystals.

As a result of all these factors, the amount of ozone destroyed and the physical size of the hole vary from year to year. While there have been some strong hints of the ozone hole’s recovery, scientists have ended up arguing over whether they’re statistically significant.

Tracking chemicals over the Antarctic

Which brings us to the new research. Early last decade, NASA launched the Aura satellite, which was specifically designed to track the chemical composition of the atmosphere. Two researchers from NASA’s Goddard Space Flight Center, Susan Strahan and Anne Douglass, have now analyzed a dozen years of Aura data to directly tie ozone recovery to the drop in CFCs.

To begin with, the team found a proxy for the amount of chemicals that arrived over Antarctica during the Southern Hemisphere summer. Nitrous oxide is also brought in on the winds, and unlike CFCs, it doesn’t undergo reactions over the course of the winter. Thus, it can serve as a tracer for the amount of CFCs that are available each winter.

Strahan and Douglass use some additional chemistry to demonstrate that nitrous oxide levels correlate with the amount of CFC present. They focus on measuring chlorine levels at the end of the winter season. By this point, in addition to destroying ozone, the chlorine in the CFCs would have reacted with methane and produced hydrochloric acid. Thus, the amount of hydrochloric acid present should also relate to the starting concentration of CFCs.

This data showed that the overall levels of CFCs and nitrous oxide were highly correlated but that the ratio of the two was slowly shifting as the total amount of chlorine was trending downward. That’s a key piece of data, as it shows the Montreal Protocol is working as intended—fewer CFCs are being delivered to Antarctica each year.

The next step is to compare those numbers to ozone concentrations within the hole. This was done simply by comparing the concentrations of ozone over 10-day windows in the early and late winter. Rather than providing an absolute measure of the amount of ozone present, the comparison gives an indication of what percentage of ozone is being destroyed each winter.

Overall, the researchers show that the amount of chlorine over the Antarctic is declining at a rate of 25 parts per trillion each year (about 0.8 percent), although weather-driven variability means that it increases in some years. That decline has produced a 20-percent drop in ozone destruction over the period studied.
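As a quick sanity check on those figures (our arithmetic, not a number taken from the paper), a decline of 25 parts per trillion per year that amounts to 0.8 percent per year implies a total chlorine burden of roughly

$$ \frac{25\ \text{ppt/yr}}{0.008\ \text{/yr}} \approx 3{,}100\ \text{ppt} \approx 3.1\ \text{ppb}, $$

which is consistent with the few parts per billion of chlorine usually quoted for the stratosphere.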

The good news is that the year-to-year variability in the chlorine present paralleled that of the annual ozone destruction, which both validates the overall science behind ozone destruction and nicely matches detailed chemical models of ozone’s sensitivity to CFCs.

It’s hard to summarize these findings any better than the authors do: “All of this is evidence that the Montreal Protocol is working—the chlorine is decreasing in the Antarctic stratosphere, and the ozone destruction is decreasing along with it.”

Geophysical Research Letters, 2017. DOI: 10.1002/2017GL074830  (About DOIs).


Meltdown and Spectre: Here’s what Intel, Apple, Microsoft, others are doing about it

The Meltdown and Spectre flaws—two related vulnerabilities that enable a wide range of information disclosure from every mainstream processor, with particularly severe flaws for Intel and some ARM chips—were originally revealed privately to chip companies, operating system developers, and cloud computing providers. That private disclosure was scheduled to become public some time next week, enabling these companies to develop (and, in the case of the cloud companies, deploy) suitable patches, workarounds, and mitigations.

With researchers figuring out one of the flaws ahead of that planned reveal, that schedule was abruptly brought forward, and the pair of vulnerabilities was publicly disclosed on Wednesday, prompting a rather disorderly set of responses from the companies involved.

There are three main groups of companies responding to the Meltdown and Spectre pair: processor companies, operating system companies, and cloud providers. Their reactions have been quite varied.

What Meltdown and Spectre do

A brief recap of the problem: modern processors perform speculative execution. To maximize performance, they try to execute instructions even before it is certain that those instructions need to be executed. For example, the processors will guess at which way a branch will be taken and execute instructions on the basis of that guess. If the guess is correct, great; the processor got some work done without having to wait to see if the branch was taken or not. If the guess is wrong, no big deal; the results are discarded and the processor resumes executing the correct side of the branch.

While this speculative execution does not alter program behavior at all, the Spectre and Meltdown research demonstrates that it perturbs the processor’s state in detectable ways. This perturbation can be detected by carefully measuring how long it takes to perform certain operations. Using these timings, it’s possible for one process to infer properties of data belonging to another process—or even the operating system kernel or virtual machine hypervisor.
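To make the timing side channel concrete, here is a minimal sketch (ours, not the researchers’ proof-of-concept code) of the flush-and-reload style measurement these attacks rely on: flush a cache line, let the code under attack run, then time a reload of that line to learn whether it was touched.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp (GCC/Clang, x86-64) */

static char probe[4096];

/* Time a single load with the timestamp counter; a short time means the
 * cache line was already present, i.e. somebody touched it recently. */
static uint64_t time_load(volatile const char *p)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*p;                               /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    probe[0] = 1;                           /* warm the line */
    printf("cached:  %llu cycles\n", (unsigned long long)time_load(probe));

    _mm_clflush(probe);                     /* evict the line from the cache */
    _mm_mfence();
    printf("flushed: %llu cycles\n", (unsigned long long)time_load(probe));
    return 0;
}
```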

This information leakage can be used directly; for example, a malicious JavaScript in a browser could steal passwords stored in the browser. It can also be used in tandem with other security flaws to increase their impact. Information leakage tends to undermine protections such as ASLR (address space layout randomization), so these flaws may enable effective exploitation of buffer overflows.

Meltdown, applicable to virtually every Intel chip made for many years, along with certain high-performance ARM designs, is the easier to exploit and enables any user program to read vast tracts of kernel data. The good news, such as it is, is that Meltdown also appears easier to robustly guard against. The flaw depends on the way that operating systems share memory between user programs and the kernel, and the solution—albeit a solution that carries some performance penalty—is to put an end to that sharing.

Spectre, applicable to chips from Intel, AMD, ARM, and probably every other processor on the market that offers speculative execution, is more subtle. It encompasses both a trick that abuses array bounds checks to read memory within a single process, which can be used to attack the integrity of virtual machines and sandboxes, and cross-process attacks that use the processor’s branch predictors (the hardware that guesses which side of a branch will be taken and hence controls the speculative execution). Systemic fixes for some aspects of Spectre appear to have been developed, but protecting against the whole range of attacks will require modification (or at least recompilation) of at-risk programs.
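As an illustration of the array-bounds trick (a sketch of the widely published “variant 1” pattern, not code lifted from the paper): the function below looks safe, but if an attacker first trains the branch predictor with in-range values of x and then passes an out-of-range one, the processor may speculatively perform both loads anyway, leaving a footprint in the cache that depends on a byte it should never have read; the timing probe shown earlier can then recover that byte.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];           /* legitimately accessible data                 */
size_t  array1_size = 16;
uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */

/* The bounds check guards the access architecturally, but while the check
 * is still unresolved, speculative execution can run both loads with an
 * out-of-range x, pulling array2[array1[x] * 4096] into the cache.          */
uint8_t victim_function(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * 4096];
    return 0;
}
```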

Intel

So, onto the responses. Intel is the company most significantly affected by these problems. Spectre hits everyone, but Meltdown only hits Intel and ARM. Moreover, it only hits the highest performance ARM designs. For Intel, virtually every chip made for the last five, ten, and possibly even 20 years is vulnerable to Meltdown.

The company’s initial statement, produced on Wednesday, was a masterpiece of obfuscation. It contains many statements that are technically true—for example, “these exploits do not have the potential to corrupt, modify, or delete data”—but utterly beside the point. Nobody claimed otherwise! The statement doesn’t distinguish between Meltdown—a flaw that Intel’s biggest competitor, AMD, appears to have dodged—and Spectre, and hence it fails to convey the unequal impact on the different companies’ products.

Follow-up material from Intel has been rather better. In particular, this whitepaper describing mitigation techniques and future processor changes to introduce anti-Spectre features appears sensible and accurate.

For the Spectre array bounds problem, Intel recommends inserting a serializing instruction (lfence is Intel’s choice, though there are others) in code between testing array bounds and accessing the array. Serializing instructions prevent speculation: every instruction that appears before the serializing instruction must be completed before anything after it can begin to execute. In this case, it means that the test of the array bounds must have been definitively calculated before the array is ever accessed; no speculative access to the array that assumes the test succeeds is allowed.
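Applied to the bounds-check example shown earlier, Intel’s recommendation amounts to something like the following sketch (our placement of the fence, using the compiler intrinsic for lfence; real code would let compilers and developers decide exactly where the fence pays off):

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>    /* _mm_lfence (GCC/Clang/MSVC, x86) */

extern uint8_t array1[];
extern size_t  array1_size;
extern uint8_t array2[];

uint8_t victim_function_fenced(size_t x)
{
    if (x < array1_size) {
        /* Serializing barrier: earlier instructions, including the bounds
         * comparison, must complete before anything after the fence starts,
         * so no speculative out-of-bounds load of array1[x] can happen.    */
        _mm_lfence();
        return array2[array1[x] * 4096];
    }
    return 0;
}
```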

Less clear is where these serializing instructions should be added. Intel says that heuristics can be developed to figure out the best places in a program to include them but warns that they probably shouldn’t be used with every single array bounds test; the loss of speculative execution imposes too high a penalty. One imagines that perhaps array bounds that come from user data should be serialized and others left unaltered. This difficulty underscores the complexity of Spectre.

For the Spectre branch prediction attack, Intel is going to add new capabilities to its processors to alter the behavior of branch prediction. Interestingly, some existing processors that are already in customer systems are going to have these capabilities retrofitted via a microcode update. Future generation processors will also include the capabilities, with Intel promising a lower performance impact. There are three new capabilities in total: one to “restrict” certain kinds of branch prediction, one to prevent one HyperThread from influencing the branch predictor of the other HyperThread on the same core, and one to act as a kind of branch prediction “barrier” that prevents branches before the “barrier” from influencing branches after the barrier.

These new restrictions will need to be supported and used by operating systems; they won’t be available to individual applications. Some systems appear to already have the microcode update; everyone else will have to wait for their system vendors to get their act together.

The ability to add this capability with a microcode update is interesting, and it suggests that the processors already had the ability to restrict or invalidate the branch predictor in some way—it was just never publicly documented or enabled. The capability likely exists for testing purposes.

Intel also suggests a way of replacing certain indirect branches in code with “return” instructions. Patches to enable this have already been contributed to the gcc compiler. Return instructions don’t get branch predicted in the same way, so they aren’t susceptible to the same information leak. However, it appears that they’re not completely immune to branch predictor influence; a microcode update for Broadwell processors or newer is required to make this transformation a robust protection.

This approach would require every vulnerable application, operating system, and hypervisor to be recompiled.
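The transformation being described is what has since become widely known as a “retpoline” (return trampoline). Here is a conceptual x86-64 sketch of the idea, written from public descriptions rather than from Intel’s or GCC’s actual patches: an indirect jump through a register is replaced with a call/ret pair so the indirect branch predictor is never consulted, and any speculation that follows the return predictor’s guess is parked in a harmless spin.

```c
/* Conceptual replacement for "jmp *%rax" (GNU assembler syntax, x86-64). */
asm(
    ".globl jump_thunk_rax          \n"
    "jump_thunk_rax:                \n"
    "    call 1f                    \n"   /* pushes the address of 2: as the return address */
    "2:  pause                      \n"   /* speculation down the predicted return lands here... */
    "    lfence                     \n"   /* ...and is stopped from going any further */
    "    jmp  2b                    \n"
    "1:  mov  %rax, (%rsp)          \n"   /* overwrite the saved return address with the real target */
    "    ret                        \n"   /* architecturally, this jumps to *%rax */
);
```

Because the return predictor’s guess always points back into the capture loop, mispredicted speculation never runs attacker-chosen code.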

For Meltdown, Intel is recommending the operating system level fix that first sparked interest and intrigue late last year. The company also says that future processors will contain some unspecified mitigation for the problem.

AMD

AMD’s response has a lot less detail. AMD’s chips aren’t believed susceptible to the Meltdown flaw at all. The company also says (vaguely) that it should be less susceptible to the branch prediction attack.

The array bounds problem has, however, been demonstrated on AMD systems, and for that, AMD is suggesting a very different solution from Intel’s: operating system patches. It’s not clear what these might be—while Intel released awful PR, it also produced a good whitepaper, whereas AMD so far has only offered PR—and the fact that its advice contradicts both Intel’s and, as we’ll see later, ARM’s is very peculiar.

AMD’s behavior before this all went public was also rather suspect. AMD, like the other important companies in this field, was contacted privately by the researchers, and the intent was to keep all the details private until a coordinated release next week, in a bid to maximize the deployment of patches before revealing the problems. Generally that private contact is made on the condition that any embargo or non-disclosure agreement is honored.

It’s true that AMD didn’t actually reveal the details of the flaw before the embargo was up, but one of the company’s developers came very close. Just after Christmas, an AMD developer contributed a Linux patch that excluded AMD chips from the Meltdown mitigation. In the note with that patch, the developer wrote, “The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.”

It was this specific information—that the flaw involved speculative attempts to access kernel data from user programs—that arguably led to researchers figuring out what the problem was. The message narrowed the search considerably, outlining the precise conditions required to trigger the flaw.

For a company operating under an embargo, with many different players attempting to synchronize and coordinate their updates, patches, whitepapers, and other information, this was a deeply unhelpful act. While there are certainly those in the security community who oppose this kind of information embargo and prefer to reveal any and all information at the earliest opportunity, given the rest of the industry’s approach to these flaws, AMD’s action seems, at the least, reckless.

ARM

The inside of the ExoKey, with its Atmel ARM-based CPU.

ARM’s response was the gold standard: plenty of technical detail in a whitepaper, which ARM let stand on its own, without Intel’s misleading PR or AMD’s vague imprecision.

For the array bounds attack, ARM is introducing a new instruction that provides a speculation barrier; similar to Intel’s serializing instructions, the new ARM instruction should be inserted between the test of array bounds and the array access itself. ARM even provides sample code to show this.

ARM doesn’t have a generic approach for solving the branch prediction attack, and, unlike Intel, it doesn’t appear to be developing any immediate solution. However, the company notes that many of its chips already have mechanisms for invalidating or temporarily disabling the branch predictor, and it says operating systems should use them.

ARM’s very latest high-performance design, the Cortex-A75, is also vulnerable to Meltdown attacks. The solution proposed is the same one Intel suggests, and the same one that Linux, Windows, and macOS are known to have implemented: change the memory mapping so that kernel memory mappings are no longer shared with user processes. ARM engineers have contributed patches to Linux to implement this for ARM chips.
