A Code-Obsessed Novelist Builds a Writing Bot. The Plot Thickens

https://www.wired.com/story/code-obsessed-novelist-builds-writing-bot-the-plot-thickens


A carven image of Ganesha, the elephant-headed Hindu god who is known as both “the remover of obstacles” and the patron of poetry, greets visitors from the front door of the Craftsman-style home in north Oakland, just a few houses south of the Berkeley border, that Chandra shares with his wife, Melanie Abrams (also a novelist, also a creative writing teacher at Berkeley), and his two daughters.

The word granthika is a Sanskrit noun that means “narrator, relator” or “one who understands the joints or division of time.” It is closely related to another noun, grantha, which means “an artificial arrangement of words, verse, composition, treatise, literary production, book in prose or verse, text,” and the root stem granth, which means “to fasten, tie or string together, arrange, connect in a regular series, to string words together, compose (a literary work.)”

But what Granthika is really intended to be is the remover of obstacles that hinder the stringing together of an artificial arrangement of words in a harmonious, meaningful fashion. The core challenge is that this goal knocks heads with one of the most stubborn problems in computer science: teaching a machine to understand what words mean. The design document describing Granthika that Chandra wrote in airports and hotels while on tour for Geek Sublime called for a “reimagining of text.” But that’s easier written than done.

“I discovered that attaching knowledge to text is actually a pretty hard problem,” Chandra says.

Computer scientists have been trying to slice this Gordian knot for decades. Efforts like the Text Encoding Initiative and the Semantic Web ended up loading documents with so many tags meant to explain the purpose and function of each word that the superstructure of analysis became overwhelmingly top-heavy. It was as if you were inventing an entirely new language just to translate an existing one. Software applications built on top of these systems, says Chandra, were “difficult and fragile to use.”

One sleepless night, Chandra had an epiphany. He realized, he says, that the key to representing text and semantics in a way that avoided the problems of the traditional approaches lay in treating text as a “hypergraph.”

With traditional graphs, Chandra says, diverting into mathematical terrain that most of the writers who use Granthika will likely never dare enter, “you only have attachments between one node and the next and the next. But a hypergraph can point to many objects, many nodes.” A hypergraph approach would, he realized, enable an organizational system that illuminated multiple connections between people, places, and things, without getting bogged down in efforts to define the essential meaning of each element. The goal of processing a text document into a multi-nodal hypergraph of connections became Granthika’s central operating principle.
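To make the hypergraph idea concrete, here is a minimal sketch in Python; it is illustrative only, not Granthika’s or HypergraphDB’s actual data model, and the names are invented. The point is simply that a hyperedge can join any number of nodes at once, where an ordinary graph edge joins exactly two.

```python
from dataclasses import dataclass

# Illustrative sketch only: a hyperedge connects any number of nodes,
# unlike an ordinary graph edge, which connects exactly two.

@dataclass(frozen=True)
class Node:
    name: str            # a character, a place, an event...

@dataclass(frozen=True)
class HyperEdge:
    label: str           # what the connection represents, e.g. "scene"
    members: frozenset   # any number of Nodes, not just a pair

# One hyperedge ties a character, a location, and an event together,
# without the software having to know what any of those things "mean".
asha = Node("Asha")
harbour = Node("the harbour")
storm = Node("the night of the storm")

scene = HyperEdge("scene", frozenset({asha, harbour, storm}))
print(sorted(n.name for n in scene.members))
```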

The underlying software is built on an adaptation of an open source database program called HypergraphDB, created by a Montreal-based programmer, Borislav Iordanov. Chandra first encountered Iordanov’s work when he started Googling around to see if any existing software fit the description of what he had conceived in his head. Chandra emailed Iordanov some technical questions; Iordanov responded by asking him what it was, exactly, that he wanted to do, and ended up so intrigued by Chandra’s answers that he joined the nascent project.

So how does it work, practically? In version one of Granthika, which launched in November, writers engage in a running dialogue with the software. The writer tells Granthika that so-and-so is a “character,” that such-and-such is an “event,” that this event happened at this time or at this location with this character, and so on. This becomes the rule set, the timeline, the who-what-where-when-how.

Behind the scenes, under the surface of the document, Granthika is a database of connecting links between these text objects. If, in the middle of the creative process, the writer wants to review a particular character’s trajectory, she can click on that character’s name and go directly to a timeline of all the events or scenes that that character is involved with.

“So I’m writing a novel,” Chandra says, “and I’m mentioning a character on page 416 and she is a minor character that I last mentioned on page 80. Previously, to know about that character I have to open up my note-taking program and then search through the notes. With Granthika, I can press one key stroke and go to her page, as it were, and see all my notes about her and hopefully soon pictures that I’ve attached, and so on.”

The breakthrough is that the computer doesn’t have to understand, at any sentient level, who the character is; it just has to know what things that character is connected to.
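A similarly hedged sketch of that lookup, again with invented names and a toy in-memory list rather than Granthika’s real database: because every event records which entities it links, a character’s timeline is just the events she appears in, sorted by when they happen.

```python
from __future__ import annotations
from dataclasses import dataclass

# Toy stand-in for the "database of connecting links": each event links
# a time, a place, and any number of characters (all names invented).

@dataclass
class Event:
    when: int                 # page, chapter, or timestamp: anything orderable
    where: str
    who: tuple[str, ...]
    summary: str

events = [
    Event(80, "the harbour", ("Asha",), "Asha is introduced unloading crates"),
    Event(212, "the archive", ("Asha", "Dev"), "Asha and Dev find the old ledger"),
    Event(416, "the harbour", ("Asha",), "Asha returns to the harbour at night"),
]

def timeline(character: str) -> list[Event]:
    """Everything this character is connected to, in story order."""
    return sorted((e for e in events if character in e.who), key=lambda e: e.when)

for e in timeline("Asha"):
    print(e.when, e.where, "-", e.summary)
```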

Creating a hypergraph database that links multiple elements in a novel like Sacred Games is a process-intensive computing task that Iordanov says wouldn’t have been possible until relatively recently. It is also a realization of what some of the earliest observers of electronic text theorized was a crucially defining aspect of computer-mediated, globally networked technology—the new ability to meaningfully link things together.

via Wired Top Stories https://ift.tt/2uc60ci

February 6, 2020 at 06:09AM

A $1 billion initiative aims to bring EV chargers to highways and rural areas

https://www.engadget.com/2020/02/06/chargepoint-natso-ev-chargers-highways-rural-areas/

While Tesla, Electrify America and others technically have nationwide EV charging networks, they don’t really provide full coverage — many rural areas are far from any kind of charger infrastructure. ChargePoint believes it can help close that gap, though. It’s teaming with NATSO on a $1 billion effort to bring EV chargers to over 4,000 travel centers and truck stops (which NATSO represents) by 2030, with a particular focus on highways and rural areas. This could both spur EV adoption in rural towns and help with long-distance travel for everyone, ChargePoint said.

The two allies hope to make use of both "public and private" cash to support their initiative, including Volkswagen settlement funds.

There’s not much mystery as to why ChargePoint and NATSO are willing to spend on EV chargers. ChargePoint could corner an underserved market and reap the rewards if and when electric cars dominate. For NATSO, meanwhile, this may be a matter of survival. Many travel centers and truck stops are built on the assumption that drivers will stop for gas — they could lose much of their business if people have few good reasons to make pit stops.

Source: ChargePoint

via Engadget http://www.engadget.com

February 6, 2020 at 09:42AM

Facebook and Venmo demand Clearview AI stops scraping their data

https://www.engadget.com/2020/02/06/facebook-venmo-cease-and-desist-clearview-ai/

Following Google and Twitter, Facebook has become the latest company to take legal action against controversial facial recognition startup Clearview AI. According to Buzzfeed News, the company sent a cease-and-desist letter to Clearview sometime this week, demanding that it stop taking data from Facebook and Instagram. "Scraping people’s information violates our policies, which is why we’ve demanded that Clearview stop accessing or using information from Facebook or Instagram," a spokesperson for the company told Buzzfeed News.

Before sending the letter, it appears Facebook had tried several different approaches to get Clearview to comply. According to CBS News, the social media giant sent multiple letters to Clearview, attempting to clarify its policies. CBS News says the company had also requested detailed information from Clearview about its practices and demanded that the startup stop using data from Facebook’s products.

Clearview came under intense scrutiny earlier this year when a report from The New York Times showed that the company has been scraping billions of images from websites like Facebook, Twitter and YouTube without consent to build out its facial recognition database. The startup works with more than 600 police departments across North America and claims its technology is 99.6 percent accurate in identifying individuals.

It’s unclear why Facebook took longer to take formal legal action against Clearview than other companies such as Google. One possible reason is that Facebook board member Peter Thiel was an early investor in the startup. The fact that Thiel has a relationship with both companies may have complicated Facebook’s response. In any case, we’ve reached out to Facebook for comment, and we’ll update this article when we hear back from the company.

CBS News reports Venmo recently sent a cease-and-desist letter to Clearview as well. In an interview with the broadcaster, Clearview CEO Hoan Ton-That said his company would challenge the letters by arguing it has a First Amendment right to publicly available information. "Google can pull in information from all different websites," he said, comparing Clearview’s product to Google’s search engine. "So if it’s public, you know, and it’s out there, it could be inside Google search engine, it can be inside ours as well." In a statement it issued this week, Google said Ton-That’s comparison was misleading. "Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt-out entirely," the company said.

Complicating the entire situation is that there aren’t any federal laws that regulate the use of facial recognition in the US. Some cities such as San Francisco have enacted partial bans of the technology, but there’s no consensus between different cities and states.

Source: CBS News, Buzzfeed News

via Engadget http://www.engadget.com

February 6, 2020 at 09:55AM

At Astra Space, failure is an option

https://arstechnica.com/?p=1651331

[Image gallery; all photos: Astra Space]

  • Astra Space performs a test on its rocket outside its headquarters in Alameda, Calif.
  • An overview of the company’s factory. Lunch tables at right, team offices in the middle, manufacturing at left. In the back, you can see rockets being assembled.
  • Three Rocket 3.0s in the Alameda factory.
  • An engine test in one of the indoor test facilities.
  • The full rocket is brought outside the testing area at Alameda, Calif.
  • A view inside a flight control room.
  • A closer view of the rocket during a pressurization test.
  • A view of a nearly complete rocket inside the factory.
  • Here are three copies of Rocket 3.0 in various states of readiness.

ALAMEDA, Calif.—Toward the end of an interview on Tuesday morning, Adam London finally came out and admitted it. “Honestly, we’re building a pretty boring rocket.”

This is by design, of course. His company Astra Space, which just emerged from stealth mode this week, does not want to build the sleekest or most modern of rockets. Rather, says London (the company’s co-founder and chief technology officer), Astra seeks to deliver the most bang for the buck to customers. To that end, Astra has developed a no-frills launch system. Even the company’s name for its newest rocket, “Rocket 3.0,” lacks pizzazz.

“This is not about making the best, most sexy rocket,” London said in a small conference room at the company’s headquarters in Alameda, California. “We want to make the simplest, most manufacturable rocket.”

Over the last five years, dozens of startups have emerged in the United States and around the world with flashy plans to build low-cost rockets to meet the rising demand for small-satellite launches. Some are more credible and well-funded than others. But among that crowd, Astra Space currently stands out for several reasons: They are moving fast, aim to be insanely cheap, and are rigorously following an iterative design process. Perhaps most importantly, they’re willing to fail.

When London and co-founder Chris Kemp started Astra Space in late 2016, the pair imagined two fundamental pathways to get stuff into low-Earth orbit cheaply. SpaceX has already very nearly perfected one of them by building the capable Falcon 9 rocket, which is highly reliable, brawny, and increasingly reusable. “It is an incredible engineering achievement,” London said of SpaceX’s workhorse rocket. “I remain in awe of what they have done.” For highly valuable large satellites and astronaut launches, the Falcon 9 provides an optimal solution.

But the Astra team believes there is another path, too. Over the last decade, a slew of new space startups and traditional players have begun to build smaller satellites, and these companies are looking for ever-cheaper rides into space and to specific orbits. These 50- to 150-kg satellites are, London says, almost disposable. Most lack extensive propulsion systems, and therefore they’ll only have a design lifetime of a few years before they get dragged back into Earth’s atmosphere. Astra Space has crafted what it sees as a solution for this: a rocket that is neither exquisite nor perfect. “We’re actually not shooting for 100 percent reliability,” London said. Instead, Astra is willing to trade a small amount of reliability for a big cost savings.

And it’s betting customers will as well.

Fast

London and Kemp met about five years ago, following an introduction by Robbie Schingler, co-founder of the satellite company Planet. As a former Chief Technology Officer at NASA, entrepreneur, and gifted fundraiser, Kemp was taken by London’s engineering work designing very small rockets. Under one grant from the US Defense Advanced Research Projects Agency (DARPA), London had designed a rocket to launch a single 3U CubeSat, weighing less than 5kg, into orbit. The rocket, which never launched, had a diameter of less than half a meter.

The more London and Kemp talked, the more they liked the idea of building a rocket using some of the technologies that London had honed over a decade at his small start-up, Ventions. These included an electric pump to pressurize rocket fuel before it enters the engine chamber, a lower-weight alternative to a turbopump. When the pair founded Astra Space in October 2016, London brought over about 10 employees from Ventions and some preliminary rocket engine concepts.

From there, Astra moved quickly, designing the first version of its booster, called Rocket 1.0, throughout 2017. At the same time, the team modified a test site literally next door at the former Alameda Naval Air Station, which had two large tunnels for jet engine testing. Here, throughout that first full year, they would perform hundreds of first-stage engine tests indoors.

[Image gallery; all photos: Astra Space]

  • This gallery shows scenes from the company’s launch facility in Kodiak, Alaska.
  • Here is a view of the launch of Rocket 1.0.
  • Another view of the launch of Rocket 1.0 in April 2018.
  • The business end of Rocket 1.0.
  • Rocket 2.0 launches in November 2018.
  • Overview of the spaceport.
  • It’s pretty up there.
  • But cold.

By the spring of 2018, the company was ready to launch its first rocket, which included five first-stage engines and a chunk of metal for the second stage. This rocket was never designed to reach orbit from its launch site at the Pacific Spaceport Complex in southern Alaska, on Kodiak Island. In fact, due to some components used, the first stage engines were only capable of firing for about 60 seconds. Kemp said this first flight’s primary goal was to not hurt anyone and, secondarily, to hopefully not destroy the launch site. The rocket ended up launching and performing reasonably well for its minute-long mission.

This experience gave the team confidence to refine its design for Rocket 2.0, which was developed during the summer of 2018 and launched in November of that year. This rocket had more components of a second stage, but it still lacked an engine, so it also could not put a payload into orbit. However, Astra hoped the rocket’s first stage would fire long enough for the rocket to breach the Kármán line, the internationally designated boundary of space 100km above the Earth’s surface. Disappointingly, due to an issue with a “speed controller,” the rocket did not make it that far, terminating flight early. Even so, Kemp said the mission met about 75 percent of its overall objectives.

Astra, which now has 170 employees, spent the entirety of 2019 designing and building Rocket 3.0. Kemp and London do intend for this version to reach orbit, and they have made significant changes to the overall design accordingly. Notably, London doubled the performance of the first stage engine, named Delphin after a Greek sea god, from 3,000 pounds of thrust at sea level to 6,000 pounds. (The upper stage engine is named Aether, after the pure “upper sky” air breathed by Greek gods.) Engineers also overhauled the avionics, switched to a “common dome” design between the liquid oxygen and kerosene propellant tanks, and more.

If all goes well, the first Rocket 3.0 will launch within “single digit weeks” from Alaska. The actual date will be determined by DARPA as part of its Launch Challenge to support rapid, reliable launch capabilities. Of the 18 teams that originally entered the contest—including major industry player Virgin Orbit and the now-bankrupt Vector—only Astra still has a chance to win the $12 million prize. Kemp said the first Rocket 3.0 has already completed a static fire test at a site south of Sacramento, the former Castle Air Force Base, but the rocket has not yet been shipped to Alaska. (The company plans to launch polar missions from Alaska. It also will likely lease a site in the Kwajalein Atoll from the US Army for equatorial and mid-latitude inclinations.)

If Astra makes it to orbit this year, it would do so remarkably fast for a private company developing a new, liquid-fueled rocket. SpaceX holds the current record, taking six years and four months from its founding to reaching orbit with its Falcon 1 rocket. Rocket Lab, the other startup with an orbital rocket, required more than 11 years. Other companies that may make orbital launch attempts this year include Virgin Orbit (founded December 2012) and Firefly (January 2014). Launching successfully any time before October would mean that Astra reached orbit in less than four years.

Listing image by Astra Space

via Ars Technica https://arstechnica.com

February 6, 2020 at 07:06AM

Someone used neural networks to upscale a famous 1896 video to 4k quality

https://arstechnica.com/?p=1651282

Arrival of a Train at La Ciotat is one of the most famous films in cinema history. Shot by French filmmakers Auguste and Louis Lumière, it achieved an unprecedented level of quality for its time. Some people regard its commercial exhibition in 1896 as the birth of the film industry. An urban legend—likely apocryphal—says that viewers found the footage so realistic that they screamed and ran to the back of the room as the train approached. I’ve embedded a video of the original film above.

Of course, humanity’s standards for realism have risen dramatically over the last 125 years. Today, the Lumière brothers’ masterpiece looks grainy, murky, and basically ancient. But a man named Denis Shiryaev used modern machine-learning techniques to upscale the classic film to 21st-century video standards.

The result is remarkable. Watching the upscaled version makes the world of our great-great-great-grandparents come to life. Formerly murky details of the train, the clothing, and the faces of the passengers now stand out clearly.

How did Shiryaev do it? He says he used commercial image-editing software called Gigapixel AI. Created by Topaz Labs, the package allows customers to upscale images by up to 600 percent. Using sophisticated neural networks, Gigapixel AI adds realistic details into an image to avoid making it look blurry as it’s scaled up.

As the name implies, neural networks are networks of neurons—mathematical functions that transform a set of input values into an output value. The key feature of neural networks is that they can be trained: if you have a bunch of example inputs whose “correct” outputs are known, you can tune the parameters of the network to make it more likely to produce correct answers. The hope is that this training will generalize—that once the network produces the right answers for inputs it has seen before, it will also produce good answers for inputs it hasn’t seen.
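As a toy illustration of that loop (a tiny NumPy network fit to a known function; it has nothing to do with Gigapixel AI’s actual architecture), the sketch below tunes a network’s parameters against known correct outputs and then asks it about inputs it never saw during training.

```python
import numpy as np

# Toy example: train a one-hidden-layer network to approximate y = x^2 on [-1, 1].
# The point is only the loop: known inputs, known "correct" outputs, and
# parameters nudged step by step to shrink the error between the two.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(256, 1))     # example inputs
y = x ** 2                                # the known correct outputs

w1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
w2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.1

for step in range(5000):
    h = np.tanh(x @ w1 + b1)              # hidden layer
    pred = h @ w2 + b2                    # network output
    err = pred - y                        # how wrong the network is
    # Backpropagate the error and take a small step downhill.
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ w2.T) * (1 - h ** 2)
    grad_w1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    w1 -= lr * grad_w1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2
    b2 -= lr * grad_b2

# If training went well, the network generalizes to inputs it has not seen:
test = np.array([[0.3], [-0.7]])
print(np.tanh(test @ w1 + b1) @ w2 + b2)  # should be close to 0.09 and 0.49
```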

To train a network, you need a database of examples where the right answer is already known. Sometimes AI researchers have to hire human beings to produce these right answers by hand. But for image upscaling, there’s a convenient shortcut: you start with high-resolution images and downsample them. The low-resolution images become your inputs and the high-resolution originals serve as the “correct” answers the network is aiming to produce.
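A hedged sketch of that shortcut using Pillow (the folder name and scale factor are placeholders, and this is not Topaz’s pipeline): keep each high-resolution frame as the target and hand the network a downsampled copy as its input.

```python
from pathlib import Path
from PIL import Image  # Pillow

SCALE = 4  # train, say, a 4x upscaler

def make_pair(path):
    hi = Image.open(path).convert("RGB")      # target: the original frame
    w, h = hi.size
    lo = hi.resize((w // SCALE, h // SCALE))  # input: downsampled copy (Pillow's default bicubic filter)
    return lo, hi

# "frames/" is a placeholder directory of high-resolution stills.
pairs = [make_pair(p) for p in sorted(Path("frames/").glob("*.png"))]
```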

“A neural network analyzes thousands of photo pairs to learn how details usually get lost,” Topaz Labs explains on their product page for Gigapixel AI. “The algorithm learns to ‘fill in’ information in new images based on what it has learned, effectively adding new detail to your photo.”

Show the neural network a low-resolution image of a face and it will figure out that it’s a face and fill in the right details for the subject’s eyes, nose, and mouth. Show the neural network a low-resolution brick building and it will add a suitable brick pattern in the high-res version.

[Image credit: Timothy B. Lee / Colorize Images / Denis Shiryaev]

An obvious next step would be to colorize the video. Neural networks can do that, too, using the same basic technique: start with a bunch of color photos, convert them to black and white, and then train a neural network to reconstruct the color originals.
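The same shortcut works for colorization, sketched here under the same caveats: the grayscale copy is the input and the original color frame is the answer the network must recover.

```python
from PIL import Image  # Pillow

def gray_color_pair(path):
    color = Image.open(path).convert("RGB")   # the "correct" answer
    gray = color.convert("L").convert("RGB")  # what the network is shown
    return gray, color
```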

I dropped a frame from Shiryaev’s video into the Colorize Images app for Android, which uses machine learning to automatically colorize images. As you can see, it does a pretty good job, correctly inferring that trees should be green, gravel should be a brownish color, and that men’s coats should be black. I would love to see someone with more time and better tools colorize Shiryaev’s upscaled version of the Lumière Brothers’ classic.

via Ars Technica https://arstechnica.com

February 4, 2020 at 05:08PM

Physicists determine the optimal soap recipe for blowing gigantic bubbles

https://arstechnica.com/?p=1651094

[Photo: Physicist Justin Burton (left) experiments with giant soap bubbles on Emory University’s Quad with graduate student Stephen Frazier.]
Everybody loves bubbles, regardless of age—the bigger the better. But to blow really big, world-record-scale bubbles requires a very precise bubble mixture. Physicists have determined that a key ingredient is mixing in polymers of varying strand lengths, according to a new paper in Physical Review Fluids. That produces a soap film able to stretch sufficiently thin to make a giant bubble without breaking.

Bubbles may seem frivolous, but there is some complex underlying physics, and hence their study has long been serious science. In the 1800s, Belgian physicist Joseph Plateau outlined four basic laws of surface tension that determine the structure of soapy films. Surface tension is why bubbles are round; that shape has the least surface area for a given volume, so it requires the least energy to maintain. Over time, that shape will start to look more like a soccer ball than a perfect sphere as gravity pulls the liquid downward (“coarsening”).
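That “least surface area for a given volume” claim is made precise by the isoperimetric inequality, a standard piece of geometry rather than a result of the new paper:

```latex
% Among all closed surfaces enclosing a volume V, the surface area A satisfies
A \;\ge\; (36\pi)^{1/3}\, V^{2/3},
% with equality only for the sphere. A cube of the same volume, for comparison,
% has A = 6\,V^{2/3}, roughly 24 percent more area and hence more surface energy.
```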

Bubbles and foams remain an active area of research. For instance, in 2016, French physicists worked out a theoretical model for the exact mechanism for how soap bubbles form when jets of air hit a soapy film. They found that bubbles only formed above a certain speed, which in turn depends on the width of the jet of air. If the jet is wide, there will be a lower threshold for forming bubbles, and those bubbles will be larger than ones produced by narrower jets, which have higher speed thresholds. That’s what’s happening, physics-wise, when we blow bubbles through a little plastic wand: the jet forms at our lips and is wider than the soapy film suspended within the wand.

In 2018, we reported on how mathematicians at New York University’s Applied Math Lab had fine-tuned the method for blowing the perfect bubble even further based on similar experiments with soapy thin films. They concluded that it’s best to use a circular wand with a 1.5-inch perimeter and gently blow at a consistent 6.9cm/s. Blow at higher speeds and the bubble will burst. Use a smaller or larger wand, and the same thing will happen.

But what about blowing gigantic bubbles or long, thin soap films that can span two stories? Justin Burton, co-author of the latest paper and a physicist at Emory University specializing in fluid dynamics, first got intrigued by the topic at a conference in Barcelona. He saw street performers producing giant bubbles about the diameter of a hula hoop and as long as a car.

He was especially intrigued by the shifting rainbow of colors on the bubbles’ surface. This effect is due to interference patterns, created when light reflects off the two surfaces of the film. For Burton, this was also an indication that the thickness of the soap was just a few microns, roughly equivalent to the wavelength of light. He was surprised that a soap film could remain intact when stretched so thin into a giant bubble and started doing his own experiments, both in the lab and his own backyard.
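The link between those colors and the film’s thickness is textbook thin-film interference, not a finding of the paper: light reflected from the front and back surfaces of the film interferes, and the reflection is brightest for wavelengths that satisfy the standard thin-film condition below.

```latex
% Constructive reflection from a soap film of thickness t and refractive index
% n (about 1.33), near normal incidence; the half-wavelength term accounts for
% the phase flip at the front surface:
2\,n\,t = \left(m + \tfrac{1}{2}\right)\lambda, \qquad m = 0, 1, 2, \dots
% Visible light has wavelengths of roughly 0.4 to 0.7 microns, so a film on the
% order of a micron thick picks out different colors as its thickness varies,
% producing the shifting bands of color.
```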

While perusing the open access Soap Bubble Wiki, he noticed that most of the favored recipes for bubble solution included a polymer—usually natural guar (a common thickening food additive) or a medical lubricant (polyethylene glycol).

Using those recipes as a guide, “We basically started making bubbles and popping them, and recorded the speed and dynamics of that process,” said Burton. “Focusing on a fluid at its most violent moments can tell you a lot about its underlying physics.”

The ultimate goal was to determine the perfect proportions for a bubble mixture to produce gigantic bubbles: something with a bit of stretch, but not too much, where the fluid flows a little, but not too much—in other words, the Goldilocks of bubble mixtures.

As Lissie Connors writes at Physics Buzz:

For their experiment, the researchers created various mixes of water, soap, and long-chain polymers to make their bubbles. Unfortunately, blowing a 100 m³ bubble is a poor use of lab space, and quite difficult to measure accurately, so the soap films were created using a cotton string, and the thickness was measured using infrared light. In addition to measuring the thickness, they also tracked the lifetime of each film.

Burton and his team concluded that it was the polymeric strands that were the key to producing giant bubbles, confirming the collective online wisdom. “The polymer strands become entangled, something like a hairball, forming longer strands that don’t want to break apart,” said Burton. “In the right combination, a polymer allows a soap film to reach a ‘sweet spot’ that’s viscous but also stretchy—just not so stretchy that it rips apart.”

The team also found that varying the length of the polymer strands resulted in a sturdier soap film. “Polymers of different sizes become even more entangled than single-sized polymers, strengthening the elasticity of the film,” said Burton. “That’s a fundamental physics discovery.”

You can find Burton’s giant bubble recipe in the sidebar. But be forewarned: there are some factors that can’t be controlled in a real-world setting (as opposed to Burton’s laboratory environment), like humidity levels.

DOI: Physical Review Fluids, 2020. 10.1103/PhysRevFluids.5.013304 (About DOIs).

via Ars Technica https://arstechnica.com

February 4, 2020 at 06:39PM

Decades of U.S. air quality improvements may be slowing, and these areas have it the worst

https://www.popsci.com/story/environment/air-pollution-gains-slow-report-2018/

Smog over LA used to be commonplace in the ’90s (steagnes06/Flickr)

For decades, America has made progress on air quality. With emission regulations and advances in clean air technologies, the days of smog so thick it burned your eyes and lungs are virtually over.

But even with our gains, air pollution still contributes to one in every 25 early deaths. And our progress seems to be leveling off. Last week, a new report by U.S. PIRG Education Fund and Environment America Research & Policy Center found that in 2018, one-third of Americans lived in places with more than 100 days of degraded air quality. That’s 108 million people breathing polluted air for over three months—and 35 million more than a similar report found for 2016. “We focused on 100 days because it’s just unacceptable that for more than three months these communities were exposed to such bad air pollution,” says coauthor Morgan Folger, the clean cars campaign director at Environment America Research & Policy Center. “No one should even experience one day.”

The pollutants: ozone and PM2.5

The report focused on ground-level ozone and fine particulate matter (PM2.5). While ozone in the upper atmosphere blocks harmful ultraviolet rays, the same gas lower down irritates our lungs. Nitrogen oxides and volatile organic compounds, released from tailpipes and fossil fuel power plants, react with heat and sunlight in the atmosphere to form ozone. That’s why on hot, windless days, cities are the most hazy—ozone forms the visible smog that many city dwellers are familiar with.

Most recent data available are from 2018 (Infographic by Sara Chodosh)

Fine particulate matter, sometimes just called soot, is made up of tiny particles less than 2.5 micrometers in diameter. The particles include organic compounds, combustion particles, and metals. PM2.5 also comes from burning fossil fuels—especially diesel-powered trucks—and other sources like brake pads and wildfires.

These powerful pollutants can contribute to respiratory illnesses, mental health conditions, and cancer, and have been tied to many other conditions. Children growing up breathing polluted air are vulnerable to impaired lung development and long term function. Pregnant women and the elderly are also especially susceptible.

The most polluted regions

For the recent analysis, Folger and her team used EPA data collected at air quality sensors installed across the United States. To see how the pollution levels overlapped with population, they used census data. As for what constitutes “degraded air quality,” the report uses concentrations that the EPA describes as “moderate” air quality. Folger says this level is the point at which those more sensitive to air pollution—including people with respiratory illnesses or children—begin to experience health effects.
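A hedged sketch of that counting step in Python (file and column names are assumptions, since the article doesn’t publish the report’s actual pipeline): take a table of daily Air Quality Index values per metro area, flag the days that fall in EPA’s “moderate” band or worse, and count them for 2018.

```python
import pandas as pd

# Hypothetical sketch: the file and column names ("CBSA" for metro area,
# "AQI" for the daily index) are assumptions, not taken from the report.
daily = pd.read_csv("daily_aqi_by_metro_area_2018.csv")

# EPA's "moderate" band starts at AQI 51, so flag any day above 50.
daily["degraded"] = daily["AQI"] > 50

# Count degraded-air days per metro area across 2018.
days_per_area = daily.groupby("CBSA")["degraded"].sum().sort_values(ascending=False)

over_100 = days_per_area[days_per_area > 100]  # the report's headline threshold
print(len(over_100), "areas with more than 100 days of degraded air")
```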

The report found that 89 urban areas and 12 rural counties had more than 100 days of polluted air in 2018. An additional 157 million Americans experienced 31 to 100 days of impaired air. And every state, even Alaska and Hawaii, had regions that suffered at least a month of diminished air quality. “I think it is really surprising that there were way more metropolitan areas and so many more people [impacted by pollution] this year than there were in 2016,” says Folger. “It kind of shocks you—a third of the population could be impacted by this.”

Many places that had a problem with one pollutant also had elevated concentrations of the other. However, there’s some variation. Particulate matter has a lot of different sources, explains Folger. High levels in western states could be due to the rise of catastrophic wildfires in recent years. In other states, proximity to coal-fired power plants might instead determine how much soot pollution there is.

Most recent data available are from 2018 (Infographic by Sara Chodosh)

Is air quality going backwards?

Don’t freak out and stay indoors just yet. “There are a lot of days that might be in the moderate range, but the vast majority of people really are not impacted by pollution at that level,” says Kevin Cromar, director of the air quality program at New York University’s Marron Institute of Urban Management, who was not involved with the report. Pollution levels at the lower end of what the EPA considers “moderate” are actually “pretty good air quality” for most people, says Cromar. However, he says there are still many health gains that we could make by addressing our pollution.

In the big picture, we’ve had immense progress in air quality, says Jason West, an atmospheric scientist at the University of North Carolina who was not involved with the report. And that’s largely thanks to the regulations we’ve enacted, such as the 1970 Clean Air Act. A 2018 study found that air quality improvements between 1990 and 2010 avoided approximately 35,800 particulate matter-caused deaths and 4,600 ozone-caused deaths.

But there are signs that this positive trend might be slowing down. “2018 had more days of pollution than each of the previous five years,” the authors of the new pollution report conclude. The American Lung Association’s most recent “State of the Air” report found a similar trend; between 2015 and 2017, more cities suffered from days of highly polluted air than between 2014 and 2016. Another study by researchers at Carnegie Mellon University reached a similar conclusion: while particulate matter pollution went down nationwide by 24 percent from 2009 to 2016, it rose by 5 percent from 2016 to 2018.

And while particulate matter is overall down from the previous decade, the same isn’t true for ozone. “In ozone, we haven’t seen that level of improvement,” says Cromar. “It’s remaining stubbornly high in most parts of the US.” As a recent report on air pollution-related deaths led by Cromar found, while the deaths attributable to particulate matter have gone down since 2008, the health impact of ozone pollution has remained about the same. And as populations in polluted areas grow, that means more people will be breathing that same level of ozone, leading to more pollution-related deaths.

The biggest polluters

Transportation—including passenger vehicles and shipping trucks—is a major contributor of air pollution. Vehicles burning gas and diesel release nitrogen oxides and volatile organic compounds into the air, which can form smog or contribute to particulate matter. While vehicles have become cleaner, some of those gains are offset by more people driving more miles every year. Other contributors include the fossil fuels we burn for power, smoke from wildfires, and industrial processes like chemical manufacturing.

As for the increase in air pollution in the past couple years, it’s impossible to say for sure what’s at play. West says that natural variability—such as in weather patterns, which can both disperse and concentrate pollutants—can make it hard to tell if there is actually an upward trend in emissions.

Folger has some suspicions. Driving miles are going up, and in the western states there’s been an increase in wildfires. She also says that the EPA has been lax in enforcing its own rules, failing to take action when states exceed pollutant thresholds.

To make matters worse, climate change is expected to further reduce air quality. Increased temperatures speed up the reaction that forms ozone. Water-stressed plants also release organic compounds that add to pollution as the climate warms. “We’ve seen 19 of the hottest years on record in the past two decades,” says Folger. “We’re definitely experiencing warmer and warmer days—that’ll mean that air pollution gets worse.” A 2017 study in Nature Climate Change estimated that, on an emissions trajectory like the one we’re on now, we’ll have an additional 43,600 ozone-caused deaths and 215,000 PM2.5-caused deaths in 2100.

We need global climate action to protect local air quality

While climate change threatens to make air quality even worse, the flip side is that climate action not only slows warming globally but also improves air quality locally. Mandates for zero-emissions vehicles, improved public transportation, and transitioning to renewable energy all achieve these dual goals. “If we take action to reduce greenhouse gas emissions, we reduce air pollution at the same time,” says West. “The air pollution benefits we would see would be immediate and local to where those actions take place.”

But the current administration is moving policy in the opposite direction, from weakening carbon emission rules for power plants to stopping states from setting their own tailpipe standards. “Those efforts are going to lead to worse air quality and health impacts,” says Cromar. “We need to be vigilant and adopt policies that improve air quality.”

via Popular Science – New Technology, Science News, The Future Now https://www.popsci.com

February 4, 2020 at 12:37PM