Google Wanted to Prohibit Workers From Organizing by Email

https://www.wired.com/story/google-wanted-prohibit-workers-organizing-by-email


Two days before 20,000 Google employees temporarily walked off the job, CEO Sundar Pichai threw his support behind the protest, assuring employees that Google’s leadership backed their right to organize. But three weeks later, Google’s lawyers took a different stance, asking the US government to overturn Obama-era protections that supported employees’ right to organize using their work email.

Google’s lawyers made the request to the National Labor Relations Board in November, as part of an ongoing case against Google, unrelated to the walkout, according to Bloomberg, which obtained case filings through a Freedom of Information Act request.

The Labor Board expanded employees’ right to use their workplace email to organize in 2014, as part of a case called Purple Communications that restricted employers from punishing workers who used their company email to circulate petitions, plan walkouts, or try to form a union.

Three weeks after the November walkout, Google’s lawyers urged the labor board to undo that precedent in a filing defending Google as part of the ongoing NLRB case. That wasn’t the first time Google’s attorneys had asked the board to roll back those protections; Bloomberg reports that the company made a similar request to the NLRB in May 2017.

Activists within Google see the move—coming so soon after Pichai’s message of support—as another sign of Google’s insincerity, particularly since walkout planners relied on an email list joined by more than 1,000 workers. The Nov. 1 walkout spanned dozens of Google offices around the globe.

Google did not immediately respond to a request for comment from WIRED. In a statement to Bloomberg, a Google spokesperson said, “We’re not lobbying for changes to any rules.” The spokesperson told Bloomberg that Google’s argument that the Obama-era protections should be overturned was “a legal defense that we included as one of many possible defenses” against the NLRB’s allegations, which the company considers meritless.

“In an email to all of Google, Sundar assured us that he and Google’s leadership supported the Walkout. But the company’s requests to the National Labor Relations Board tell a different story, showing that Google would rather pay lawyers to change national labor law than do what’s right,” the walkout organizers wrote in a statement posted on Twitter. “If these protections are rolled back, Google will be complicit in limiting the rights of working people across the United States, not just us.”

In the Google case before the NLRB, board staff accused Google in 2017 of violating federal labor law by restricting worker rights and threatening employees. The complaint alleges that Google acted unlawfully in 2015, when the company issued a warning to an employee based on comments made via email on an internal Google+ forum “regarding workplace diversity and social justice initiatives, workplace policy viewpoints, and regarding employees’ rights to express their opinion on G+,” Bloomberg reports.



via Wired Top Stories http://bit.ly/2uc60ci

January 24, 2019 at 02:09PM

DeepMind AI AlphaStar goes 10-1 against top ‘StarCraft II’ pros

https://www.engadget.com/2019/01/24/deepmind-ai-starcraft-ii-demonstration-tlo-mana/



Image: Blizzard Entertainment

After laying waste to the best Go players in the world, DeepMind has moved on to computer games. The Google-owned artificial intelligence company has been fine-tuning its AI to take on StarCraft II and today showed off its first head-to-head matches against professional gamers. The AI agent, named AlphaStar, managed to pick up 10 wins against StarCraft II pros TLO and MaNa in two separate five-game series that originally took place back in December. After racking up 10 straight losses, the pros finally scored a win against the AI when MaNa took on AlphaStar in a live match streamed by Blizzard and DeepMind.

The pros and AlphaStar played their games on the map Catalyst using a slightly outdated version of StarCraft II that was designed to enable AI research. While TLO said during the stream that he felt confident he could top the AI agent, AlphaStar won all five games, deploying a completely different strategy each time.

AlphaStar had a bit of an advantage going up against TLO. First, the matches were played as Protoss, which is not TLO’s main race in the game. Additionally, AlphaStar sees the game differently than the average player. While its view is still restricted by the fog of war, it essentially sees the entire map zoomed out. That means it can take in information about all visible enemy units as well as its own base at once, and it doesn’t have to split its attention between different parts of the map the way a human player would.

AlphaStar 'Starcraft II' vision

Still, AlphaStar didn’t enjoy the kind of advantages one might imagine an AI would have over a human. While TLO and MaNa are physically limited in how many clicks they can perform per minute in a way that an AI isn’t, AlphaStar actually performed fewer actions per minute than its human opponents and significantly fewer than the average pro player. The AI also had a reaction time of about 350 milliseconds, which is slower than most pros. While the AI took its time, it made smarter and more efficient decisions that gave it an edge.

AlphaStar APM
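For a rough sense of how constraints like these might be wired into an agent loop, here is a minimal Python sketch of an action-rate cap plus a reaction delay. The APM_LIMIT value, class name, and method names are illustrative assumptions rather than DeepMind’s actual interface; only the roughly 350 millisecond reaction figure comes from the article above.

```python
import collections

# Illustrative sketch only: the APM budget and queue layout are assumptions,
# not DeepMind's implementation. The ~350 ms figure is from the article.
APM_LIMIT = 280           # max actions allowed in any rolling 60-second window (assumed cap)
REACTION_DELAY_S = 0.350  # an observation only becomes actionable after this long

class ThrottledAgent:
    """Wraps a policy callable so its actions respect an APM budget and a reaction delay."""

    def __init__(self, policy):
        self.policy = policy
        self.action_times = collections.deque()  # timestamps of recent actions
        self.pending = collections.deque()        # (time it becomes actionable, observation)

    def observe(self, observation, now):
        # An observation can't influence play until the reaction delay has elapsed.
        self.pending.append((now + REACTION_DELAY_S, observation))

    def maybe_act(self, now):
        # Drop action timestamps older than a minute, then check the rolling APM budget.
        while self.action_times and now - self.action_times[0] > 60.0:
            self.action_times.popleft()
        if len(self.action_times) >= APM_LIMIT:
            return None                           # over budget: stay idle this tick
        if not self.pending or self.pending[0][0] > now:
            return None                           # nothing actionable yet
        _, observation = self.pending.popleft()
        action = self.policy(observation)
        self.action_times.append(now)
        return action

# Example: a trivial policy that just echoes what it saw.
agent = ThrottledAgent(lambda obs: f"react to {obs}")
agent.observe("enemy scout spotted", now=0.0)
print(agent.maybe_act(now=0.1))   # None: still inside the reaction delay
print(agent.maybe_act(now=0.4))   # acts: the delay has elapsed and the budget allows it
```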

AlphaStar’s expertise in the game comes primarily from an intensive training program that DeepMind calls the AlphaStar League. DeepMind started by training a neural network on replays of human games. The agent built from that human data was then forked to create new players, and those competitors were matched up against one another in a series of games. The forks of the original agent were encouraged to take on specialties and master different parts of the game, producing distinct playstyles.

The AlphaStar League ran for one week, with each match producing new information that helped refine the AI’s strategy. Over the course of that week, AlphaStar played the equivalent of 200 years’ worth of StarCraft II. By the end of the league session, DeepMind selected the five individual agents it determined had the least exploitable strategies and the best chance to win. It tossed those five agents at TLO and pulled off a five-game sweep.

AlphaStar League
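For readers curious what a league-style setup looks like in code, here is a heavily simplified Python sketch of the fork-and-self-play idea described above. The Agent class, the Elo-style win model, and the skill-bump “updates” are stand-in assumptions; the real AlphaStar League trains deep reinforcement learning policies on actual StarCraft II games, not numbers.

```python
import copy
import random

class Agent:
    """Toy stand-in for a policy network; real AlphaStar agents are deep RL policies."""
    def __init__(self, name, skill=0.0):
        self.name = name
        self.skill = skill   # placeholder for learned parameters
        self.wins = 0
        self.games = 0

    def beats(self, opponent):
        """Elo-style coin flip standing in for an actual StarCraft II match."""
        p_win = 1.0 / (1.0 + 10 ** ((opponent.skill - self.skill) / 400.0))
        return random.random() < p_win

def run_league(seed_agent, forks=8, rounds=200, keep=5):
    """Fork the imitation-trained seed into a league and run random self-play pairings."""
    league = [copy.deepcopy(seed_agent) for _ in range(forks)]
    for i, agent in enumerate(league):
        agent.name = f"fork_{i}"
        agent.skill += random.gauss(0, 50)   # nudge each fork toward its own specialty

    for _ in range(rounds):
        a, b = random.sample(league, 2)
        winner, loser = (a, b) if a.beats(b) else (b, a)
        for agent, won in ((winner, True), (loser, False)):
            agent.games += 1
            agent.wins += int(won)
        winner.skill += 1.0   # crude stand-in for a learning update
        loser.skill += 0.5

    # Keep the agents with the strongest records (a proxy for "least exploitable").
    league.sort(key=lambda ag: ag.wins / max(ag.games, 1), reverse=True)
    return league[:keep]

if __name__ == "__main__":
    seed = Agent("imitation_seed", skill=1000.0)   # the real seed is trained on human replays
    for agent in run_league(seed):
        print(agent.name, f"{agent.wins}-{agent.games - agent.wins}")
```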

Seeing as the AI managed to top a pro using his off race, DeepMind decided to put AlphaStar up against a Protoss expert. For the matchup, DeepMind tapped MaNa — a two-time champion of major StarCraft II tournaments. AlphaStar got another week of training before the competition, including the knowledge gained from taking on a pro-level player in TLO. The commentators noted the AI played significantly more like a human in its matches, ditching some of its more erratic and unexpected actions while fine-tuning its decision making and style.

Just like TLO before him, MaNa put up a valiant effort but fell short in every match against the AlphaStar agents. The AI once again won all five of its matches against its human opponent, finishing 10-0 in its first 10 matches against professional players.

Following the broadcast of the recorded matches, DeepMind introduced a new version of AlphaStar that MaNa took on in a live match. The agent that played the live game didn’t have the benefit of the overhead camera and instead had to decide where to place its focus the same way a human would. DeepMind said AlphaStar was up to speed with the new view of the game within a week, but the team didn’t have the opportunity to test the AI against a human pro before it took on MaNa live on the stream.

With the new restriction on AlphaStar’s view, MaNa was able to exploit some of the AI’s shortcomings and pulled out the win, dealing AlphaStar its first loss against pro players.

While AlphaStar’s level of expertise and unmatchable pace of learning are bad news for any StarCraft pro thrown in its path, gamers may find useful strategies to borrow from the AI and its 200 years of accumulated knowledge of the game. The full set of replays of AlphaStar’s matches against TLO and MaNa is available on DeepMind’s website if you’d like to study the strategies the AI developed.

via Engadget http://www.engadget.com

January 24, 2019 at 02:27PM

Sleep Deprivation Causes Alzheimer’s Protein to Build up in the Brain

http://blogs.discovermagazine.com/d-brief/?p=31223

Our brains, like everything else about our bodies, change as we age. But the changes that happen to the Alzheimer’s brain are not part of the normal aging process. Researchers have struggled to understand the underlying mechanisms that lead some people to develop Alzheimer’s. Now a new study published in Science has linked poor sleep to an abnormal build-up of an Alzheimer’s-promoting protein in the brain.
The telltale pathological signs of Alzheimer’s disease are high levels of beta-amyloid and tau proteins in the brain.

via Discover Main Feed http://bit.ly/1dqgCKa

January 24, 2019 at 04:04PM

AMD 3rd Gen Ryzen Matisse CPU Benchmark Surfaces on 12-Core Processor

https://www.legitreviews.com/amd-3rd-gen-ryzen-matisse-cpu-benchmark-surfaces-on-12-core-processor_210316


Posted by

Nathan Kirsch |

Thu, Jan 24, 2019 – 10:48 AM

Twitter user TUM_APISAK recently discovered an AMD engineering processor on UserBenchmark that might just be an AMD 3rd Gen Ryzen processor. The detailed results from this user run show an AMD Myrtle board running a single 12-core, 24-thread processor with a 3.4 GHz base clock and a 3.6 GHz average clock. Under the processor details, the listing identifies an AMD engineering sample with the model string 2D3212BGMCWH2_37/34_N, which is believed to be a Ryzen ‘Matisse’ 7nm CPU (check out the decoder).

3rd Gen Ryzen 12-core

The benchmark results showed that this 12-core, 24-thread AMD processor scored 116 points on the single-core test, 374 points on the quad-core test and ultimately 1,741 points on the multi-core test. The Intel Core i7-8700K 6-core, 12-thread processor averages around 138 points on the single-threaded test and 1,073 points on all available cores. The newer Intel Core i9-9900K 8-core, 16-thread processor scores 146 points on a single core and 1,503 points on all cores. The AMD Ryzen 7 2700X 8-core, 16-thread processor scored 126 points on one core and then 1,304 points on all cores. Keep in mind that this is just one run of this 3rd Gen Ryzen 12-core processor versus averages from over 227,000 runs on the 8700K, 13,000 runs on the 9900K and 61,000 runs on the 2700X!
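To put those UserBenchmark numbers side by side, here is a quick Python snippet that computes how the engineering sample compares with each chip. It uses only the scores quoted above, so the percentage deltas are simple arithmetic rather than additional leaked data.

```python
# Scores quoted above: (single-core points, multi-core points)
scores = {
    "AMD 3rd Gen Ryzen ES (12C/24T)": (116, 1741),
    "Intel Core i7-8700K (6C/12T)": (138, 1073),
    "Intel Core i9-9900K (8C/16T)": (146, 1503),
    "AMD Ryzen 7 2700X (8C/16T)": (126, 1304),
}

es_single, es_multi = scores["AMD 3rd Gen Ryzen ES (12C/24T)"]
for name, (single, multi) in scores.items():
    single_delta = 100.0 * (es_single - single) / single   # engineering sample vs. this chip, single-core
    multi_delta = 100.0 * (es_multi - multi) / multi       # engineering sample vs. this chip, multi-core
    print(f"{name:34s} single {single:3d} ({single_delta:+6.1f}%)  multi {multi:4d} ({multi_delta:+6.1f}%)")
```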

3rd Gen Ryzen Memory Ladder

Other results from the same test run reveal some interesting bits of information. For example, only one stick of DDR4 memory appears to have been used, which might limit performance. The UserBenchmark System Memory Latency Ladder test shows the latency of the L1, L2, and L3 caches, and the decline in performance at 64MB could indicate this 3rd Gen Ryzen processor has at least 32MB of L3 cache.
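A latency-ladder test of this sort is typically a pointer chase over increasingly large working sets, with measured latency jumping each time the set outgrows a cache level. The Python sketch below illustrates the idea only: it assumes nothing about UserBenchmark’s actual implementation, its byte accounting is very rough, and interpreter overhead swamps the absolute numbers (a real test would be written in C).

```python
import random
import time

def pointer_chase_latency(size_bytes, hops=200_000):
    """Time a random pointer chase over a working set of roughly size_bytes."""
    n = max(size_bytes // 8, 2)              # ~8 bytes per list slot (very rough accounting)
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):                        # build one random cycle to defeat prefetching
        nxt[order[i]] = order[(i + 1) % n]
    idx, start = 0, time.perf_counter()
    for _ in range(hops):
        idx = nxt[idx]
    return (time.perf_counter() - start) / hops * 1e9   # ns per hop, interpreter overhead included

# Latency should step up as the working set outgrows each cache level.
for mb in (1, 4, 16, 32, 64):
    print(f"{mb:3d} MB working set: {pointer_chase_latency(mb * 1024 * 1024):6.1f} ns/hop")
```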

Ryzen 3rd Generation

That said, the multi-core benchmark results are exciting to say the least, and they show that AMD is bringing more cores to the AM4 platform. Do we need 12-core, 24-thread processors on mainstream platforms? Maybe not, but software developers will eventually take advantage of all those cores, and then you will want them.

via Legit Reviews Hardware Articles http://bit.ly/2BUcaU4

January 24, 2019 at 10:48AM

Scientists Turned a Regular Fidget Spinner Into a Centrifuge That Separates Blood

https://gizmodo.com/scientists-turned-a-regular-fidget-spinner-into-a-centr-1831996597


Image: Drew Angerer (Getty Images)

Kids are already over fidget spinners after adults rapidly made them uncool, but a team of scientists in Taiwan has found a nifty way to repurpose these small toys. They turned them into inexpensive centrifuges that could allow health workers in impoverished areas to carry out certain blood tests with ease.

Centrifuges are used to separate out the main components of blood: plasma and blood cells. Plasma can then be tested to confirm health conditions like HIV, viral hepatitis, and malnutrition. This separation is usually done with relatively expensive, electrically powered devices that can spin fast and long enough to create the needed centrifugal force. But researchers at National Taiwan University wondered if there was a low-tech way to accomplish the same thing. Fidget spinners, funnily enough, weren’t the first toy they tested.

“In the beginning, we tried to use the Beyblade burst to replace a centrifuge,” lead author Steve Chen told Gizmodo, referring to one of the toylines based on the popular Japanese manga and anime Beyblade. “However, the rotation speed wasn’t sufficient to separate blood. Then we jumped to use fidget spinners to do the tests.”

The tests they came up with were relatively simple. First, they put small samples of blood into three slender tubes and taped each tube to a different arm of a store-bought spinner. Then they spun the spinner as you typically would, waiting for it to stop on its own before flicking it again, and kept spinning until they could see a decent amount of the distinctive yellowish plasma.

On average, it took around four to seven minutes for plasma to separate out, with three to five finger flicks needed. Tests showed that, on average, about 30 percent of the total plasma in a sample had been filtered out through the fidget spinning. But the filtered plasma was also 99 percent pure plasma.
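The physics behind the separation is relative centrifugal force (RCF), which scales with the square of the spin rate and the radius of the tube’s path. The back-of-the-envelope Python sketch below uses the standard RCF formula with an assumed radius and a few assumed spin rates, since the article doesn’t quote the spinner’s actual measured specs.

```python
def relative_centrifugal_force(radius_cm, rpm):
    """Standard RCF formula: multiples of g at a given radius (cm) and spin rate (rpm)."""
    return 1.118e-5 * radius_cm * rpm ** 2

# Assumed values for illustration only; the spinner's real radius and rpm aren't given here.
radius_cm = 3.0          # distance from the spinner's hub to the taped tube
for rpm in (500, 1000, 2000):
    g_force = relative_centrifugal_force(radius_cm, rpm)
    print(f"{rpm:5d} rpm at {radius_cm} cm -> about {g_force:7.1f} x g")
```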

To test their devices a step further, they laced the blood with a protein belonging to HIV-1, the most common type of the virus. And when they screened the filtered plasma with a paper detection test looking for that specific protein, they were able to confirm the presence of the virus. All in all, the fidget centrifuge had done its job well enough.

Their findings were published in December in Analytical Chemistry.

Chen said that cheaper, low-tech centrifuges would be a boon to doctors and hospitals in areas where medical resources are limited. You wouldn’t have to worry about keeping blood samples preserved for long periods of time before they can be transported to major testing centers. And the added speed in getting back results should make people more willing to go through with testing.

“My personal interest is to develop diagnostic systems for use in resource-limited areas, so I always tell my students to imagine that they will do all the tests in the desert and they can only carry a backpack with them,” said Chen.

As it turns out, this isn’t the first time scientists have tried to create a low-tech centrifuge. In 2017, bioengineers at Stanford University were able to create and successfully test out a paper centrifuge with materials totaling 20 cents. The design of the paperfuge was also based on an ancient spinning toy, the whirligig.

Given the recent fidget spinner fad, though, there should be plenty of discarded spinners to retrofit into centrifuges. Chen noted that his four-year-old son has three fidget spinners of his own.

Chen and his team are already trying to test out their spinner centrifuges in the African country of Malawi, conducting blood tests on site. They’re also testing custom-made, 3D-printed, handheld devices that should be even more efficient, low-tech centrifuges. They hope to publish research on those devices later this year.

[Analytical Chemistry via the American Chemical Society]

via Gizmodo https://gizmodo.com

January 24, 2019 at 09:15AM

Cool: Guy Takes Stroll Atop World’s Deepest Lake With Crystal Clear Ice On Top

https://geekologie.com/2019/01/cool-guy-takes-stroll-atop-worlds-deepes.php


This is a six-minute video of a man walking around on Lake Baikal (the world’s deepest lake) in Siberia while it’s covered by a nearly 6-inch sheet of crystal clear ice. In his own words while I swish my feet around in the shower and pretend I’m walking on water, but am really just trying to make sure all the pee goes down the drain:

“Walk on the incredibly transparent, crystal clear ice. It felt like I was standing on the water or walking on a very fragile glass. Despite this, I knew it’s safe to be there as thickness of ice was about 15cm

I could clearly see the bottom of the lake, stones, fish. All of this made me feel like I was looking into a fairyland, which never existed anywhere apart from dreams. Such a miracle. Iced Baikal is something really extraordinary and mesmerizing”

Obviously, with a max depth of 1,642 meters (5,387 feet), he’s not looking at the bottom of the deepest portion of the lake, but at some of its much, much shallower waters toward the shore. And speaking of shallow– “You should never judge a book by its cover.” Exactly, especially since *removing book jacket* TA-DA! “It’s actually a nudie magazine.” Works great everywhere there isn’t somebody looking over your shoulder.

Keep going for the whole video.

Thanks to Jacoby, who agrees it would have been even more nuts to see a body under that ice.


via Geekologie – Gadgets, Gizmos, and Awesome https://geekologie.com/

January 23, 2019 at 04:30PM