Meet Jonathan Albright, The Digital Sleuth Exposing Fake News

https://www.wired.com/story/shadow-politics-meet-the-digital-sleuth-exposing-fake-news

When we met in early March, Jonathan Albright was still shrugging off a sleepless weekend. It was a few weeks after the massacre at Marjory Stoneman Douglas High School had killed 17 people, most of them teenagers, and promptly turned the internet into a cesspool of finger pointing and conspiracy slinging. Within days, ultraconservative YouTube stars like Alex Jones had rallied their supporters behind the bogus claim that the students who survived and took to the press to call for gun control were merely actors. Within a week, one of these videos had topped YouTube’s Trending section.

Albright, the research director for Columbia University’s Tow Center for Digital Journalism, probes the way information moves through the web. He was amazed by the speed with which the conspiracies had advanced from tiny corners of the web to YouTube’s front page. How could this happen so quickly? he wondered.

When the country was hungry for answers about how people had been manipulated online, Jonathan Albright had plenty of information to feed them. With the midterm elections on the horizon, he’s working to preempt the next great catastrophe.


So that weekend, sitting alone in his studio apartment at the northern tip of Manhattan, Albright pulled an all-nighter, following YouTube recommendations down a dark vortex that led from one conspiracy theory video to another until he’d collected data on roughly 9,000 videos. On Sunday, he wrote about his findings on Medium. By Monday, his investigation was the subject of a top story on Buzzfeed News. And by Thursday, when I met Albright at his office, he was chugging a bottle of Super Coffee (equal parts caffeine boost and protein shake) to stay awake.

At that point, I knew Albright mainly through his work, which had already been featured on the front pages of The New York Times and The Washington Post. That, and his habit of sending me rapid-fire Signal and Twitter messages that cryptically indicated he’d discovered something new that the world needed to know. They came five at a time, loaded with screenshots, links to cached websites, and excerpts from congressional testimony, all of which he had archived as evidence in his one-man quest to uncover how information gets manipulated as it makes its way through the public bloodstream. This is how Albright has helped break some of the biggest stories in tech over the past year: by sending journalists a direct message late in the night that sounds half-crazy, but is actually an epic scoop—that is, if you can jump on it before he impatiently tweets it out.

So when I finally asked to meet Albright, the man who’s been conducting some of the most consequential and prolific research on the tech industry’s multitudinous screwups, I expected to find a scene straight out of Carrie Mathison’s apartment: yards of red string connecting thumbtacked photos of Mark Zuckerberg, Steve Bannon, and Vladimir Putin. At the very least, rows of post-docs tapping away on their Macbooks, populating Excel spreadsheets with tips to feed Albright their latest.

Instead, his office—if you can call it that—sits inside a stuffy, lightless storage space in the basement of Columbia University’s journalism school. The day we met, Albright, who looks at least a decade younger than his 40 years, was dressed in a red, white, and blue button-down, khakis, and a pair of hiking boots that haven’t seen much use since he moved from North Carolina to New York a little more than a year ago.

From a hole in the ceiling, two plastic tubes snaked into a blue recycling bin, a temporary solution to prevent a leaky pipe from destroying Albright’s computer. His colleague has brightened her half of the room with photos and a desk full of books. Albright’s side is almost empty, aside from a space heater and three suitcases he keeps at the ready as go-bags for his next international lecture. A faux window in the wall opens on yet another wall inside the basement, or as Albright calls it, “basically hell.”

And yet, out of this humble place, equipped with little more than a laptop, Albright has become a sort of detective of digital misdeeds. He’s the one who tipped off The Washington Post last October to the fact that Russian trolls at the Internet Research Agency reached millions more people on Facebook than the social media giant initially let on. It’s Albright’s research that helped build a bruising story in The New York Times on how the Russians used fake identities to stoke American rage. He discovered a trove of exposed Cambridge Analytica tools in the online code repository GitHub, long before most people knew the shady, defunct data firm’s name.

Working at all hours of the night digging through data, Albright has become an invaluable and inexhaustible resource for reporters trying to make sense of tech titans’ tremendous and unchecked power. Not quite a journalist, not quite a coder, and certainly not your traditional social scientist, he’s a potent blend of all three—a tireless internet sleuth with prestigious academic bona fides who can crack and crunch data and serve it up in scoops to the press. Earlier this year, his self-published, emoji-laden portfolio on Medium was shortlisted for a Data Journalism Award, alongside established brands like FiveThirtyEight and Bloomberg.

“He really is the archetype of the guy in the university basement, just cracking things wide open,” says David Carroll, a professor of media design at The New School, whose criticism of Facebook has made him a close ally.

But more than that, Albright is a creature of this moment, emerging at a time when the tech industry’s reflexive instinct to accentuate the positive has been exposed as not only specious but dangerously so. Once praised for inventing the modern world, companies like Facebook and Google are now more commonly blamed for all of the toxicity within it. Albright understood years before Silicon Valley or Washington did that technology was changing the media, and that the study of public discourse online was about to become one of the most important disciplines in the world.

In some ways, getting people to pay attention was the easy part. When the country was hungry for answers about how people had been manipulated online, Albright had plenty of information to feed them. But as keen as Albright’s insights have been, they came a little too late, years after propagandists sitting in St. Petersburg began messing with the US election. Raising awareness after the fact can only accomplish so much. With the midterm elections on the horizon and the 2020 races not far off, it’s just as important to predict what could go wrong next. “None of this is going to be solved, but once we understand more about the ecosystems, we can understand how to deal with it as a society,” Albright says.

The question is: Will everyone keep listening to him before it all goes wrong all over again?

For a long time no one cared about Albright’s research, until, all of a sudden, everyone did. One of his earliest projects, which looked at how journalists use hashtags, was rejected, Albright says, by “every frickin’ journal.” “No one cared about my work until it became political,” he adds with a shrug.

Lately Albright has been reevaluating his professional goals. “I can’t just be a first responder for propaganda,” he says.


Albright took a roundabout route to academia, which ended up paying off. After graduating from Oregon State University, he took a job at Yahoo and later worked as a contractor at Google, where he monitored Google News and search results. The work was tedious and grueling. “The people who have that job permanently? I don’t envy them,” he says.

But it also got him thinking about just how much humans affect the decisions algorithms make. “It fascinated me that people were solving these things. They were cleaning up or trying to fix what machines broke,” he says.

That realization propelled Albright on to New Zealand and a PhD program at the University of Auckland in 2010, a critical juncture for the world of tech. Just a year before, Twitter had begun hyperlinking hashtags, transforming what had been simply a static flourish into a functional tool for navigating the Wild West of social media. For the first time, users could click on a hashtag and see all the tweets that included it. This digital link allowed people to form entirely new communities online—including dark ones. Albright studied how this new feature helped people organize around events like the Occupy protests and the earthquake and tsunami in Japan in 2011. Hashtags, he believed, were the new front page, and were poised to transform both media and media consumption.

His inadvertent tumble into the maelstrom of American politics began late one night, shortly after the 2016 election. At the time, Albright was on sabbatical from his job as an assistant professor of media analytics at Elon University in North Carolina. He had been working on a Pew Research report on trolls and fake news online. Despite his research, Albright was as surprised as anyone by Donald Trump’s victory over Hillary Clinton, and in some vague effort to understand what had just happened, he pored over a Google spreadsheet filled with links to sites that spread false news throughout the campaign.

He wanted to see if there were any common threads among them. So, using open source tools, Albright spent the late hours of the night scraping the contents of all 117 sites, including neo-Nazi outlets like Stormfront and overtly conspiratorial sites named things like Conspiracy Planet. He eventually amassed a list of more than 735,000 links that appeared on those sites. Looking for signs of coordination, Albright set to work identifying the links that appeared on more than one shady website and found more than 80,000 of them. He dubbed this web of links the “Micro-Propaganda Machine” or #MPM for short. (For Albright, all the world’s a hashtag.)
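The overlap step Albright describes, finding the links that showed up on more than one of the 117 sites, boils down to a single counting pass once the per-site link lists have been scraped. Here is a minimal sketch, with placeholder site names and URLs:

```python
from collections import defaultdict

def shared_links(site_links):
    """Given {site: set of outbound URLs}, return the URLs that appear
    on more than one site, along with the sites that carried them."""
    carriers = defaultdict(set)
    for site, urls in site_links.items():
        for url in urls:
            carriers[url].add(site)
    return {url: sites for url, sites in carriers.items() if len(sites) > 1}

# Toy stand-in for the 117 scraped sites; domains and URLs are placeholders.
site_links = {
    "fake-news-site-a.example": {"https://www.youtube.com/watch?v=xyz", "https://www.nytimes.com/some-story"},
    "fake-news-site-b.example": {"https://www.youtube.com/watch?v=xyz", "https://shirts.example/anti-media-tee"},
}
print(shared_links(site_links))
```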

After about 36 hours of work, during which his software crashed dozens of times under the weight of all that data, he was able to map out these links, transforming the list into an impossibly intricate data visualization. “It was a picture of the entire ecosystem of misinformation a few days after the election,” Albright says, still in awe of his discovery. “I saw these insights I’d never thought of.”

And smack in the center of the monstrous web was a giant node labeled YouTube.

Albright’s first map uses scraped data to project the route of fake news across digital platforms


This work laid bare the full scope of misinformation online. “It wasn’t just Facebook,” Albright realized; the fake news industry had infiltrated the entire internet. Fake news sites were linking heavily to videos on YouTube that supposedly substantiated their messages. He found Amazon book referrals and Pinterest links, links to custom T-shirts on CafePress, and tons of links to mainstream media organizations like The New York Times. Far-right propagandists were sharing links to the mainstream media only to refute and distort their reporting.

It was this post that first caught the attention of Carole Cadwalladr, a British journalist at The Guardian best known for breaking the news about Cambridge Analytica surreptitiously harvesting the Facebook data of as many as 87 million users. Cadwalladr called Albright for a story she was working on about how fake news and conspiracy theories pollute Google search.

It was late in London where Cadwalladr is based, but she and Albright talked for hours about what he’d found. In that first conversation, Cadwalladr says, “He was just absolutely streets ahead of most other academics and journalists in having this sense of the bigger picture of what’s going on.” In fact, Albright was the first person who ever mentioned the name Cambridge Analytica to Cadwalladr. “When he was explaining how fake news works, he said there are companies like Cambridge Analytica that can track you around the internet,” Cadwalladr says. “Whenever I tell the Cambridge Analytica story, I always start with Jonathan and this night.”

Cadwalladr published Albright’s data visualization in full that December, kicking off what Albright described as an “avalanche” of attention. He still stores the hard copy of that day’s paper in his desk drawer, where it’s protected from the leaky ceiling.

But what neither he nor Cadwalladr knew then was that Albright’s research only addressed a fraction of the way the web distorted information. Buried in the bubbles of Albright’s fake news map were thousands of fingerprints of Russian trolls: ads they purchased, websites they created, events they planned. At the time, Facebook CEO Mark Zuckerberg still maintained that it was a “pretty crazy idea” to say that fake news had influenced voters on Facebook. Nearly a year would pass before Albright realized he was sitting on plenty of evidence showing that maybe that idea wasn’t so crazy after all. Zuckerberg eventually came around to that conclusion too.

By the summer of 2017, the media attention surrounding Albright’s research earned him a job offer at Columbia University, where he took on the role of research director at the Tow Center. Without hesitation, Albright packed up his hiking boots, shoved his belongings into storage, and moved to a one-bedroom apartment in Manhattan. His uptown apartment was more than 500 miles from his partner, who was still working at Elon University at the time, and several subway stops from Columbia’s tony on-campus housing, where most of his colleagues live and which remains in short supply for newcomers.

This new location was isolating, and left plenty of time for Albright’s obsessions. And so, on another late night, Albright found himself alone with his laptop again, chasing down the truth tech giants were hoping to hide.

Just a few weeks before, Facebook had published a blog post laying out the abridged version of what is now the subject of dozens of hours of congressional testimony and endless breaking news headlines. Beginning in 2015, the blog post read, Facebook sold 3,000 ads promoting “divisive social and political messages across the ideological spectrum” to Russia’s Internet Research Agency. The post didn’t include the names of the accounts, details about who was targeted, or much information at all.

Albright had already spent nearly a year parsing through his data on the fake news ecosystem, publishing posts on the ad-tracking tools fake news purveyors used and unearthing YouTube accounts behind a suspicious network of 80,000 videos that had been uploaded, thousands at a time. In his data dives, he’d seen plenty of interconnected Facebook accounts, but had no way of knowing who was creating them. When Facebook finally fessed up, Albright felt certain that if Russian trolls had conducted information warfare on Facebook, the wreckage of those battles was sitting in his spreadsheets.

Eventually, Facebook told Congress that the ads reached 10 million people, but Albright believed Facebook’s focus on ads was an intentional misdirection. What really counted, and what the company had said nothing about, was how many people the trolls reached in total on their phony Facebook Pages, not just through paid ads. That number would likely be much higher, he thought, and the people who followed these Pages would be far more influenced by those posts than any one ad.

He wanted to shift the focus of the argument to what he felt mattered, so Albright started digging. At the time, Facebook hadn’t released a list of the phony account names to the public, so Albright looked to names that had been leaked in the press. The Daily Beast had reported that a fake page called Being Patriotic was organizing pro-Trump rallies in Florida. CNN had found that a Russian page called Blacktivist had more Facebook likes than the real Black Lives Matter group. Within weeks, the names of six of the IRA’s 470 phony Facebook accounts had gotten out. For Albright, that was enough.

Using a loophole in a tool called CrowdTangle, which is owned by Facebook, he started collecting data on each account’s reach. He spent almost three days piecing together clues from different parts of CrowdTangle and reassembling them into a full picture. “It wasn’t a hack,” Albright told me coyly, “but it was a workaround.”

The work paid off: When he was finished, Albright found that content posted by those six pages alone could have been shared as many as 340 million times. That figure is a maximum that would require that every single follower of every single page actually see each piece of content those pages published, which is unlikely. Still, if just a sliver of the 470 accounts had gotten that much traction, Albright knew, the real number of people who had seen the IRA’s posts had to be far greater than the 10 million figure Facebook had acknowledged. He scraped the contents from as many posts as he could on both Facebook and Instagram, uploaded them to an online repository, and contacted The Washington Post. “Some of this was too important for public knowledge and access,” Albright said. “It couldn’t wait two years to get into some journal.”
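The article doesn’t spell out Albright’s exact arithmetic, but the ceiling it describes, in which every follower of every page sees every piece of content the page published, amounts to multiplying followers by posts for each page and summing. A back-of-the-envelope sketch with hypothetical figures:

```python
def reach_ceiling(pages):
    """Upper bound on exposures: assumes every follower of a page
    saw every piece of content that page published."""
    return sum(p["followers"] * p["posts"] for p in pages)

# Hypothetical numbers for illustration only, not Albright's data.
pages = [
    {"name": "Being Patriotic", "followers": 200_000, "posts": 500},
    {"name": "Blacktivist", "followers": 360_000, "posts": 700},
]
print(f"{reach_ceiling(pages):,} potential exposures at most")
```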

The story ran on October 5, 2017, under the headline: “Russian propaganda may have been shared hundreds of millions of times, new research says.”

An email from Facebook’s corporate communications team followed soon after, asking if Albright would speak to CrowdTangle CEO Brandon Silverman about his findings.

Albright described that conversation, which he agreed to keep confidential, as a “weird, awkward exchange.” He knew he’d struck a nerve. He also knew Facebook wasn’t going to let him get away with it. “I knew once the story came out they’d find whatever I did and break it—and they did,” he says.

Silverman says Albright spotted a bug in the CrowdTangle system, which he acknowledges was embarrassing for his young company. “My response probably involved some sort of act of hitting myself,” Silverman says of the story in the Post. But he was equally impressed with Albright’s work. “The reason he found it is because he was so deep inside the platform, which I think is great,” Silverman says. “Jonathan has been a really important and influential voice in raising a lot of awareness on misinformation, abuse, and bad actors. I think the work he’s doing is the sort of work we need more of.”

Within days, Facebook fixed the bug. CrowdTangle also changed the metrics it uses to more accurately estimate how many people may have seen a given post. Facebook had effectively cut off Albright’s little exploit. But they couldn’t claw back the data he’d already stashed away.

For Albright, the CrowdTangle findings were a seed that germinated, weed-like, far beyond Facebook. He spent late nights and early mornings studying what he’d found and soon realized that the very same memes and accounts he discovered there were popping up on other, less discussed, platforms. He collected IRA-linked ads on Instagram that Facebook hadn’t yet publicly reported. A reporter at Fast Company took notice, and afterward, Facebook discreetly added a last-minute bullet point to an earlier blog post, acknowledging that yes, the Russian trolls had abused Instagram, too.

He used a list of Twitter handles associated with the IRA to scour sites like Tumblr for suspicious content posted by accounts of the same name. He found plenty, including particularly egregious posts intended to stoke outrage about police brutality among black Americans. Albright helped Craig Silverman at Buzzfeed News break the story. A month later, Tumblr announced that it had indeed found 84 IRA accounts on the platform. Albright had already identified nearly every one of them.
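The story doesn’t describe Albright’s tooling here, but the basic cross-platform check, asking whether a handle from the Twitter list also exists as a same-named Tumblr blog, can be sketched as a simple HTTP probe. The handles below are placeholders, not the real IRA list:

```python
import requests

def matching_tumblr_blogs(handles):
    """Return the Tumblr URLs that respond for a list of known handles.
    A 200 response suggests a blog with that name exists; the content
    still needs manual review before drawing any conclusions."""
    hits = []
    for handle in handles:
        url = f"https://{handle}.tumblr.com/"
        try:
            if requests.get(url, timeout=10).status_code == 200:
                hits.append(url)
        except requests.RequestException:
            pass  # unreachable or blocked; skip
    return hits

# Placeholder handles; the real list came from congressional disclosures.
print(matching_tumblr_blogs(["example-handle-one", "example-handle-two"]))
```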

“You have a conversation with Jonathan, and you feel like you’ve just learned something that he realized six months or a year earlier,” Craig Silverman says.

I know the feeling. More often than not, a single message from Albright (“rabbit hole warning”) leads me to so many unanswered questions I’d never thought to ask. Like in late February, when Albright finished reading through the follow-up answers Facebook’s general counsel, Colin Stretch, sent to the Senate following his congressional testimony.

Albright realized then that over the course of hours of testimony and 32 pages of written responses, Stretch never once mentioned how many people followed Russian trolls on Instagram. That struck Albright as a whale of an oversight because, while the Russians posted 80,000 pieces of content on Facebook, they posted 120,000 on Instagram. (And yes, these are numbers Albright knows by heart.)

When, prompted by Albright, I asked Facebook this question in March, they said they hadn’t shared the number of people who followed Russian trolls on Instagram because they—a multibillion-dollar company that had already endured hours of congressional interrogation—hadn’t calculated that number themselves. Another time, Albright mentioned that Reddit still hosted live links to websites operated by the Internet Research Agency. Several of these accounts had already been deleted, but at least two were still live. When I asked Reddit about the accounts, the company suspended them within hours. (Disclosure: Conde Nast, which owns WIRED, has a financial stake in Reddit.)

Silverman, of Buzzfeed, describes Albright’s research as a “service” to the country. “If Jonathan hadn’t been scraping data and thinking about cross-platform flow and shares and regrams and organic reach and all these other pieces that weren’t being captured, we wouldn’t know about it,” he says.

But this work has come at a personal price. When I ask Albright what he does to blow off steam, he says he travels—for work. Nights and weekends are mostly spent in the company of his spreadsheets, which are often loaded up with fragments of humanity’s worst creations—like conspiracy theories about survivors of a high school shooting.

And, yet, that’s also what drives him. When the Parkland shooting happened, Albright says he was thinking about slowing down, and taking some time to plot his next steps. But he couldn’t help himself. “The fact that I was seeing Alex Jones reply to a teenager who had just survived a mass shooting, and he’s calling him a fake and a fraud, that’s just un-fucking-believable,” Albright says. “I’ve never seen that. Not like that.”

By late spring of this year, almost two years after he’d started this quest, it looked like even Albright was beginning to realize this life was unsustainable. Especially given the distance, Albright’s erratic schedule requires a lot of patience on his partner’s part. “You have to be pretty laid back and cool to be with someone who’s doing this kind of work,” he says. And this spring, Albright’s father died of cancer, requiring him to take some time off and make the cross-country trip home to Oregon. Recently, his typically busy Twitter feed has quieted down. He’s hard at work on a book proposal and thinking up ideas to pitch as part of Facebook’s new research initiative, which grants independent academics access to the company’s data.

Silverman, of CrowdTangle, says he hopes this initiative will help Facebook work more closely with people like Albright. “The really best case scenario is there will be a lot of Jonathans and a larger community of folks flagging these things and helping raise awareness,” he says.

Meanwhile, tech giants have begun taking some responsibility for the mess they made. Facebook and other tech companies have started making major changes to their ad platforms, their data-sharing policies, and their approaches to content moderation.

Albright knows that returning to writing academic papers that warn of some far off threat may not capture the public imagination or make the front page of the country’s leading newspapers. That’s particularly true given that tech companies, federal investigators, and vast swaths of the country are still looking in the rear-view mirror. But he’s ready to start looking forward and thinking holistically, not forensically, about the influence technology has on our lives.

“I can’t just be a first responder for propaganda,” Albright told me in May, several months after we met under the leaky pipes.

Yet, minutes later, he had a new discovery to share. The day before, Democrats on the House Intelligence Committee published all 3,500 ads Russian trolls bought on Facebook in the run-up to the election. Aside from Albright’s own collection, it was the most thorough look yet at how Russia had used social media to try to influence American voters. Albright couldn’t help but take a look. He compiled the ads into one 6,000-page PDF, and as he scanned the images he realized a lot of the ads were familiar. And then he noticed a series of ads unlike any of the ones he’d seen before. He thought I might find them interesting. He suggested I take a closer look.



Voting machine maker sold states systems with remote-access tools

https://www.engadget.com/2018/07/17/voting-machine-remote-access-elections/



Election Systems and Software (ES&S) has admitted that, over a span of six years, it sold election management systems that included remote-access software to multiple US states. The company said in a letter to Sen. Ron Wyden that it had included off-the-shelf pcAnywhere software on some machines, which it sold “to a small number of customers between 2000 and 2006,” Motherboard reports.

That admission raises a lot of questions about how seriously ES&S took its security, and it contradicts statements the company previously made to reporters that it had no knowledge of selling machines with remote-access capabilities. Lying to lawmakers can carry severe consequences if the deception is discovered, which gave ES&S a strong incentive to come clean in its letter to Wyden.

In 2006, at least 60 percent of US ballots were logged using ES&S election management systems. Those machines count all of a county’s ballots and in some cases are used to program all voting machines in a county.

Those machines with pcAnywhere installed on them also had modems which gave ES&S technicians access to the systems — as well as possibly opening the door to infiltrators. ES&S claimed in its letter that the modems were set up to only make outgoing calls, so it was up to election officials to grant the company’s technicians remote access. However, it’s unclear exactly how those connections between the company and the systems were secured. In one case in 2006, technicians spent hours remotely accessing an election management system to resolve a voting discrepancy issue in an election.

Hackers stole the pcAnywhere source code in 2006, letting them scrutinize the code for vulnerabilities and potentially act on those. Creator Symantec didn’t disclose the theft until 2012, when it urged those who were using the software to disable or uninstall it until the company could patch any security flaws. Around the same time, it emerged there was a vulnerability in pcAnywhere that let hackers access a system without password authentication. A security firm found that 150,000 computers with pcAnywhere installed were set up in a way that allowed direct access. It’s unclear whether any election officials installed patches for the vulnerabilities.

ES&S says it stopped installing pcAnywhere in December 2007, when new voting system standards took effect requiring that future voting systems include only software essential for voting and counting ballots.

The company told Wyden that including remote-access software was an accepted practice at the time among tech firms, “including other voting system manufacturers.” ES&S did not disclose where it had sold the machines including the software, only confirming that customers who had systems with pcAnywhere “no longer have this application installed.”

It’s not clear exactly when ES&S confirmed with its clients that the software had been removed, or how recently it was active on their machines (it was still in use in at least one case in 2011). Regardless of when pcAnywhere was removed, its mere existence on election management systems was fundamentally a security flaw.

News of the vulnerability is more pertinent than ever, following recent indictments of Russian hackers who allegedly tried to influence the 2016 presidential election; the indictments indicated that the hackers focused on companies that create election software. More than 20 percent of voters cast ballots on electronic voting machines in that election.

Wyden told Motherboard that he was waiting on answers to other questions he had posed to ES&S in March. The company said it would meet with him to discuss election security in private, but it declined to send a representative to a Senate Committee on Rules and Administration hearing on the issue last week.


Netflix’s Fast.com now measures upload speed and latency

https://www.engadget.com/2018/07/17/netflix-fast-com-upload-speed-latency/



In 2016, Netflix launched Fast.com, a simple, easy way for anybody to check their internet speeds. Now, the company has announced that it’s adding more information to the site. Fast.com will now let users see their connection’s upload speed and latency, with latency measured on both loaded and unloaded connections. So you’ll be able to see whether more traffic on your network affects your latency. “We’re adding these new measurements to Fast.com so that consumers will have a more comprehensive view of their internet connection speed at any given time,” Netflix said in a statement.
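Netflix hasn’t detailed its methodology, but the loaded-versus-unloaded idea is easy to approximate on your own: measure latency while the line is idle, then again while a large download saturates it. A rough sketch that uses TCP connect time as a stand-in for latency; the host and download URL are placeholders:

```python
import socket
import threading
import time
import urllib.request

def avg_tcp_rtt(host, port=443, samples=5):
    """Average time to open a TCP connection, a rough proxy for latency."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            total += time.monotonic() - start
    return total / samples

def compare(host="example.com", big_file="https://example.com/large-file.bin"):
    unloaded = avg_tcp_rtt(host)

    stop = threading.Event()
    def saturate():
        # Keep the downlink busy until told to stop.
        while not stop.is_set():
            try:
                urllib.request.urlopen(big_file, timeout=10).read()
            except OSError:
                break
    threading.Thread(target=saturate, daemon=True).start()

    time.sleep(2)  # let the transfer ramp up
    loaded = avg_tcp_rtt(host)
    stop.set()
    print(f"unloaded: {unloaded * 1000:.1f} ms, loaded: {loaded * 1000:.1f} ms")

if __name__ == "__main__":
    compare()
```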

Netflix also has app versions of the speed test for both Android and iOS. The streaming giant said that Fast.com has generated over half a billion speed tests since launching and its usage rate has doubled over the past seven months alone.


Roland’s latest smartphone mixer can record your entire band

https://www.engadget.com/2018/07/18/roland-smartphone-mixer-record-entire-band/



Your phone’s mic is fine for quick audio memos and the like, but if you want to record music on it, you need something a little more robust. Roland revealed a palm-sized audio mixer at CES in 2017, a little device called Go:Mixer that could record up to five audio sources to your phone. Now the company has a new version, the Go:Mixer Pro, a similarly small sound mixer that can handle up to nine instruments at one time, including powered mics, guitars, basses and other line-level instruments.

The Go:Mixer Pro has a handy little dock to hold your smartphone while recording audio and/or video of your jam session. It works with Roland’s video apps, 4XCamera and Virtual Stage Camera, which create split-screen videos and replace your video’s background, respectively. You power the mini mixer with your smartphone or a AAA battery, and it has a headphone output so you can monitor your performance. You can bring in backing tracks from your smartphone to sing over, and a Center Cancel function will try to erase vocals for your very own karaoke experience. Unfortunately, there’s no mention of price or release date for this tiny mixer just yet, though you can probably safely assume it will run more than the original Go:Mixer’s $99.


Apple iCloud data in China is being stored by a state-run telco

https://www.engadget.com/2018/07/18/apple-icloud-data-china-stored-state-run-telco/



Six months ago Apple caused controversy by moving Chinese users’ iCloud keys out of the US and into China, in order to comply with Chinese law. Now, that data, which includes emails, text messages and pictures, is being looked after by government-owned mobile operator China Telecom. And users and human rights activists alike have big concerns.

The move has unsurprisingly been praised by state media, with Chinese consumers being told they can now expect faster speeds and greater connectivity. But as comments on Weibo (China’s equivalent of Twitter) reveal, users have major privacy worries, claiming the government — known for its extreme citizen surveillance methods — will now be able to check personal data whenever it wishes.

Apple has repeatedly stated that its hands are tied on the matter; Chinese legislation has essentially given the company a “comply or die” ultimatum. Apple users in China can opt out of local data storage by choosing an alternative country for their iCloud account, but it’s not clear whether doing so would delete previous information or simply migrate it to the new server. If you have concerns about the government snooping on your data, you might just want to start an account from scratch.


Apple’s MacBook eGPU is a step towards winning back creative pros

https://www.engadget.com/2018/07/18/apple-macbook-egpu-gpu-creative/




Even though Apple makes a lot more money on iPhones and iPads, Macs are still crucial to its bottom line. For years, they were widely loved by creative folks and influencers because they were simpler and more powerful than Windows PCs. Now, content creation pros and designers are falling out of love with Apple. Many see the MacBook Pro’s Touch Bar as a dumb consumer gimmick, and worse, Apple’s top-end laptops have failed to keep pace technologically with powerful, well-designed PCs from Microsoft, Dell and others.

Many of Apple’s most passionate fans are journalists, programmers and high-profile designers who are loud on social media when they like and dislike what Apple is doing. Such complaints have made it to the ears of Apple brass. Last year, VP Craig Federighi admitted that Apple had “designed ourselves into a bit of a corner” with the Mac Pro, and vowed to release a new model this year (the company now says it won’t come until 2019).

The new MacBook Pros with eighth-generation Intel CPUs help, but many folks still aren’t thrilled. The top-of-the-line model, which starts at $2,799, offers less than half the graphics performance of the $2,399 Razer Blade, for instance. But Apple also did something pretty canny. It unveiled a Thunderbolt 3 external GPU (eGPU) it co-designed with Blackmagic Design for $700, which is a reasonably competitive price next to other third-party models. It’s a strong sign that it hasn’t given up on the creators who are most passionate about its products.

Blackmagic Design’s eGPU connects to any Mac with a Thunderbolt 3 port and packs a Radeon Pro 580 with 8GB of GDDR5 graphics memory. That puts it nearly on par with NVIDIA’s 6GB GeForce GTX 1060, and the extra 2GB is hugely valuable for 3D modeling and other productivity chores. It also doubles the performance of the Radeon Pro 560X in the latest MacBook Pro, and since the same card is used in the 27-inch iMac, driver support should be rock-solid. The Thunderbolt 3 connection will let you plug in LG’s Apple-approved UltraFine 5K display, Blackmagic says.

On top of that, you get four USB3 ports, two Thunderbolt 3 ports and an HDMI 2.0 port, letting you plug in a 4K monitor and other accessories, and even charge your laptop. It should work with any MacBook Pro released after 2016.

Attaching the box will significantly boost the horsepower of a MacBook Pro, but there are a few notable downsides. The most significant is that the Radeon Pro 580 card inside the eGPU can’t be upgraded, so in a couple of years the box might seem doggy compared to the Radeon Pro 680 or whatever comes along next.

So who should buy this? If you want to own a MacBook Pro and use video editing and graphics apps like Premiere Pro CC, Final Cut Pro X and Blackmagic’s own DaVinci Resolve 15, then it should be at the front of your mind. Blackmagic Design, for one, said the eGPU can speed up tasks like spatial noise reduction by up to seven times. It will also make GPU-intensive tasks like gaming and virtual reality work better. Support for those tasks is notoriously poor on macOS, so every little bit of extra performance you can eke out will help.

Apple first offered macOS eGPU support earlier this year, and there are a number of enclosures available, like the ASUS XG Station Pro. It costs $329, not including a graphics card, so you’d be looking at about $630 with a Radeon Pro 580 card added, just shy of the price of Blackmagic’s eGPU. However, the latter includes many more ports and a far more polished design. It is also sold directly by Apple — a strong appeal for Mac buyers.

Apple didn’t stop there with the new MacBook Pros. You can now add up to 32GB of RAM instead of 16GB, which is also a must for content creation apps, and the eighth-gen CPUs will significantly boost rendering and other tasks. With redesigned keyboards that hopefully won’t break, the MacBook Pro is again a viable creation tool.

The eGPU and new MacBook Pros are a good start, but Apple still has a lot to do. For the price, the MacBook Pro remains underpowered next to Windows machines, particularly when it comes to graphics. And macOS needs work under the hood so that software like Adobe Premiere Pro CC works as well on a Mac as it does on a PC.

Oddly, Microsoft is even stealing Apple’s design thunder with more interesting models like the Surface Book 2 and Surface Studio all-in-one. Apple has stuck with more or less the same MacBook design for the last decade, and badly needs to refresh it to keep up.

Although Apple didn’t design the new eGPU, it struck me as a breath of fresh air for the stodgy lineup. If Apple can’t or won’t refresh its MacBook lineup to the same standard as Windows models, it’d do well to team up with Blackmagic or other third parties on equally interesting products that can help creatives. Many of those people are willing to pay a premium for products they like, so it would be a shame for Apple to lose their business — and cheerleading — to a key competitor.


Blue Origin to subject its rocket to high-altitude escape test

https://arstechnica.com/?p=1345297


New Shepard on the launch pad the morning of Mission 8, April 29, 2018. (Image: Blue Origin)

12:10pm ET Wednesday update. The test appears to have been a complete success. See our full report here.

Original post: As it continues to progress toward human flights, Blue Origin will perform another potentially dangerous uncrewed test today of its New Shepard rocket and spacecraft. Although it has not yet provided details, the company says it will fly “a high altitude escape motor test—pushing the rocket to its limits.” The test is scheduled to begin at 10 am EDT (14:00 UTC) at the company’s West Texas launch site. (Update: the time has slipped to 11am ET).

This is the ninth test of the reusable New Shepard system and the third in which it has included commercial payloads on its short suborbital flights. This time, the company is also flying a suite of materials from Blue Origin employees as a part of its internal “Fly My Stuff” program. (It’s unclear at this point exactly how “abort test” and “payload” fit together in the same mission—presumably the high altitude abort will be followed by the New Shepard spacecraft pressing on to space, but we’re not exactly sure. Blue Origin will have more details about exactly what’s going on when its webcast starts.)

This is not the first high-energy test of New Shepard. In October 2016, the company conducted a lower-altitude in-flight escape test when engineers intentionally triggered the spacecraft’s launch abort system about 45 seconds after launch, at an altitude of 16,000 feet. Such systems are designed to fire quickly and separate the crew capsule from the booster during an emergency.
