Chinese iCloud customers have been notified that Apple will transfer operations of its cloud storage service to the local firm Guizhou on the Cloud Big Data (GCBD) starting next month. Apple announced the partnership with GCBD last year and claims the new iCloud operations will help the company comply with Chinese regulations. As of February 28, Apple will start the transfer of Chinese iCloud data to its new data center in Guizhou, where it will be managed by GCBD.
This means that the physical location of Chinese iCloud customers’ data will change, but customers shouldn’t notice any difference in their iCloud accounts. In Apple’s message sent to mainland Chinese customers, the company says the new operations setup will “enable us to continue improving the speed and reliability of iCloud and to comply with Chinese regulations.” Customers are urged to review the new terms and conditions of iCloud operated by GCBD, and those who are not comfortable with the GCBD partnership can terminate their accounts.
While the new agreement hands off local iCloud operations to GCBD, both the Chinese firm and Apple have access to data stored in Chinese iCloud accounts. In a statement provided to 9to5Mac, Apple reassures customers that their data is still just as safe and private as it was before the partnership. “Apple has strong data privacy and security protections in place and no backdoors will be created into any of our systems,” the statement says.
Chinese customers may be concerned about the safety of their photos, documents, and other iCloud data because GCBD is owned by the Guizhou provincial government. Apple’s partnership with GCBD is another means for China’s government to control data accessible within its territory.
Apple has been trying to play nicely with China for a while now, but the back-and-forth between the tech giant and the local government has often resulted in frustrations for Chinese Apple customers. Last year, Chinese internet users became incensed when Apple removed many VPN apps from its App Store, citing a new rule that all VPN services had to be approved by the Chinese government. VPNs allow Chinese citizens to browse the internet freely, without the constraints of the country’s Great Firewall.
Chinese Apple Watch Series 3 users also ran into issues when the government shut down LTE service to the new wearable nearly one month after it became available. Series 3 Watches use eSIMs installed in the device to access LTE data. However, the Chinese government doesn’t have a system in place to regulate eSIM use in the country, so it blocked LTE access from those devices citing “security concerns.” Apple’s support page now states that cellular service for Series 3 Watches will arrive in select Chinese cities sometime in 2018, a timeframe that appears to have been pushed back from the original promise of a late-2017 roll-out.
Wondering why Snapchat felt compelled to redesign its app, besides the need to compete with Facebook? It should be clearer after today. The Daily Beast has obtained Snap data showing that the company has had a difficult time getting users to try features outside of the core chat service. Snap Maps location sharing had over 30 million users when it premiered in June, for example, but that dropped to 19 million users (11 percent of the user base) by September. And despite Snap’s eagerness to push Discover shows, only 21 percent of Snapchat users (a still-sizeable 38 million) visit the section daily.
The figures show that users are at least loyal. The average user sends 34 messages a day, and the number of people sending daily snaps increased from just under 80 million to nearly 88 million between April and September. People are using Snapchat — they just haven’t been compelled to try much beyond the core functions.
Snap has declined to comment on the leak. As The Verge explained, though, the leak puts the Snapchat redesign into focus. Merging Snap Maps with the Discover section might encourage people to check out both, while merging friends’ Stories and snaps could boost the company’s ad revenue no matter how well other features pan out. The new interface is about ensuring Snap’s long-term survival, not just a desire to shake things up.
Imagine you and a group of friends are at the peak of a mountain after a long hike. It’s sunset and the sky is alight; you want to take a photo. You pull out your smartphone, but instead of flipping it around to take a long-armed selfie, you unclip a tiny drone from the back of your phone, make it hover at the perfect height, and snap a series of photos, no extendo-arms required.
That’s the idea behind Selfly, the drone-in-a-phone-case built by camera and recording company AEE. Selfly is a drone that folds into the back of a phone case, and it includes a camera that can record video, live stream, and take photos at 1080p and 60fps, using a suite of Sony sensors. The case itself is just under half an inch thick, similar in style to an Otterbox, and it fits recent Apple and Android devices.
The Selfly will be available in the spring for $130, with a separate charging hub available for $30. The Selfly doesn’t charge in the phone case, and a full charge lasts about four minutes of flight. It has a hover function that lets users park it in the air at the perfect height, and it can take sweeping panoramic photos. Users can control the drone with virtual joysticks or use a point-and-fly method: Point to an area of the screen and the drone will fly there; pinch to zoom and it’ll fly farther out. The Selfly has a range of roughly 45 feet, definitely far enough for the selfie life.
The practice of incorporating microtransactions and loot boxes into video games has grown from sporadic to omnipresent in recent years. 2017 saw the loot box trend explode and even bleed over from a “cosmetic” model to one that affects gameplay. But in-game items like loot boxes—which commonly appear in multiplayer games—are worthless to publishers if players don’t engage with them.
Two recently discovered EA research papers emphasize ways to keep players “engaged” with different types of games, as opposed to quitting them early, by manipulating their difficulty without necessarily telling players. These papers were published as part of a conference in April 2017, and they indicate that EA’s difficulty- and matchmaking-manipulation efforts may have already been tested in live games, may be tested in future games, and are officially described as a means to fulfill the “objective function” of, among other things, getting players to “spend” money in games.
The EOMM paper, which is co-authored by researchers from EA and UCLA and was funded in part by an NSF grant, applies more directly to EA’s latest online-gaming controversies. This paper outlines a way to adjust games whose difficulty begins and ends not with computer-controlled factors (enemy strength, puzzle designs, etc.) but with real-life opponents.
“Current matchmaking systems… pair similarly skilled players on the assumption that a fair game is best player experience [sic],” the paper begins. “We will demonstrate, however, that this intuitive assumption sometimes fails and that matchmaking based on fairness is not optimal for engagement.”
Elsewhere in the paper, the EA researchers point out that other researchers seem to assume that “a fun match should have players act in roles with perceivably joyful role distribution. However, it is still a conceptual, heuristic-based method without experiment showing that such matchmaking system indeed improves concrete engagement metrics [sic].”
In other words, the researchers are operating in a data-driven manner, clarifying that they don’t necessarily see concepts like “fun” or “fairness” driving the engagement that embodies their thesis. And, as the paper notes, it’s engagement, not fairness or fun, that’s linked directly to a player’s willingness to continue spending money in the game.
EA’s researchers don’t necessarily see concepts like “fun” or “fairness” contributing to their thesis.
To test this thesis, in early 2016 EA ran a test on 1.68 million unique players engaged in 36.9 million matches of an unnamed 1v1 game whose matches can end in wins, losses, or draws. Though the paper doesn’t offer further specifics, EA Sports series like FIFA and NHL would fit the description given.
During the testing period, players were analyzed based on their skill level (itself based on wins, losses, and draws) and also their likelihood of “churning” away for at least eight hours after the match. The players were then assigned to one of four pools of different matchmaking techniques: skill-based; EOMM-sorted (the new matching algorithm intended to reduce churn); “WorstMM” (the complete opposite of the EOMM algorithm); and completely random matching.
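As described, EOMM-sorted matchmaking replaces "minimize the skill gap" with "minimize predicted churn across the paired players." The paper formulates this as a minimum-weight matching problem over a graph of players; the Python sketch below is a loose, hypothetical illustration of that idea only. The churn model, the player attributes, and the greedy pairing pass are all invented simplifications, not EA's actual algorithm or data.

```python
# Illustrative sketch of engagement-optimized matchmaking (EOMM): pair
# players so predicted churn is minimized, rather than the skill gap.
# Everything below (the churn model, attributes, greedy pass) is a
# hypothetical simplification of the paper's graph-matching formulation.

def predicted_churn(player_a, player_b):
    # Hypothetical churn model: assume a player's churn risk is mostly
    # realized when they lose, using an Elo-style win probability
    # derived from the skill gap.
    p_a_wins = 1.0 / (1.0 + 10 ** ((player_b["skill"] - player_a["skill"]) / 400.0))
    return (player_a["churn_risk"] * (1.0 - p_a_wins)
            + player_b["churn_risk"] * p_a_wins)

def eomm_pairing(players):
    """Greedily pair players to keep summed predicted churn low.

    The paper solves this exactly as minimum-weight perfect matching;
    a greedy pass is a cheap stand-in for that optimization.
    """
    unmatched = list(players)
    pairs = []
    while len(unmatched) > 1:
        a = unmatched.pop(0)
        # Pick the opponent whose pairing with `a` minimizes churn.
        best = min(unmatched, key=lambda b: predicted_churn(a, b))
        unmatched.remove(best)
        pairs.append((a["name"], best["name"]))
    return pairs

players = [
    {"name": "p1", "skill": 1200, "churn_risk": 0.7},
    {"name": "p2", "skill": 1210, "churn_risk": 0.1},
    {"name": "p3", "skill": 900,  "churn_risk": 0.6},
    {"name": "p4", "skill": 905,  "churn_risk": 0.2},
]
print(eomm_pairing(players))  # [('p1', 'p4'), ('p2', 'p3')]
```

Note how the outcome differs from fair matchmaking: the high-churn-risk p1 (skill 1200) is handed the much weaker p4 rather than the evenly matched p2, because an easy win is predicted to keep p1 playing.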
The paper describes “existing matchmaking methods that heuristically pair similarly skilled co-players,” suggesting that live players were unwittingly dropped into EA’s experimental matchmaking pools for this engagement research. But thanks to vague methodology descriptions, and repeated discussion of “simulations” on existing player and match data, the paper makes it hard to determine if actual, live matchmaking was affected. (EA has yet to respond to Ars Technica’s request for comment.)
This EOMM paper also isn’t entirely clear about how a player’s perceived attributes—including “skill, play history, and style”—correlate with the same player’s churn likelihood. This means the paper’s thesis can’t be written out as simply as something like “bad players will play more often if they’re paired with even worse players.”
Ultimately, the paper concludes that this EOMM method of matchmaking reduced churn compared to the existing, skill-based matchmaking standard. In four of its five player-count studies, EOMM bested skill-based matchmaking by up to 0.9 percent; the exception was a smaller pool of players, in which skill-based matchmaking reduced churn more than EOMM by 1.2 percent. In all cases, EOMM bested both the random and “WorstMM” results.
The authors concede that this matchmaking system must evolve to account for factors such as team-battle video games, larger multiplayer scenarios, network connectivity issues, friends lists, and more. They say that “we will explore” all of those scenarios in future tests. The authors also make clear where this modeling could eventually lead: “we can even change the objective function to other core game metrics of interest, such as play time, retention, or spending. EOMM allows one to easily plug in different types of predictive models to achieve the optimization.”
If our guess about EA Sports 1v1 games is correct, then that division’s “Ultimate Team” products, driven by loot boxes and microtransactions, are already ripe for the picking.
Missing whale metrics
The Dynamic Difficulty Adjustment [DDA] paper had previously been found and circulated by fans and critics in late 2017, though it perhaps didn’t receive much widespread attention because it didn’t reveal much that was new to the games industry. The paper describes a higher-level version of the automatic difficulty adjustment features that have appeared in single-player games for decades. Simpler versions of this mechanic have appeared in the likes of Crash Bandicoot and newer Super Mario games.
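The core of that decades-old rubber-band mechanic is simple: ease off after repeated failures, tighten up when the player is cruising. The sketch below is a generic illustration of that pattern; the class name, thresholds, and step sizes are invented for illustration and are not taken from EA’s paper.

```python
# Generic sketch of rubber-band dynamic difficulty adjustment, the
# single-player pattern the DDA paper builds on. All thresholds and
# step sizes here are illustrative, not EA's published values.
class DifficultyAdjuster:
    def __init__(self, difficulty=0.5, step=0.05):
        self.difficulty = difficulty  # 0.0 (easiest) .. 1.0 (hardest)
        self.step = step
        self.loss_streak = 0
        self.win_streak = 0

    def record_result(self, won):
        if won:
            self.win_streak += 1
            self.loss_streak = 0
            if self.win_streak >= 3:  # player is cruising: raise difficulty
                self.difficulty = min(1.0, self.difficulty + self.step)
                self.win_streak = 0
        else:
            self.loss_streak += 1
            self.win_streak = 0
            if self.loss_streak >= 2:  # frustration risk: lower difficulty
                self.difficulty = max(0.0, self.difficulty - self.step)
                self.loss_streak = 0

dda = DifficultyAdjuster()
for result in [False, False, True, True, True]:
    dda.record_result(result)
print(dda.difficulty)  # eased to 0.45 after two losses, then back to 0.5
```

What EA’s paper adds on top of this is the economic framing: tying the adjustment loop to churn prediction and, potentially, to spending metrics.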
According to the paper, EA’s research-driven take works by analyzing and auto-adjusting matches in a mobile, EA-published match-three puzzle game. The researchers wanted to see whether automatic adjustments would keep players engaged instead of churning away out of frustration or dissatisfaction. (The unnamed game in question could be a version of Bejeweled, the biggest match-three series made by EA-owned studio PopCap.)
The paper’s opening abstract could have settled on simply saying that its preliminary DDA system netted a nine-percent “improvement in player engagement,” but the researchers chose to attach an economic model to the findings: that the DDA system had a “neutral impact on monetization.” (Certain free-to-play versions of Bejeweled allow players to spend real money to earn a performance-boosting “coins” currency faster.) The researchers go on to speculate that this was because its algorithms retained players who have a high risk of churn but are also “less likely to spend [money].”
Coincidentally, the paper’s conclusion mentions a desire to expand DDA testing to “more complicated games with non-linear or multiple progressions, such as role-playing games (RPGs).” We’d also like to see further research to show whether games with more robust online communities or social features, such as online score comparisons, might influence higher-spending “whale” players to spend more, or at least attract more likely whales.
Coming soon? Already here?
Separately, the papers analyze retention methods that, as described, have not been disclosed to players—unlike the clearly marked boosts and aids in newer Super Mario games and the “safe mode” added to horror game Soma. It’s unclear whether EA would actively inform players of these kinds of systems, should they be employed in either single-player or multiplayer games, or whether they’ve already arrived unannounced in EA-published games that launched after these early-2016 tests.
Meanwhile, EA has two big games on the horizon that may marry the single-player challenge tweaks of the DDA study and the matchmaking-driven augmentations of the EOMM one. In addition to Bioware’s upcoming Anthem, an apparent space-combat co-op RPG that looks similar to Destiny, EA recently announced sweeping changes to an unnamed Star Wars game. Those changes should add “a broader experience that allows for more variety and player agency,” which suggests a switch from its original single-player-only vision to a shared-multiplayer one. This 2017 research strongly suggests that EA has a keen interest in applying these methodologies to its future games, but how these single-player and multiplayer systems might combine to quietly and simultaneously manipulate a game’s playerbase is not yet clear.
EA did not immediately respond to Ars’s questions about the studies.
YouTube says it is looking into “further consequences” for Logan Paul, the user who recently posted a video showing a dead body.
Paul was widely criticized for making the video in Japan’s so-called “suicide forest.” Amid widespread criticism, Paul took the video down and repeatedly apologized.
YouTube also came under scrutiny for not taking action. That’s why the company issued a statement on Tuesday, one week after the controversy erupted.
“It’s taken us a long time to respond, but we’ve been listening to everything you’ve been saying,” the company said in a series of tweets. “We know that the actions of one creator can affect the entire community, so we’ll have more to share soon on steps we’re taking to ensure a video like this is never circulated again.”
The statement hinted at potential changes to YouTube’s policies and algorithms.
YouTube’s upload-anything-anytime ethos is constantly being challenged by the posting of videos containing inappropriate content and even depicting potentially illegal conduct. The company has at times struggled to enforce its policies prohibiting violent and gory videos.
“If a video is graphic, it can only remain on the site when supported by appropriate educational or documentary information and in some cases it will be age-gated,” YouTube told CNN Tech when the Paul video first garnered attention.
Within the community of prominent YouTube creators, many of whom make a living by making videos, there is curiosity and concern about what YouTube might do next.
In Tuesday’s statement, the company acknowledged that “many of you have been frustrated with our lack of communication recently.”
“You’re right to be,” the company said. “You deserve to know what’s going on.”
Referencing Paul’s video without naming him, YouTube said “suicide is not a joke, nor should it ever be a driving force for views.”
“We expect more of the creators who build their community on YouTube, as we’re sure you do too,” the company said. “The channel violated our community guidelines, we acted accordingly, and we are looking at further consequences.”
It is unclear what the company meant by saying “we acted accordingly.” In fact, the offending video was taken down by Paul, not by the company.
Before we go too deep into this, watch the video above. It’s oddly hilarious. A squad of little autonomous Segway-like robots milling about an office park, keeping track of cars and heroically fending off a hooded would-be burglar while an equally gallant drone provides aerial support and a human security officer manages things (seemingly sans doughnut) via a smartphone. The inspiring action music really makes it.
Now, what the heck is this all about? Security robots (which are a thing, by the way) typically have to operate by themselves, which can be a problem when intruders get pushy. Turing Video has a simple answer to this, however: give human security officers a lift. It just premiered a security robot, Nimbo, whose Segway-based design includes a unique “Ride-On Mode” that lets a passenger hop on and travel at up to 11 miles per hour. The bot is designed to autonomously patrol areas and deliver audiovisual warnings if it catches a trespasser with its computer vision (based on tech like Intel RealSense), but Ride-On Mode also helps its organic counterparts respond to alerts or supplement the machine’s own coverage.
The Segway underpinnings also help it traverse areas that other robots might not handle. It can cross rough pavement and speed bumps, and its relatively narrow body (25 inches across) can help it squeeze through tight passageways. Also, Nimbo can operate around the clock: it can dock at automatic charging stations and produce non-stop video.
Turing hasn’t said how much Nimbo costs (we’ve asked about it), but that’s likely to depend on specific needs. The automaton can be customized to tie into existing security systems, talk to drones or carry additional sensors, so the base model definitely isn’t the only option. As such, don’t be surprised if you eventually see these machines guarding everything from the local parking complex to your corporate campus.