For some reason that I still cannot figure out, all of the brands keeping Google’s Wear OS alive keep announcing new smartwatches. Typically, announcing new products is a good thing, but you have to understand that Wear OS is about to get a brand new chipset that could dramatically change the platform and revive it from years of sleep. In other words, you shouldn’t buy any of these recently announced Wear OS watches. I’d even go as far as to suggest you skip the Samsung Galaxy Watch for now too, if that was on your radar.
On September 10, Qualcomm is hosting an event in San Francisco where it will announce a new wearable chipset that will more than likely be in all future Wear OS watches. This new chipset is said to be built from the ground up, will allow watches to look pretty when you aren’t using them (like a normal watch sitting idly by your side), and will extend battery life. More importantly, Qualcomm is betting that this Snapdragon Wear chip will “significantly change the Wear OS ecosystem, what you expect from a smartwatch.”
If you buy a smartwatch today, before Qualcomm announces this chip, you will be stuck with a 2+ year old Snapdragon Wear 2100 chip. All of the new Wear OS watches that have been announced recently use that chip. It’s old. It’s never been great. And it’s about to be replaced by something potentially game-changing for smartwatches.
We think that LG will be one of the first with a watch running this new Qualcomm processor. Back prior to Google I/O, a report surfaced suggesting that LG had some sort of hybrid watch in the works. This watch sounds exactly like what Qualcomm described when it teased its new processor and said a “lead” watch was coming in the fall. This LG watch is said to have physical watch hands, as well as the smarts of Wear OS and a touch display. My guess is that we will see it on September 10.
After that watch shows up, all other major Wear OS watches should run the new Snapdragon Wear. Google’s rumored Pixel Watch is almost guaranteed to, as are others that show up through the holiday season.
While there is no sure bet when it comes to a Wear OS revival, this is the most exciting watch-related happening we’ve had on Android in years. Do not buy a smartwatch today or next week or the following week. Wait until we see what Qualcomm has in store.
via Droid Life: A Droid Community Blog https://ift.tt/2dLq79c
It’s estimated that 40% of greenhouse gas emissions come from agriculture, and a substantial portion of that is directly ‘emitted’ by livestock. And just last year, climate scientists reported that we’ve actually been underestimating, by 11%, the extent to which the combined belches and flatulence of farmed animals contribute to climate change. Unsurprisingly, there’s been renewed interest in reducing those emissions, especially considering the demand for livestock is only growing.
Across multiple years and multiple franchises, Sony has uniformly prevented PS4 games from playing nicely online with versions of the same game on other consoles. Now, Bethesda is warning that such cross-platform support is “non-negotiable” for the coming console versions of The Elder Scrolls: Legends collectible card game, potentially barring the title from Sony’s system.
In an interview with Game Informer, Bethesda VP Pete Hines says that any and all versions of The Elder Scrolls: Legends need to allow for full, unrestricted cross-platform play and cross-platform progress. This applies to the current versions—which already work seamlessly across iOS, Android, and PC platforms—as well as the previously announced console versions planned for “later this year.”
“It is our intention in order for the game to come out, it has to [have full cross-platform support] on any system,” Hines said. “We cannot have a game that works one way across everywhere else except for on this one thing.”
“Those [terms] are essentially non-negotiable,” Hines continued later in the interview. “We can’t be talking about one version of Legends, where you take your progress with you, and another version where you stay within that ecosystem or it’s walled off from everything else. That is counter to what the game has been about.”
Playing hardball?
While Hines didn’t name Sony specifically, this cross-platform position would seem to be an obvious sticking point for the previously announced PS4 version of the game. Hines told Game Informer that Bethesda “continue[s] to talk to all our platform partners” about this issue, but Sony has seemed particularly unmovable on the subject thus far. Still, this kind of “non-negotiable” promise (threat?) from a major publisher is just the kind of thing that could potentially get Sony to make some movement.
For all the complaining from players, developers, and publishers, you could argue that Sony hasn’t felt much direct pain as a result of its anti-cross-console policy so far. Major multiplayer games including Minecraft, Fortnite, and Rocket League continue to be made and sold to the PS4’s 82-million-plus potential players, and Sony continues to rake in licensing fees from those sales.
As we’ve discussed previously on Ars, Sony’s walled-garden approach arguably makes some sense as a way to exploit the competitive advantage of its larger console player base. Locking out other console networks potentially discourages individual, “locked in” players from moving to competing consoles where they’d be cut off from their friends.
The easiest way for that competitive calculus to change is for publishers to start refusing to release their cross-platform online games on the PS4, as Bethesda is doing here. At that point, Sony has to start balancing the potential competitive advantage of player lock-in against the lost licensing fees from the game (or games) that are skipping the PS4. While one Sony executive recently said he was “confident” a cross-platform “solution” was coming in the future, this is the kind of thing that could make it come that much faster.
Before you start casting Bethesda as a bold, uncompromising hero in this cross-platform drama, though, it’s important to remember that the company has no such non-negotiable position for its other major online console release of the year: Fallout 76. On the contrary, for that game, Bethesda’s Todd Howard was quite explicit in telling German site GameStar that the company would love to have cross-platform support but simply dropped it because “Sony is not as helpful as everyone would like.” Fallout 76 is still planned for a PS4 release alongside other platforms this November.
Potentially losing a spin-off Elder Scrolls card game is one thing. But if Bethesda forced Sony to choose between blocking cross-console play and getting the next major Fallout game on its platform, that could definitely get the executives talking.
Apple works hard to make its software secure. Beyond primary protections that prevent malware infections in the first place, company engineers also build a variety of defense-in-depth measures that are designed to lessen the damage that can happen once a Mac is compromised. Now, Patrick Wardle, a former National Security Agency hacker and macOS security expert, has exposed a major shortcoming in one such measure.
The measure presents a confirmation window that requires users to click an OK button before an installed app can access geolocation, contacts, or calendar information stored on the Mac. Apple engineers added the requirement to act as a secondary safeguard. Even if a machine was infected by malware, the thinking went, the malicious app wouldn’t be able to copy this sensitive data without the owner’s explicit permission.
In a presentation at the Defcon hacker convention in Las Vegas over the weekend, Wardle said it was trivial for malware to bypass the warnings by using a programming interface built into macOS to simply click the OK button. The bypass requires only a few lines of extra code. This “synthetic click,” as Wardle called it, works almost immediately and can be done in a way that prevents an end user from seeing the warning.
“The ability to synthetically interact with a myriad of security prompts allows you to do a lot of malicious stuff,” Wardle told Ars. “This privacy and security-in-depth protection can be easily bypassed.”
Unexpected OK
The synthetic clicks are produced by using a macOS interface that converts keyboard key presses into mouse movements. Mouse keys, as the interface is known, lets a user move a mouse up, down, left, right, or diagonally by pressing certain keys.
To his amazement, Wardle found by accident that when presenting the alerts, macOS interprets the sending of two mouse-down events as an OK. As a result, he is able to create code that completely bypasses the warnings when doing a variety of things that have serious security and privacy consequences. The bypass works against warnings that protect the accessing of geolocation, contacts, and calendar entries. It also works against warnings displayed when apps want to install “kexts,” which are kernel extensions that interact with the core of macOS.
Apple responded to earlier in-the-wild malware that abused synthetic clicks by making the alerts harder to bypass, but Wardle’s finding exposes a major flaw in that work. The mouse-keys bypass doesn’t work against all warnings. Alerts displayed when malware tries to access the Mac keychain, for instance, still require a user to enter a password. But for reasons that aren’t clear, alerts for kexts and for accessing geolocation, contacts, and calendars are easy to get past. The upcoming macOS Mojave, Wardle said, blocks his bypass, but the change will come at the cost of usability for some users. Representatives from Apple didn’t respond to an email seeking comment for this post.
Wardle, for his part, said the bypass raises questions about how the company rolled out the improvements. “I wasn’t trying to find a bypass, but I uncovered a way to fully break a foundational security mechanism,” said Wardle, who is the developer of the Objective-See Mac tools and Chief Research Officer at Digita Security. “If a security mechanism falls over so easily, did they not test this? I’m almost embarrassed to talk about it.”
At the heart of Einstein’s theory of gravity (general relativity) is the equivalence principle. The equivalence principle says that there is no difference between being stationary and subject to gravity tugging you versus accelerating in a vehicle that’s free of gravitational pull.
In practice, this means that there is no difference between inertial mass (the mass a rocket works on) and gravitational mass (the mass the Earth tugs on). This equivalence has been measured time and time again with no violation ever found. But these tests assumed that quantum mechanics didn’t change the equivalence principle; that assumption is partially wrong.
Some quantum in your equivalence
In relativity, mass and energy are two sides of the same coin. For very small objects, we need to think about that in terms of quantum mechanics, where a particle can be in a superposition of energy states. A particle in a superposition of energy states has two energies at the same time until it is measured, whereupon it has a single fixed energy. An object in a superposition of energetic states can have a superposition of inertial masses. But does it have the same superposition of gravitational masses?
The intent of the equivalence principle says yes, it should. But the mathematical statement of the equivalence principle takes no account of the quantum properties of the objects.
Now a pair of researchers have picked up that thread and started pulling on it. They have re-formulated the equivalence principle so that it takes into account the way energy may be distributed internally in a quantum object.
Their conclusion is that, while the classical equivalence principle requires that the classical inertial mass and gravitational mass be the same, this is not enough for quantum mechanics. And now we get a bit technical: in a quantum system, the researchers found that the inertial mass and gravitational mass operators must also commute. What does that mean?
Commutation
In terms of physics, when two operators commute, it means that we can measure the physical quantity of one and not disturb the value of the other. To provide the most famous example, position and momentum do not commute. If we measure the position of an electron, we lose information about the momentum. If we then measure the momentum of the same electron, we will lose information about its position. The same is not true for momentum and energy. If I measure the momentum and then measure the energy, I do not lose information about the momentum.
In a sense the statement that inertial mass and gravitational mass must commute is trivial: if the inertial mass and the gravitational mass are the same, then they have the same operators and they must commute. If that were not true, it would be the equivalent of saying that measuring the momentum of an electron destroys knowledge about the momentum of the electron. That does not make sense.
Likewise, measuring the mass of a particle does not destroy knowledge about the mass of the particle. However, if inertial mass and gravitational mass are different, then measuring the inertial mass makes the gravitational mass uncertain.
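The distinction can be seen with a toy numerical example. Below is a minimal sketch (not from the paper, just stdlib Python) using 2x2 matrices as stand-ins for observables: two Pauli matrices that famously fail to commute, versus an operator that trivially commutes with itself, just as an operator for a single, well-defined mass would.

```python
# Toy illustration of commuting vs. non-commuting observables,
# using plain 2x2 matrices as lists of lists.

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """Return [a, b] = ab - ba; all zeros means a and b commute."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# Pauli matrices sigma_x and sigma_z: measuring one disturbs the other.
sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]
print(commutator(sigma_x, sigma_z))  # [[0, -2], [2, 0]] -- non-zero

# Any operator commutes with itself, so if inertial and gravitational
# mass share one operator, the commutation condition holds trivially.
print(commutator(sigma_z, sigma_z))  # [[0, 0], [0, 0]]
```

If the two mass operators were genuinely different matrices, the commutator would generally be non-zero, and measuring one mass would leave the other uncertain, which is the situation the reformulated equivalence principle rules out.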
The consequence is that classical tests of the equivalence principle may find agreement when, in fact, the test violates the equivalence principle. Additional measurements are required to confirm that equivalence holds for quantum objects.
Sensitive to a difference in mass
Let’s take an example. Physicists sometimes use a Bose-Einstein condensate (BEC) to test the equivalence principle. A BEC is a blob of atoms that acts like a single quantum particle. The blob is split into two equal parts and sent along two different paths to meet up again. In one path, the BEC blob is put into a superposition state: the BEC is in two energetic states simultaneously. Gravity acts on both blobs, but its effect should be different because one blob has a different internal state.
When the two blobs meet, they interfere, resulting in patches of material that create bright and dark areas on a screen. If everything goes perfectly and the equivalence principle holds, then the dark patches are completely dark and the bright patches are all equally bright.
If inertial mass and gravitational mass are different, then the interference will not be perfect. The bright patches will not be as bright, and the dark patches will have some light.
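That loss of contrast can be sketched numerically. The toy model below (my simplification, not the paper’s analysis) assumes each of the two internal states contributes its own fringe pattern, shifted by the extra gravitational phase `delta_phi` it picks up; the detected intensity is their average, and the fringe visibility drops from 1 toward 0 as the phase difference grows.

```python
import math

def visibility(delta_phi, samples=1000):
    """Fringe visibility when the two equally populated internal
    states acquire gravitational phases differing by delta_phi."""
    intensities = []
    for n in range(samples):
        theta = 2 * math.pi * n / samples  # scan across the fringe pattern
        # Average the two shifted single-state fringe patterns.
        i = 0.5 + 0.25 * (math.cos(theta) + math.cos(theta + delta_phi))
        intensities.append(i)
    i_max, i_min = max(intensities), min(intensities)
    return (i_max - i_min) / (i_max + i_min)

print(round(visibility(0.0), 3))      # equal phases: perfect contrast
print(round(visibility(math.pi), 3))  # opposite phases: fringes washed out
```

In this simple model the visibility works out to |cos(delta_phi / 2)|: dark patches stay completely dark only when the two internal states respond to gravity identically, which is what a quantum test of the equivalence principle would look for.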
Similar differences show up in a variety of other quantum tests of the equivalence principle, though most are very difficult to perform. The researchers examined four different experiments and found that, for three of them, current and near-future experiments would not be sensitive enough. For the remaining method, an earlier experiment had proven the viability of the approach and had found that the equivalence principle held.
Not all equivalence principles are equivalent
I should note that I have been skating over a lot of technicalities here. In particular the equivalence principle can be split into a combination of the weak equivalence principle, local Lorentz invariance, and local position invariance. Together, these three make up Einstein’s equivalence principle. Typically, most experiments only test a subset of these three (and usually only the weak equivalence principle).
Here, however, the researchers are dealing with all three. If experiments can be improved to the point that they can perform the measurements suggested in this paper, then it will represent the strongest test for Einstein’s equivalence principle yet.
A US appeals court has blocked the Federal Communications Commission’s attempt to take a broadband subsidy away from Tribal areas.
The FCC decision, originally slated to take effect later this year, would have made it difficult or impossible for Tribal residents to obtain a $25-per-month Lifeline subsidy that reduces the cost of Internet or phone service for poor people. But on Friday, a court stayed the FCC decision pending appeal, saying that Tribal organizations and small wireless carriers are likely to win their case against the commission.
“Petitioners have demonstrated a likelihood of success on the merits of their arguments that the facilities-based and rural areas limitations contained in the Order are arbitrary and capricious,” said the stay order issued by the US Court of Appeals for the District of Columbia Circuit. “In particular, petitioners contend that the Federal Communications Commission failed to account for a lack of alternative service providers for many tribal customers.”
The tribes and small carriers that sued the FCC “have shown a substantial risk that tribal populations will suffer widespread loss of vital telecommunications services absent a stay,” the court said. The FCC hasn’t proven that its plan won’t result in “mass disconnection,” the court also said.
The court ruling was welcomed by the Crow Creek Sioux Tribe and Oceti Sakowin Tribal Utility Authority, which are among the groups suing the FCC. Several small carriers and the non-profit National Lifeline Association are also plaintiffs in the lawsuit.
“Our people have long suffered from flawed federal government policies and actions, so the Court’s decision… is an important step in righting past injustices and allowing residents of Tribal lands to obtain critical Lifeline services,” Oceti Sakowin Tribal Utility Authority Executive Director Joe RedCloud said in a statement distributed to media.
FCC trying to kick resellers out of Lifeline
The FCC’s November 2017 vote would have eliminated the $25 subsidy entirely for Tribal residents who live in urban areas, leaving them with just the basic $9.25-per-month Lifeline subsidy. The FCC claimed that the extra $25 subsidy isn’t required to make service affordable in urban settings.
In rural areas, the FCC vote would have barred Tribal residents from using the $25 subsidy to buy telecom service from resellers. Because most Lifeline users buy from resellers instead of carriers that build and operate their own networks, this move would have dramatically limited rural Tribal residents’ options for purchasing subsidized service. Resellers are often the primary providers of Lifeline service in Tribal lands and are sometimes the only Lifeline providers in those areas, Crow Creek Sioux Tribe attorney Gene DeJordy said.
The court’s stay blocks implementation of both those changes, allowing the $25 subsidy to remain available in urban areas and from resellers.
In a related but not-yet-finalized move, FCC Chairman Ajit Pai has proposed kicking resellers out of the Lifeline program throughout the US. This would limit availability of the basic $9.25 subsidy both in Tribal areas and in the rest of the US.
The tribes and small carriers that sued the FCC were able to show the court that the FCC’s Lifeline changes would result in people losing telecom service, the court’s stay order said. “Petitioners have provided evidence that many tribal customers will lose access to vital telecommunications services under the Order’s new eligibility requirements, and the Order fails to meaningfully consider this effect,” the order said.
Despite trying to limit Lifeline access, Pai claims that his top priority as FCC chairman is “closing the digital divide and bringing the benefits of the Internet age to all Americans.”
Lifeline has more than 12 million subscribers and is paid for by Americans through fees imposed on phone bills.
No evidence to support FCC claim
Pai claims that limiting the use of subsidies to buy service from resellers will encourage carriers to build their own networks.
But the court said the FCC provided no compelling evidence to support that claim:
Furthermore, the Federal Communications Commission has not shown that the historical record supports its assertion that these new requirements will encourage development of communications infrastructure in underserved areas, thus preventing mass disconnection. On the contrary, petitioners credibly assert that providers have generally declined to offer Lifeline service in many tribal regions in the nearly two decades since the implementation of the Tribal Lifeline program, and furthermore that the Order’s new eligibility requirements do not attract providers to expand into those previously ignored regions.
Besides harming Tribal residents, the FCC decision could threaten the viability of small carriers, the court order said.
The small carriers “have shown that implementation of the Order will result in substantial, unrecoverable losses in revenue that may indeed threaten the future existence of their businesses,” the stay order said. “In addition, the tribal petitioners have shown that implementation of the Order is likely to result in a major reduction, or outright elimination, of critical telecommunications services for many tribal residents, which are vital for day-to-day medical, educational, family care, and other functions.”
Pai’s FCC opposed the stay in a court filing, writing that tribes and small carriers “have not shown that implementation of the Order will cause them irreparable harm. On the other hand, delaying the FCC’s Tribal Lifeline reforms will harm the public by wasting public funds on areas and providers that should not be eligible for enhanced subsidies.”
The FCC also said its plan will “limit the risk of waste, fraud and abuse of the Lifeline program.” But the court’s stay order said the FCC “identified no evidence of fraud or misuse of funds in the aspects of the program at issue here.”
Now that the stay order is in effect, the next step is for the court to schedule oral arguments.
Today at SIGGRAPH the Khronos Group, the industry consortium behind OpenGL and Vulkan, announced the ratification and public release of their Neural Network Exchange Format (NNEF), now finalized as the official 1.0 specification. Announced in 2016 and launched as a provisional spec in late 2017, NNEF is Khronos’ deep learning open format for neural network models, allowing device-agnostic deployment of common neural networks. And on a flashier note, StarVR and Microsoft are providing the first public demonstration of Khronos’ OpenXR, a cross-platform API standard for VR/AR hardware and software.
With a two-part approach, OpenXR’s goal is VR/AR interoperability encompassing both the application interface layer (e.g. Unity or Unreal) and the device layer (e.g. SteamVR, Samsung Gear VR). In terms of SIGGRAPH’s showcase, Epic’s Showdown demo is being exhibited with StarVR and Windows Mixed Reality headsets (not HoloLens) through OpenXR runtimes, via an Unreal Engine 4 plugin. Given the number of pre-existing proprietary APIs, the original OpenXR iterations were actually developed in line with them to such an extent that Khronos considers the current OpenXR more like a version 2.0.
As for NNEF, the key context is one of the side effects of the modern deep learning (DL) boom: the vast array of valid DL frameworks and toolchains. In addition to those frameworks, we’ve seen that any and all pieces of silicon have been pressed into action as an AI accelerator: GPUs, CPUs, SoCs, FPGAs, ASICs, and even more exotic fare. To recap briefly, after a neural network model is developed and finalized by training on more powerful hardware, it is then deployed for use on typically less powerful edge devices.
For many companies, the amount of incompatible choices makes ‘porting’ much more difficult between any given training framework and any given inferencing engine, especially as companies implement more and more specialized hardware, datatypes, and weights. With NNEF, the goal is providing an exchange format that allows any given training framework to be deployed onto any given inferencing engine, without sacrificing specialized implementations.
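The core idea is easier to see with a toy example. The sketch below is emphatically not NNEF syntax; it is a made-up, JSON-serializable network description plus a minimal interpreter, just to illustrate how a framework-neutral format lets any trainer hand a model to any inference engine without the two sharing code.

```python
import json

# A deliberately tiny, hypothetical "exchange format": the network is a
# list of ops operating on named tensors. NNEF's real format is richer,
# but the separation of description from execution is the same idea.
model = {
    "inputs": ["x"],
    "ops": [
        {"op": "linear", "input": "x", "output": "h",
         "weight": [[0.5, -0.25]], "bias": [0.1]},
        {"op": "relu", "input": "h", "output": "y"},
    ],
    "outputs": ["y"],
}

def run(model, feeds):
    """A minimal 'inference engine' that any backend could implement."""
    tensors = dict(feeds)
    for node in model["ops"]:
        x = tensors[node["input"]]
        if node["op"] == "linear":
            tensors[node["output"]] = [
                sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(node["weight"], node["bias"])]
        elif node["op"] == "relu":
            tensors[node["output"]] = [max(0.0, v) for v in x]
    return [tensors[name] for name in model["outputs"]]

# The description round-trips through plain JSON, so the training
# framework and the inference engine only need to agree on the format.
wire = json.dumps(model)
print(run(json.loads(wire), {"x": [2.0, 4.0]}))  # [[0.1]]
```

In practice the hard part is exactly what NNEF targets: agreeing on op semantics, datatypes, and quantized weights so that specialized hardware backends can still apply their own optimizations.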
Today’s ratification and final release is more of a ‘hard launch’, with NNEF ecosystem tools now available on GitHub. When NNEF 1.0 was first launched as a provisional specification, the idea was to garner industry feedback, and after those changes NNEF 1.0 has been released as an official standard. In that sense, while both initiatives are open source, NNEF differs from the similar Open Neural Network Exchange (ONNX), started by Facebook and Microsoft, which is organized as an open-source project rather than a formal standard. And where ONNX might focus on interchange between training formats, NNEF continues to be designed for variable deployment.
Current tools and support for the standard include two open source TensorFlow converters for protobuf and Python network descriptors, as well as a Caffe converter. Khronos also notes tool development efforts from other groups: an open source Caffe2 converter by Au-Zone Technologies (due Q3 2018), various tools by AImotive and AMD, and an Android NN API importer by a team at National Tsing-Hua University of Taiwan. More information on the final NNEF 1.0 spec can be found on its main page, including the full open specification on Khronos’ registry.
Also announced was the Khronos Education Forum, spurred by increasing adoption and learning of Vulkan and other Khronos standards/APIs, the former of course not known for a gentle learning curve. One of the more interesting tidbits of this initiative is access to the members of the various Khronos Working Groups, meaning that educators and students will get guidance from the very people who designed a given specification.