Continuous Glucose Monitors (CGM), Proven Cost-Effective, Add to Quality of Life for Diabetics

https://ift.tt/2HyHlF6

University of Chicago Medicine (Chicago, IL)

Continuous glucose monitors (CGMs) offer significant daily benefits to people with type 1 diabetes, providing near-real-time measurements of blood sugar levels, but they can be expensive. A new study by researchers from the University of Chicago Medicine, based on a six-month clinical trial, finds that use of a CGM is cost-effective for adult patients with type 1 diabetes compared to daily use of test strips. The results fall well within the thresholds insurance plans normally use when deciding whether to cover medical devices. During the trial, CGMs improved overall blood glucose control for the study group and reduced hypoglycemia (low blood sugar) episodes.

The study also simulated the costs and health effects of CGM use over the expected lifetime of patients. It showed that CGMs increased quality of life by extending the amount of time patients enjoy relatively good health, free of complications.

Smartwatch app monitors and records blood sugar levels throughout the day. (Image courtesy of University of Chicago Medicine)

A CGM uses a tiny sensor inserted under the skin to measure blood sugar levels every few minutes throughout the day and wirelessly sends the readings to a receiver. The first generation of CGMs transmitted data to a stand-alone electronic device that looks like a pager, but newer models can work with apps on smartphones and smartwatches. This near-real-time information allows diabetics to quickly adjust their physical activity, food intake, or insulin doses, preventing severe high or low blood sugar episodes.

The researchers also used a statistical model to simulate the costs and health effects of CGM use over the average expected lifetimes of patients. The model calculated a value called quality-adjusted life years (QALYs) for each patient, which represents the amount of time they live free of complications or serious medical incidents. In the lifetime analysis, the CGM was projected to reduce the risk of complications from type 1 diabetes and to increase QALYs by 0.54, in effect adding about six months of good health.
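Lifetime cost-effectiveness analyses of this kind are typically Markov-style cohort simulations: each simulated year, patients accrue utility weighted by their health-state probabilities and a discount factor. Here is a minimal sketch of that idea; every transition probability and utility weight below is a hypothetical placeholder, not the study's model or inputs.

```python
# Minimal Markov-style cohort sketch of discounted lifetime QALY accrual.
# Every number here is a hypothetical placeholder, not the study's input.

def lifetime_qalys(p_complication: float, years: int = 50,
                   u_healthy: float = 0.9, u_complication: float = 0.6,
                   discount: float = 0.03) -> float:
    """Expected discounted QALYs for an average cohort member."""
    p_healthy = 1.0  # chance of still being complication-free
    total = 0.0
    for year in range(years):
        # expected utility this year, weighted by health-state probabilities
        utility = p_healthy * u_healthy + (1 - p_healthy) * u_complication
        total += utility / (1 + discount) ** year
        p_healthy *= 1 - p_complication  # some patients develop complications
    return total

# A treatment that lowers annual complication risk raises lifetime QALYs:
qaly_cgm = lifetime_qalys(p_complication=0.02)     # hypothetical CGM arm
qaly_strips = lifetime_qalys(p_complication=0.03)  # hypothetical strip arm
assert qaly_cgm > qaly_strips
```

The QALY gain reported in a study is the difference between the two arms' totals, which is why even a modest reduction in annual complication risk compounds into months of additional good health over a lifetime.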

The analysis calculated an incremental cost-effectiveness ratio, which divides the difference in costs between treatments (in this case, the CGM vs. daily test strips) by the health benefit added, measured in QALYs. The cost-effectiveness ratio for the CGM was about 0,000 per QALY for the overall population, well below the threshold insurance plans and government agencies such as Medicare normally use to decide whether to cover a new treatment or medical device. The ratio was calculated based on the recommendation to use a CGM sensor for seven days; if use is extended to 10 days, as many people do, the ratio drops to about ,000 per QALY.
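The incremental cost-effectiveness ratio itself is simple arithmetic: the cost difference divided by the QALY difference. A sketch with entirely hypothetical dollar figures (only the 0.54 QALY gain comes from the article):

```python
# ICER = (cost of new treatment - cost of comparator)
#        / (QALYs with new treatment - QALYs with comparator)
# Dollar figures below are hypothetical placeholders, not the study's.

def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per quality-adjusted life year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. a $27,000 lifetime cost difference over the 0.54 QALY gain:
ratio = icer(cost_new=127_000, cost_old=100_000,
             qaly_new=10.54, qaly_old=10.00)
assert round(ratio) == 50_000  # $50,000 per QALY (hypothetical inputs)
```

Because the QALY gain sits in the denominator, anything that stretches the same benefit over lower cost, such as wearing each sensor for 10 days instead of seven, pulls the ratio down.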

Tech

via NASA Tech Briefs https://www.techbriefs.com

May 31, 2018 at 11:15PM

Anonymous Chat Apps like Signal and Whatsapp Are Only as Private as the People You Talk To

https://ift.tt/2l279B3

In the show I’m in right now, there’s a scene in which the less-than-pleasant Archdeacon of Notre Dame Cathedral, Claude Frollo, tells his adopted son, Quasimodo, that “it takes two people to communicate.” But it’s not just the hunchback who forgets this lesson—I’m surprised, but not that surprised, at how easy it is to ignore this fact in everyday life.

Consider the indictment just handed down by the U.S. District Court for the District of Columbia against James Wolfe, the former director of security for the Senate Intelligence Committee. The indictment alleges that Wolfe lied to federal investigators about his communications with three different reporters—sometimes conducted over “anonymous” messaging applications like Signal and WhatsApp.

Between in or around September 2017 and continuing until at least in or around December 2017, REPORTER #3 and WOLFE regularly communicated with each other using the anonymizing messaging application Signal, text messages, and telephone calls.

“But wait,” you ask. “Aren’t these supposed to be secure messaging applications?”

Yes. Absolutely. Neither Signal nor WhatsApp can read the messages you send across their respective services, thanks to the encryption mechanisms built into the entire process. If either service is subpoenaed, they can’t help investigators recover messages at all (presumably), a much better situation for you and your clandestine practices than if you just straight-up text messaged your spy network over a wireless carrier’s network. (Don’t do that.)
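That guarantee, that the relay can’t read messages while each endpoint necessarily holds readable plaintext, can be illustrated with a toy symmetric scheme. To be clear, this is deliberately not Signal’s actual Double Ratchet protocol; the key, message, and keystream construction are all invented for illustration.

```python
# Toy illustration of end-to-end encryption (NOT Signal's real protocol):
# only the two endpoints hold the key, so the relay server sees only
# ciphertext. A SHA-256 counter keystream stands in for a real cipher.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

shared_key = b"secret-agreed-by-both-endpoints"  # via key exchange in real apps
plaintext = b"meet at the cathedral at noon"

ciphertext = keystream_xor(shared_key, plaintext)  # all the server ever relays
assert ciphertext != plaintext                     # unreadable in transit
assert keystream_xor(shared_key, ciphertext) == plaintext  # readable on arrival
```

The last line is the whole problem in miniature: encryption protects the message in transit, not the decrypted copy sitting on either device.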

Here’s the problem, though: If you’re dumb and you leave your messages on your device instead of deleting them, investigators can find them if they obtain physical access to your tablet or smartphone. The people you were talking to? Same deal.

And don’t forget: The party you’re “securely messaging” can also take screenshots of your conversations. If the app you’re using doesn’t warn you when this happens, à la Snapchat, or have some built-in mechanism to prevent screenshots, you’re stuck. Even if it does, a craftier person can just pull out a secondary device and photograph the screen with your identifying information on it.

What’s a wannabe-spy to do? On Signal, consider using the service’s disappearing messages feature. Though even Signal itself notes that this won’t stop someone from taking a picture of what you sent, surely you can use other tools at your disposal—like burner phone numbers—to cleverly conceal who you are.

(Or just make a dummy email account, connect to a trustworthy VPN, fire up Tor, and send your secret, encrypted message that way—which is just an “off the top of my head” privacy suggestion. There are plenty of more advanced techniques, like dropping secret information into a SecureDrop, for example.)

On WhatsApp, “disappearing messages” are a more manual technique. You have to delete your messages yourself, which removes them from both your device and your recipient’s device. Don’t dawdle, though; you have only about an hour to eliminate what you’ve sent.

As before, remember that information about you can still be subpoenaed, including what numbers your number has contacted, so you might want to get creative about how you sign up for WhatsApp in the first place if you’re looking to stay as anonymous as possible. And, yes, that’s “anonymous as possible,” since it never feels like you’re truly anonymous when you’re using someone else’s service.

Tech

via Lifehacker http://lifehacker.com

June 8, 2018 at 04:13PM

Facebook Is Trying to Kill Its New Privacy Scandal on a Technicality

https://ift.tt/2kRm6G1

Ever since the Cambridge Analytica scandal first broke in March, Facebook has been scrambling to change its policies and reassure the public that it no longer recklessly shares data with third parties. But on Sunday we learned that it has quietly been giving device makers access to users’ data this whole time. It argues this was different for several reasons, and that device makers could only use the data to provide “the Facebook experience.”

The New York Times reports that Facebook has maintained data-sharing agreements with “at least 60 device makers” for the last 10 years. Each partner was given access to a private API that allowed at least some partners to access more than 50 types of information about an individual user. It also extended that information to a user’s friends—and friends of friends—with the kind of wide net that famously resulted in a massive leak of millions of users’ data to Cambridge Analytica, a firm working for the 2016 presidential campaign to elect Donald Trump.

Throughout its all-hands-on-deck PR campaign over the last few months, Facebook has shuffled its data policies and reassured lawmakers that handing out user data to anyone who asked is no longer part of its standard operations. It maintained that its policies for sharing data with third-party app developers have changed following a consent decree with the FTC in 2011 over a previous privacy scandal.

Since it was revealed that millions of users’ data was gathered by a quiz app and subsequently sold to Cambridge Analytica, it has been a lingering question whether Facebook violated its consent decree by not notifying users of the breach. If the FTC determined that the social media network was in violation of the agreement, it could face “trillions of dollars” in fines. But Facebook is claiming that it’s fully in compliance with the consent decree due to its interpretation of one clause in the agreement.

The Times found that Facebook has given device makers like Apple, Amazon, BlackBerry, Microsoft, and Samsung extensive access to user data through partnerships that appear uncannily similar to its past third-party policies. These partnerships are reportedly ongoing. Facebook confirmed much of the Times report but took issue with some interpretations and asserted that it has been “winding down” the data-sharing program since *ahem* April.

To illustrate how the program works, the Times used a BlackBerry device from 2013 to access one reporter’s Facebook account. From the report:

Immediately after the reporter connected the device to his Facebook account, it requested some of his profile data, including user ID, name, picture, “about” information, location, email and cellphone number. The device then retrieved the reporter’s private messages and the responses to them, along with the name and user ID of each person with whom he was communicating.

The data flowed to a BlackBerry app known as the Hub, which was designed to let BlackBerry users view all of their messages and social media accounts in one place.

The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.

The reporter used in the test had only 550 friends, but after Facebook’s system combed through all of the information it allowed to be shared, “identifying information for nearly 295,000 Facebook users” was transmitted to the BlackBerry Hub, the Times reported.

Facebook has clarified that its partners receive the user information of people you choose to share content with. A spokesperson explained to Gizmodo:

A great way to think about that is, just like when you see your timeline. If you and I are friends, and I post on my timeline, and one of my friends comments on it, you’re still going to be able to see that friend’s comment, and that’s just the nature of sharing on Facebook.

Unidentified officials told the Times that agreements with device makers include strict prohibitions on data usage that go beyond the rules applied to app developers. Developers have been given various levels of freedom to use personal data to build new products, but device makers have only been allowed to use data as necessary to provide “the Facebook experience.”

In a follow-up post on its newsroom blog, Facebook explained that these agreements first began as a way to more quickly integrate Facebook’s features across the wide range of devices on the market. It claimed, “In the early days of mobile, the demand for Facebook outpaced our ability to build versions of the product that worked on every phone or operating system.” But that’s no longer an issue because, “now that iOS and Android are so popular, fewer people rely on these APIs to create bespoke Facebook experiences.”

Another way to interpret that is that Facebook didn’t have the resources to handle its rapid and unprecedented expansion. To get itself to 2 billion users and ingrain its system into every corner of the internet, it played fast and loose with data so that other people could build out the platform. Today, it argues that handling the abuse of its platform around the world is difficult because so many different factors like language, cultural differences, and insufficiently advanced AI present limitations. CEO Mark Zuckerberg has recently been vocal about the fact that Facebook doesn’t even want to make hard decisions about the governance of its platform and floated the idea of creating some third-party “Supreme Court.”

When contacted by Gizmodo, a Facebook spokesperson emphasized that the private API approach has been common in the tech industry, especially in the early days of the mobile era—they cited the YouTube app included on early iPhones, which was developed by Apple itself. They acknowledged that the “winding down” was prompted by the “hard look” Facebook has taken at its data policies, and said that technology has changed to the point that this sort of data sharing isn’t necessary. The spokesperson said Facebook has ended its partnerships with 22 parties but gave no timeline for the rest. When asked for a list of all the partners, the spokesperson said they haven’t decided to share that information at this time.

The Times attempted to contact several major device makers about the partnerships. Apple confirmed it was part of the program, but it hasn’t had access to Facebook user data since September, the company said. A BlackBerry spokesperson did not say that the company no longer participates in the program but emphasized it “did not collect or mine the Facebook data of our customers.” Microsoft claimed all data is stored locally on the user’s device. Amazon and Samsung declined to comment. With all of those big manufacturers accounted for, there’s still the question of who else is part of this program. Facebook has only said “around 60 companies” have used the private API and that some of its partners did store user data on their own servers. I certainly can’t think of 60 device manufacturers I’d trust with my data. And considering the fact that Cambridge Analytica and Aleksandr Kogan—the professor who developed the quiz app that sucked up tens of millions of people’s data—allegedly violated Facebook’s terms of service and received no penalty, it’s not exactly reassuring to hear that these agreements had “strict” guidelines.

We’ve asked Facebook whether it audits the device makers it partners with for compliance with its guidelines, and a spokesperson said that hasn’t happened because they’ve never had any “issues” with the program. Facebook does claim it’s performed “spot checks” with dummy accounts to ensure that the proper data was being pulled. But the question of whether all user data was treated properly after it was transferred to a partner’s server remains unanswered.

As for its 2011 agreement with the FTC, Facebook has maintained that the Cambridge Analytica situation did not constitute a violation of the section requiring that users be notified and give their permission before any data about them is shared. Its reasoning is that users gave their permission implicitly through their privacy settings. This time, it claims the partnerships with device makers don’t violate the consent decree because it allows Facebook to share data with “service providers” without obtaining further permissions. While “service providers” is intended to refer to services like cloud storage and credit card providers, Facebook is taking a broader interpretation.

We also asked if Facebook intends to provide lawmakers with detailed accounts of how these partnerships worked. In its meetings with Congress, the European Parliament, and the German government, Facebook executives have largely omitted acknowledgment of this program. Documents submitted to German lawmakers mentioned only BlackBerry as a partner in its private API and offered little detail on how the program worked. When asked if Facebook would volunteer the complete details of the program to lawmakers, a spokesperson declined to give a firm answer but said it would work with lawmakers on any questions they might have.

We’ve seen in multiple sessions with lawmakers on both sides of the Atlantic that Facebook tends to leave a lot of questions unanswered whenever someone manages to get them in a room. We’ve seen that it rarely opens up about programs that might be concerning for users until it’s forced to do so—and when it does, it withholds information until someone else makes it public. We’ve seen that whenever it seems to have gotten its act together, there’s always another program waiting to be uncovered. Loopholes and convoluted privacy agreements are used and abused until they’re exposed as legal fig leaves. We’ve seen that Facebook is intent on sharing and using information that isn’t knowingly handed over. We know Facebook is just too big to handle its massive responsibilities. Going through this Groundhog Day of violating trust, saying it’ll do better, scrambling behind the scenes, and avoiding straight answers is tiresome for users and deliberately difficult to unpack every single time. We really do need a shorthand for this repetitive process. Allow us to suggest: “The Facebook Experience.”

[New York Times, Facebook Newsroom]

Tech

via Gizmodo http://gizmodo.com

June 4, 2018 at 12:57PM

Group FaceTime Is Finally a Thing

https://ift.tt/2svV8rA

In a move that will surely please parents and grandparents around the world, Apple has finally announced group FaceTime functionality for iOS. In iOS 12, you’ll finally be able to FaceTime with up to 31 of your closest friends.

The major appeal of group FaceTime for me comes with its simplified integration with iMessage. There are a few group chats on my phone that are constantly active among family and friends, and being able to quickly turn one of them into a FaceTime session seems like it’ll be extremely convenient. For special occasions (like my yearly fantasy football draft), rather than trying to wrangle everyone into a G-Hangout or Google Hang or whatever they’re called now, this seems like a much simpler solution.

Apple will also let you put Animoji and other weird photo filters over your face when you’re FaceTiming with lots of people, and that doesn’t really matter so much unless you’re really sick and need to be on camera or something.

Group FaceTime is the type of thing that won’t seem useful until the option to start one pops up on your phone and you think, “hey, I actually would like to do that right now, thanks Apple.” I won’t be FaceTiming my family five times daily, but it’s a nice comfort to know that the option is there when the opportunity arises.

Now all that’s left to do is wait for your mom and dad to put you in a group FaceTime with them and your Uncle Bill while they’re tailgating at the Mets game. Whether you pick up, however, is another question entirely.

Tech

via Gizmodo http://gizmodo.com

June 4, 2018 at 02:09PM