Machines can spot mental health issues—if you hand over your personal data

https://www.technologyreview.com/2020/08/13/1006573/digital-psychiatry-phenotyping-schizophrenia-bipolar-privacy/

When Neguine Rezaii first moved to the United States a decade ago, she hesitated to tell people she was Iranian. Instead, she would say she was Persian. “I figured that people probably wouldn’t know what that was,” she says.

The linguistic ambiguity was useful: she could conceal her embarrassment at the regime of Mahmoud Ahmadinejad while still being true to herself. “They just used to smile and go away,” she says. These days she’s happy to say Iranian again. 

We don’t all choose to use language as consciously as Rezaii did—but the words we use matter. Poets, detectives, and lawyers have long sifted through people’s language for clues to motives and inner truths. Psychiatrists, too: perhaps psychiatrists especially. After all, while medicine now has a battery of tests and technical tools for diagnosing physical ailments, the chief tool of psychiatry is the same one employed centuries ago: the question “So how do you feel today?” Simple to ask, maybe—but not to answer.

“In psychiatry we don’t even have a stethoscope,” says Rezaii, who is now a neuropsychiatry fellow at Massachusetts General Hospital. “It’s 45 minutes of talking with a patient and then making a diagnosis on the basis of that conversation. There are no objective measures. No numbers.” 

There’s no blood test to diagnose depression, no brain scan that can pinpoint anxiety before it happens. Suicidal thoughts cannot be diagnosed by a biopsy, and even if psychiatrists are deeply concerned that the covid-19 pandemic will have severe impacts on mental health, they have no easy way to track that. In the language of medicine, there is not a single reliable biomarker that can be used to help diagnose any psychiatric condition. The search for shortcuts to finding corruption of thought keeps coming up empty—keeping much of psychiatry in the past and blocking the road to progress. It makes diagnosis a slow, difficult, subjective process and stops researchers from understanding the true nature and causes of the spectrum of mental maladies or developing better treatments.

But what if there were other ways? What if we didn’t just listen to words but measure them? Could that help psychiatrists follow the verbal clues that could lead back to our state of mind?

“That is basically what we’re after,” Rezaii says. “Finding some behavioral features that we can assign some numbers to. To be able to track them in a reliable manner and to use them for potential detection or diagnosis of mental disorders.”

In June 2019, Rezaii published a paper about a radical new approach that did exactly that. Her research showed that the way we speak and write can reveal early indications of psychosis, and that computers can help us spot those signs with unnerving accuracy. She followed the breadcrumbs of language to see where they led. 

People who are prone to hearing voices, it turns out, tend to talk about them. They don’t mention these auditory hallucinations explicitly, but they do use associated words—“sound,” “hear,” “chant,” “loud”—more often in regular conversation. The pattern is so subtle you wouldn’t be able to spot the spikes with the naked ear. But a computer can find them. And in tests with dozens of psychiatric patients, Rezaii found that language analysis could predict which of them were likely to develop schizophrenia with more than 90% accuracy, before any typical symptoms emerged. It promised a huge leap forward.

In the past, capturing information about somebody or analyzing a person’s statements to make a diagnosis relied on the skill, experience, and opinions of individual psychiatrists. But thanks to the omnipresence of smartphones and social media, people’s language has never been so easy to record, digitize, and analyze. And a growing number of researchers are sifting through the data we produce—from our choice of language or our sleep patterns to how often we call our friends and what we write on Twitter and Facebook—to look for signs of depression, anxiety, bipolar disorder, and other syndromes. 

To Rezaii and others, the ability to collect this data and analyze it is the next great advance in psychiatry. They call it “digital phenotyping.”

Weighing your words

In 1908, the Swiss psychiatrist Eugen Bleuler announced the name for a condition that he and his peers were studying: schizophrenia. He noted how the condition’s symptoms “find their expression in language” but added, “The abnormality lies not in language itself but what it has to say.”

Bleuler was among the first to focus on what are called the “negative” symptoms of schizophrenia, the absence of something seen in healthy people. These are less noticeable than the so-called positive symptoms, which indicate the presence of something extra, such as hallucinations. One of the most common negative symptoms is alogia, or speech poverty. Patients either speak less or say less when they speak, using vague, repetitive, stereotypical phrases. The result is what psychiatrists call low semantic density.

Low semantic density is a telltale sign that a patient might be at risk of psychosis. Schizophrenia, a common form of psychosis, tends to develop in the late teens to early 20s for men and the late 20s to early 30s for women—but a preliminary stage with milder symptoms usually precedes the full-blown condition. A lot of research is carried out on people in this “prodromal” phase, and psychiatrists like Rezaii are using language and other measures of behavior to try to identify which prodromal patients go on to develop full schizophrenia and why. Building on other research suggesting, for example, that people at high risk of psychosis tend to use fewer possessive pronouns like “my,” “his,” or “ours,” Rezaii and her colleagues wanted to see if a computer could spot low semantic density.

Neguine Rezaii

JAKE BELCHER

The researchers used recordings of conversations made over the last decade or so with two groups of schizophrenia patients at Emory University. They broke each spoken sentence down into a series of core ideas so that a computer could measure the semantic density. The sentence “Well, I think I do have strong feelings about politics” gets a high score, thanks to the words “strong,” “politics,” and “feelings.”

But a sentence like “Now, now I know how to be cool with people because it’s like not talking is like, is like, you know how to be cool with people it’s like now I know how to do that” has a very low semantic density. 

In a second test, they got the computer to count the number of times each patient used words associated with sound—looking for the clues about voices that they might be hearing but keeping secret. In both cases, the researchers gave the computer a baseline of “normal” speech by feeding it online conversations posted by 30,000 users of Reddit.
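
The study’s real pipeline is considerably more sophisticated than word counting, but the two measurements are easy to picture with a deliberately crude sketch. Everything below (the stopword list, the sound-word list, the idea of comparing against a Reddit-derived baseline rate) is a simplified stand-in for the paper’s actual method, not a reconstruction of it.

```python
# Toy illustration only -- not the method from Rezaii's paper, which models
# sentence meaning rather than counting words. Word lists are placeholders.
import re

STOPWORDS = {"i", "you", "it", "it's", "the", "a", "an", "to", "of", "and",
             "is", "do", "have", "that", "now", "how", "be", "like", "know",
             "well", "think", "with", "because", "not", "about", "so"}
SOUND_WORDS = {"sound", "hear", "chant", "loud", "voice", "whisper", "noise"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def semantic_density(text):
    """Crude proxy: the share of tokens that are content words rather than filler."""
    tokens = tokenize(text)
    content = [t for t in tokens if t not in STOPWORDS]
    return len(content) / len(tokens) if tokens else 0.0

def sound_word_rate(text):
    """How often words associated with auditory experience appear."""
    tokens = tokenize(text)
    return sum(t in SOUND_WORDS for t in tokens) / len(tokens) if tokens else 0.0

high = "Well, I think I do have strong feelings about politics"
low = ("Now, now I know how to be cool with people because it's like not talking "
       "is like, is like, you know how to be cool with people it's like now I "
       "know how to do that")

print(semantic_density(high))  # noticeably higher than the repetitive sentence
print(semantic_density(low))

# In the study, a patient's sound_word_rate would be compared against a baseline
# rate estimated from "normal" conversation (the researchers used Reddit posts).
```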

When psychiatrists meet people in the prodromal phase, they use a standard set of interviews and cognitive tests to predict which will go on to develop psychosis. They usually get it right 80% of the time. By combining the two analyses of speech patterns, Rezaii’s computer scored at least 90%.

She says there’s a long way to go before the discovery could be used in the clinic to help predict what will happen to patients. The study looked at the speech of just 40 people; the next step would be to increase the sample size. But she’s already working on software that could quickly analyze the conversations she has with patients. “So you hit the button and it gives you numbers. What is the semantic density of the speech of the patient? What were the subtle features that the patient talked about but did not necessarily express in an explicit way?” she says. “If it’s a way to get into the deeper, more subconscious layers, that would be very cool.” 

The results also have an obvious implication: If a computer can reliably detect such subtle changes, why not continuously monitor those at risk? 

More than just schizophrenia

Around one in four people across the world will suffer from a psychiatric syndrome during their lifetime. Two in four now own a smartphone. Using the gadgets to capture and analyze speech and text patterns could act as an early warning system. That would give doctors time to intervene with those at highest risk, perhaps to watch them more closely—or even to try therapies to reduce the chance of a psychotic event.

Patients could also use technology to monitor their own symptoms. Mental-health patients are often unreliable narrators when it comes to their health—unable or unwilling to identify their symptoms. Even digital monitoring of basic measurements like the number of hours of sleep somebody is getting can help, says Kit Huckvale, a postdoctoral fellow who works on digital health at the Black Dog Institute in Sydney, because it can warn patients when they might be most vulnerable to a downturn in their condition.

“Using these computers that we all carry around with us, maybe we do have access to information about changes in behavior, cognition, or experience that provide robust signals about future mental illness,” he says. “Or indeed, just the earliest stages of distress.”

And it’s not just schizophrenia that could be spotted with a machine. Probably the most advanced use of digital phenotyping is to predict the behaviors of people with bipolar disorder. By studying people’s phones, psychiatrists have been able to pick up the subtle signs that precede an episode. When a downswing in mood is coming, the GPS sensors in bipolar patients’ phones show that they tend to be less active. They answer incoming calls less, make fewer outgoing calls, and generally spend more time looking at the screen. In contrast, before a manic phase they move around more, send more text messages, and spend longer talking on the phone. 
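
None of these monitoring systems publishes its code, but the underlying idea is simple: reduce each day of phone data to a few behavioral numbers and watch for departures from the person’s own baseline. The field names, thresholds, and flagging rule below are invented for illustration; real trials apply their own clinical criteria.

```python
# Illustrative sketch only -- field names, thresholds, and the flagging rule
# are invented, not taken from any published digital-phenotyping study.
from statistics import mean, stdev

def daily_features(day_log):
    """Reduce one day of (hypothetical) phone logs to a few behavioral numbers."""
    return {
        "km_travelled": day_log["gps_km"],           # movement, from GPS traces
        "calls_answered": day_log["calls_answered"],
        "calls_made": day_log["calls_made"],
        "texts_sent": day_log["texts_sent"],
        "screen_hours": day_log["screen_seconds"] / 3600,
    }

def flag_deviations(history, today, threshold=2.0):
    """Flag features more than `threshold` standard deviations from this
    person's own recent baseline -- e.g. a drop in movement and calls before a
    low mood, or a spike in texting before a manic phase."""
    flags = {}
    for key, value in today.items():
        baseline = [day[key] for day in history]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(value - mu) > threshold * sigma:
            flags[key] = (value, mu)
    return flags
```

In the trials described below, a flag like this doesn’t trigger anything automatic; it simply prompts a clinician to check in, as the nurses in the Copenhagen study do.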

Starting in March 2017, hundreds of patients discharged from psychiatric hospitals around Copenhagen have been loaned customized phones so doctors can remotely watch their activity and check for signs of low mood or mania. If the researchers spot unusual or worrying patterns, the patients are invited to speak with a nurse. By watching for and reacting to early warning signs in this way, the study aims to reduce the number of patients who experience a serious relapse.

Such projects seek consent from participants and promise to keep the data confidential. But as details on mental health get sucked into the world of big data, experts have raised concerns about privacy.

“The uptake of this technology is definitely outpacing legal regulation. It’s even outpacing public debate,” says Piers Gooding, who studies mental-health law and policies at the Melbourne Social Equity Institute in Australia. “There needs to be a serious public debate about the use of digital technologies in the mental-health context.”

Already, scientists have used videos posted by families to YouTube—without seeking explicit consent—to train computers to find distinctive body movements of children with autism. Others have sifted Twitter posts to help track behaviors associated with the transmission of HIV, while insurance companies in New York are officially allowed to study people’s Instagram feeds before calculating their life insurance premiums.

As technology tracks and analyzes our behaviors and lifestyles with ever more precision—sometimes with our knowledge and sometimes without—the opportunities for others to remotely monitor our mental state are growing fast.

Privacy protections

In theory, privacy laws should prevent mental-health data from being passed around. In the US, the 24-year-old HIPAA statute regulates the sharing of medical data, and Europe’s General Data Protection Regulation (GDPR) should theoretically stop it too. But a 2019 report from surveillance watchdog Privacy International found that popular websites about depression in France, Germany, and the UK shared user data with advertisers, data brokers, and large tech companies, while some websites offering depression tests leaked answers and test results to third parties.

Gooding points out that for several years Canadian police would pass details on people who attempted suicide to US border officials, who would then refuse them entry. In 2017, an investigation concluded that the practice was illegal, and it was stopped. 

Few would dispute that this was an invasion of privacy. Medical information is, after all, meant to be sacrosanct. Even when diagnoses of mental illness are made, laws around the world are supposed to prevent discrimination in the workplace and elsewhere. 

But some ethicists worry that digital phenotyping blurs the lines on what could or should be classed, regulated, and protected as medical data. 

If the minutiae of our daily lives is sifted for clues to our mental health, then our “digital exhaust”—data on which words we choose, how quickly we respond to texts and calls, how often we swipe left, which posts we choose to like—could tell others at least as much about our state of mind as what’s in our confidential medical records. And it’s almost impossible to hide.

“The technology has pushed us beyond the traditional paradigms that were meant to protect certain types of information,” says Nicole Martinez-Martin, a bioethicist at Stanford. “When all data are potentially health data then there’s a lot of questions about whether that sort of health-information exceptionalism even makes sense anymore.”

Health-care information, she adds, used to be simple to classify—and therefore protect—because it was produced by health-care providers and held within health-care institutions, each of which had its own regulations to safeguard the needs and rights of its patients. Now, many ways of tracking and monitoring mental health using signals from our everyday actions are being developed by commercial firms, which don’t.

Facebook, for example, claims to use AI algorithms to find people at risk of suicide, by screening language in posts and concerned comments from friends and family. The company says it has alerted authorities to help people in at least 3,500 cases. But independent researchers complain it has not revealed how its system works or what it does with the data it gathers.

“Although suicide prevention efforts are vitally important, this is not the answer,” says Gooding. “There is zero research as to the accuracy, scale, or effectiveness of the initiative, nor information on what precisely the company does with the information following each apparent crisis. It’s basically hidden behind a curtain of trade secrecy laws.” 

The problems are not just in the private sector. Although researchers working in universities and research institutes are subject to a web of permissions to ensure consent, privacy, and ethical approval, some academic practices could actually encourage and enable the misuse of digital phenotyping, Rezaii points out.

“When I published my paper on predicting schizophrenia, the publishers wanted the code to be openly accessible, and I said fine because I was into liberal and free stuff. But then what if someone uses that to build an app and predict things on weird teenagers? That’s risky,” she says. “Journals have been advocating free publication of the algorithms. It has been downloaded 1,060 times so far. I do not know for what purpose, and that makes me uncomfortable.” 

Beyond privacy concerns, some worry that digital phenotyping is simply overhyped.

Serife Tekin, who studies the philosophy of psychiatry at the University of Texas at San Antonio, says psychiatrists have a long history of jumping on the latest technology as a way to try to make their diagnoses and treatments seem more evidence-based. From lobotomies to the colorful promise of brain scans, the field tends to move with huge surges of uncritical optimism that later proves to be unfounded, she says—and digital phenotyping could be simply the latest example. 

“Contemporary psychiatry is in crisis,” she says. “But whether the solution to the crisis in mental-health research is digital phenotyping is questionable. When we keep putting all of our eggs in one basket, that’s not really engaging with the complexity of the problem.”

Making mental health more modern?

Neguine Rezaii knows that she and others working on digital phenotyping are sometimes blinded by the bright potential of the technology. “There are things I haven’t thought about because we’re so excited about getting as much data as possible about this hidden signal in language,” she says.

But she also knows that psychiatry has relied for too long on little more than informed guesswork. “We don’t want to make some questionable inferences about what the patient might have said or meant if there is a way to objectively find out,” she says. “We want to record them, hit a button, and get some numbers. At the end of the appointment, we have the results. That’s the ideal. That’s what we’re working on.” 

To Rezaii, it’s natural that modern psychiatrists should want to use smartphones and other available technology. Discussions about ethics and privacy are important, she says, but so is an awareness that tech firms already harvest information on our behavior and use it—without our consent—for less noble purposes, such as deciding who will pay more for identical taxi rides or wait longer to be picked up. 

“We live in a digital world. Things can always be abused,” she says. “Once an algorithm is out there, then people can take it and use it on others. There’s no way to prevent that. At least in the medical world we ask for consent.”

via Technology Review Feed – Tech Review Top Stories https://ift.tt/1XdUwhl

August 13, 2020 at 05:29AM

A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.

https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/

At the start of the week, Liam Porr had only heard of GPT-3. By the end, the college student had used the AI model to produce an entirely fake blog under a fake name.

It was meant as a fun experiment. But then one of his posts found its way to the number-one spot on Hacker News. Few people noticed that his blog was completely AI-generated. Some even hit “Subscribe.”

While many have speculated about how GPT-3, the most powerful language-generating AI tool to date, could affect content production, this is one of the only known cases to illustrate the potential. What stood out most about the experience, says Porr, who studies computer science at the University of California, Berkeley: “It was super easy, actually, which was the scary part.”

GPT-3 is OpenAI’s latest and largest language AI model, which the San Francisco–based research lab  began drip-feeding out in mid-July. In February of last year, OpenAI made headlines with GPT-2, an earlier version of the algorithm, which it announced it would withhold for fear it would be abused. The decision immediately sparked a backlash, as researchers accused the lab of pulling a stunt. By November, the lab had reversed position and released the model, saying it had detected “no strong evidence of misuse so far.”

The lab took a different approach with GPT-3; it neither withheld it nor granted public access. Instead, it gave the algorithm to select researchers who applied for a private beta, with the goal of gathering their feedback and commercializing the technology by the end of the year.

Porr submitted an application. He filled out a form with a simple questionnaire about his intended use. But he also didn’t wait around. After reaching out to several members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the graduate student agreed to collaborate, Porr wrote a small script for him to run. It gave GPT-3 the headline and introduction for a blog post and had it spit out several completed versions. Porr’s first post (the one that charted on Hacker News), and every post after, was a direct copy-and-paste from one of those outputs.
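
Porr hasn’t published the script, so the sketch below is only a guess at what it might have looked like: a handful of calls to OpenAI’s completions endpoint, with the headline and intro as the prompt. The model name, sampling parameters, and example intro text are assumptions, not details from his experiment.

```python
# Hypothetical reconstruction -- Porr's actual script isn't public.
# Assumes beta access to OpenAI's (then private) GPT-3 completions API.
import openai

openai.api_key = "YOUR_BETA_API_KEY"  # placeholder

def draft_posts(headline, intro, n=3):
    """Ask GPT-3 to continue a blog post from its headline and opening lines."""
    prompt = f"Title: {headline}\n\n{intro}"
    response = openai.Completion.create(
        engine="davinci",     # the largest GPT-3 model offered in the beta
        prompt=prompt,
        max_tokens=700,       # roughly a full blog post
        temperature=0.7,      # fluent but not too repetitive
        n=n,                  # several versions, so the best one can be picked
    )
    return [choice["text"] for choice in response["choices"]]

# Example headline from the article; the intro line here is invented.
drafts = draft_posts(
    "Feeling unproductive? Maybe you should stop overthinking",
    "It seems counterintuitive, but sometimes the best way to get more done is to do less.",
)
```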

“From the time that I thought of the idea and got in contact with the PhD student to me actually creating the blog and the first blog going viral—it took maybe a couple of hours,” he says.

Porr’s fake blog post, written under the fake name “adolos,” reaches #1 on Hacker News.
SCREENSHOT / LIAM PORR

The trick to generating content without the need for editing was understanding GPT-3’s strengths and weaknesses. “It’s quite good at making pretty language, and it’s not very good at being logical and rational,” says Porr. So he picked a popular blog category that doesn’t require rigorous logic: productivity and self-help.

From there, he wrote his headlines following a simple formula: he’d scroll around on Medium and Hacker News to see what was performing in those categories and put together something relatively similar. “Feeling unproductive? Maybe you should stop overthinking,” he wrote for one. “Boldness and creativity trumps intelligence,” he wrote for another. On a few occasions, the headlines didn’t work out. But as long as he stayed on the right topics, the process was easy.

After two weeks of nearly daily posts, he retired the project with one final, cryptic, self-written message. Titled “What I would do with GPT-3 if I had no ethics,” it described his process as a hypothetical. The same day, he also posted a more straightforward confession on his real blog.

A Hacker News user suggests Porr’s blog post was written by GPT-3; another responds that the comment "isn't acceptable." The few people who grew suspicious of Porr’s fake blog were downvoted by other members of the community.
SCREENSHOT / LIAM PORR

Porr says he wanted to prove that GPT-3 could be passed off as a human writer. Indeed, despite the algorithm’s somewhat weird writing pattern and occasional errors, only three or four of the dozens of people who commented on his top post on Hacker News raised suspicions that it might have been generated by an algorithm. All those comments were immediately downvoted by other community members.

For experts, this has long been the worry raised by such language-generating algorithms. Ever since OpenAI first announced GPT-2, people have speculated that it was vulnerable to abuse. In its own blog post, the lab focused on the AI tool’s potential to be weaponized as a mass producer of misinformation. Others have wondered whether it could be used to churn out spam posts full of relevant keywords to game Google.

Porr says his experiment also shows a more mundane but still troubling alternative: people could use the tool to generate a lot of clickbait content. “It’s possible that there’s gonna just be a flood of mediocre blog content because now the barrier to entry is so easy,” he says. “I think the value of online content is going to be reduced a lot.”

Porr plans to do more experiments with GPT-3. But he’s still waiting to get access from OpenAI. “It’s possible that they’re upset that I did this,” he says. “I mean, it’s a little silly.”

via Technology Review Feed – Tech Review Top Stories https://ift.tt/1XdUwhl

August 14, 2020 at 04:32AM

Intel Xe HPG Discrete Gaming GPU Coming 2021

https://www.legitreviews.com/intel-xe-hpg-discrete-gaming-gpu-coming-2021_221214


The Intel Xe GPU was announced by Intel back in 2018 and we have all been waiting patiently to see just how powerful Xe graphics is going to be. Intel will be using Xe in a wide range of applications, from integrated solutions all the way up to dedicated cards deployed in HPC/AI environments. Intel has long said that a dedicated card would launch in the middle of 2020, but it never really said what that product was actually going to be. Then at CES 2020, Legit Reviews was able to see the first discrete development board, codenamed DG1, in action. Intel was touting it as a software development vehicle that would help developers try out the brand-new instruction set that Intel Xe runs.

Intel Xe HPG GPU For Gamers

During the Intel Architecture Day 2020 briefings, Raja Koduri addressed where enthusiast gamers fit into the mix with Intel Xe graphics. The gaming side of the Xe GPU powerhouse will be delivered under the Xe HPG series. Intel Xe HPG solutions will have dedicated hardware ray-tracing support and will be produced by a non-Intel fab for the time being. This means Intel could be using TSMC to produce the wafers for Xe HPG GPUs, which is ironically the same fab that makes AMD Radeon and NVIDIA GeForce GPUs.

Intel Xe HPG Has Raytracing

AMD and NVIDIA have some time to get ready, though, as Raja Koduri said that Intel Xe HPG dedicated cards will ship sometime in 2021. Intel also did not clarify whether discrete solutions would be available for both desktop and mobile form factors at launch.

Intel Xe Execution Units

Intel’s Xe graphics will have up to 96 EUs (execution units), up to 48 texels and 24 pixels per clock through three pipelines, up to 1,536 FLOPS (floating-point operations per clock), up to 16MB of L3 cache, and twice the memory bandwidth. The GPU engine is about 1.5x larger than Intel’s Gen 11 graphics. The vector lanes are up from 8 to 16, and it features a more efficient thread controller, along with improved color and depth compression algorithms with end-to-end compression to optimize bandwidth usage.

Intel Gen 11 Execution Units

Intel’s current Gen 11 graphics solution tops out at 64 EUs offering 1,024 FLOPS per clock, so Xe LP brings 50% more execution units, and those units have been substantially redesigned.
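
The per-clock figures line up if you assume each EU still performs eight FP32 multiply-adds (16 floating-point operations) per clock. This is a back-of-the-envelope check, not an official Intel breakdown:

```python
# Back-of-the-envelope check of the per-clock figures quoted above.
# Assumes each EU performs 8 FP32 multiply-adds (= 16 FLOPs) per clock.
FLOPS_PER_EU_PER_CLOCK = 16

gen11_eus, xe_lp_eus = 64, 96
print(gen11_eus * FLOPS_PER_EU_PER_CLOCK)   # 1024 FLOPS per clock (Gen 11)
print(xe_lp_eus * FLOPS_PER_EU_PER_CLOCK)   # 1536 FLOPS per clock (Xe LP)
print(f"{xe_lp_eus / gen11_eus - 1:.0%}")   # 50% more execution units
```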

Intel has designed the Xe microarchitecture from scratch to ensure you’ll be able to play more game titles on an Intel GPU than ever before.

Intel Xe Running BF1

Intel showed off a 15W implementation of Xe LP (Low Power) technology running several major games during the briefing. Game titles included Doom Eternal, Grid, and Battlefield 1. Video clips of actual gameplay footage were shown to demonstrate how much better Intel Xe graphics performed versus the current 25W Gen11 graphics solution. The Intel Xe LP integrated graphics solution was able to run the games at a higher frame rate to deliver a smoother gaming experience.

Intel Xe GPU Transcoding Performance

Intel also did a bunch of work on the media and display engines in Intel Xe. The Intel Xe LP media engine retains a similar architecture to the one in Gen11, but both encode and decode performance are up by nearly two times in some cases, across multiple chroma formats (4:4:4, 4:2:2, and 4:2:0).

Intel Xe Media Engine

Intel is now offering AV1 decode and HEVC screen content coding support. There is also 4K and 8K 60Hz support along with HDR10, Dolby Vision, and 12-bit BT.2020 color support. Intel even added support for 360Hz refresh rates and Adaptive Sync!

Intel DG1 Discrete Graphics Card System

We can’t wait to see how Intel Xe HPG discrete graphics perform in 2021, but thankfully we’ll soon have a chance to try out Intel Xe LP graphics in Intel’s upcoming 10nm ‘Tiger Lake’ processors. Intel Tiger Lake mobile processors will be coming to market later this year and will be the first to contain Xe LP graphics.

via Legit Reviews Hardware Articles https://ift.tt/2Y6Fy3O

August 13, 2020 at 11:11AM

A smartphone case that can crawl to a wireless charger

https://geekologie.com/2020/08/a-smartphone-case-that-can-crawl-to-a-wi.php

Researchers from the Seoul National University Biorobotics Laboratory have developed a lightweight and low-profile crawling phone case robot. The legs retract flat when not in use, keeping the form factor as small as possible. According to IEEE Spectrum:

To move the robot forward, a linkage (attached to a motor through a gearbox) pushes the leg back against the ground, as the knee joint keeps the leg straight. On the return stroke, the joint allows the leg to fold, making it compliant so that it doesn’t exert force on the ground. The transmission that sends power from the gearbox to the legs is just 1.5-millimeter thick, but this incredibly thin and lightweight mechanical structure is quite powerful. A non-phone case version of the robot, weighing about 23 g, is able to crawl at 21 centimeters per second while carrying a payload of just over 300 g. That’s more than 13 times its body weight.

Okay, so it can’t technically choose where to crawl, it can sort of just vibrate itself in a relative forward motion. However, it’s not hard to imagine this thing with enough sensors to actually make it functional enough to crawl to a charging pad or, better yet, into your hand. Sure, right now it looks like a disgusting vibrating phone monster, but imagine if that disgusting vibrating phone monster could also think. No matter where you go, you look over your shoulder and there’s your phone. Sitting. Waiting. Plotting. Keep going for video of the case in action.

via Geekologie – Gadgets, Gizmos, and Awesome https://geekologie.com/

August 13, 2020 at 08:17AM

Snapchat’s latest custom Lenses are designed for dancing videos

https://www.engadget.com/snapchat-full-body-tracking-lens-studio-update-115018395.html

Snap has updated its Lens Studio platform so artists and developers can create custom Lenses — the company’s term for AR experiences — that leverage full body tracking. Snapchat’s maker has created two templates, Full Body Triggers and Full Body Attachments, that can conjure up various effects based on what the user is doing inside the frame. As a tutorial video explains, these include toggling virtual objects, playing short pieces of animation and particle bursts. Before, developers could use a Skeletal template to track eight points on the upper body. The new templates, meanwhile, can monitor 18 points including the user’s knees and ankles.

There’s an obvious application for these new developer tools: dance videos. The genre has always been popular across various social platforms including YouTube and Instagram. TikTok’s monumental rise, however, and the ongoing coronavirus pandemic — which has forced many to stay indoors and find new ways to entertain themselves — have encouraged people to create even more body-grooving clips. It’s no surprise, therefore, that Snapchat wants to support the trend with new artist and developer tools. If you don’t want to download and learn Lens Studio, fear not: Snapchat has already released four creator-made Lenses — Star Burst, Be You, Alone and Be Happy — that you can try right now in the app.

The Lens Studio update follows a long period of slow but steady growth for the company. Snapchat had 238 million daily active users last quarter, up 35 million year-over-year and nine million higher than the previous quarter. The company has quietly improved Spectacles, launched an app platform called Snap Minis and responded to rivals like TikTok by brokering deals with music labels and allowing users to share their Stories on other platforms. It hasn’t all been smooth sailing, though. The company released yet another insensitive filter — this time telling people to "smile and break the chains" — for Juneteenth a couple of months back. Following user backlash, the overlay was pulled and Snapchat issued an apology.

via Engadget http://www.engadget.com

August 14, 2020 at 07:00AM