NVIDIA’s DLSS 3.5 makes ray traced games look better with AI

https://www.engadget.com/nvidias-dlss-35-makes-ray-traced-games-look-better-with-ai-130012143.html?src=rss

Last year, NVIDIA unveiled DLSS 3 with frame interpolation, which used its AI-driven rendering accelerator to add extra frames to games. Now at Gamescom it’s introducing DLSS 3.5, which adds Ray Reconstruction, a new feature that will use the company’s neural network to improve the quality of ray traced images. It’ll be available for all RTX GPUs—unlike DLSS 3’s frame interpolation, which only works with RTX 40-series cards.

NVIDIA says Ray Reconstruction will replace "hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays." That’s similar to NVIDIA’s original pitch for DLSS — making low-res textures look better thanks to AI — and it could potentially lead to better ray tracing performance as well. In images shown to media, Ray Reconstruction appears to deliver sharper reflections and textures in supported titles. (See comparisons below.)

According to the company, Cyberpunk 2077 in Overdrive Mode (its most demanding ray tracing setting) hit 108 fps with DLSS 3.5 and Ray Reconstruction, while the same system reached 100 fps with DLSS 3 alone, 63 fps with DLSS 2 (which lacks Frame Generation) and 20 fps without any DLSS help.
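As a rough sanity check on those figures, the relative speedups work out as below. This is a minimal sketch using only the fps numbers NVIDIA quoted, all from a single unspecified test system; the mode labels are shorthand, not official product names.

```python
# NVIDIA's quoted Cyberpunk 2077 Overdrive Mode results (one test system).
fps = {
    "No DLSS": 20,
    "DLSS 2 (upscaling only)": 63,
    "DLSS 3 (adds Frame Generation)": 100,
    "DLSS 3.5 (adds Ray Reconstruction)": 108,
}

baseline = fps["No DLSS"]
for mode, value in fps.items():
    # Speedup relative to running with no DLSS at all.
    print(f"{mode}: {value} fps ({value / baseline:.1f}x vs. no DLSS)")
```

By this arithmetic, DLSS 3.5 delivers a bit over a 5x frame rate improvement versus no DLSS, with Ray Reconstruction itself contributing the step from 100 to 108 fps on top of Frame Generation.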

Just like previous DLSS releases, developers will have to manually implement support for Ray Reconstruction. Cyberpunk 2077 (and its expansion Phantom Liberty) will be the first DLSS 3.5 title in September, followed by Portal with RTX and Alan Wake 2. NVIDIA will be showing off Ray Reconstruction at Gamescom this week, and hopefully we’ll get a look ourselves sometime soon.

This article originally appeared on Engadget at https://ift.tt/HAIP8K3

via Engadget http://www.engadget.com

August 22, 2023 at 08:06AM

The Download: reusing heat from computers, and period research

https://www.technologyreview.com/2023/08/21/1078143/the-download-reusing-heat-period-research/

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This startup has engineered a clever way to reuse waste heat from cloud computing

The idea of using the wasted heat of computing to do something else has been mooted plenty of times before. Now, UK startup Heata is actually doing it. When you sign up, it places a server in your home, where it connects via your Wi-Fi network to similar servers in other homes—all of which process data from companies that pay it for cloud computing services. 

Each server prevents one ton of carbon dioxide equivalent per year from being emitted and saves homeowners an average of £250 on hot water annually, a considerable saving in a country where many households struggle to afford heating.

The clever thing is that it provides a way to use electricity twice—serving the rapidly growing cloud computing industry while also heating domestic hot water—at a time when energy efficiency matters more than ever. Read the full story.
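The per-server figures above scale linearly, which makes a back-of-the-envelope sketch easy. The 1,000-home fleet size below is a hypothetical assumption for illustration, not a figure from the story.

```python
# Figures quoted in the story, per server per year.
CO2_SAVED_TONNES = 1.0        # tCO2e avoided per server
HOT_WATER_SAVINGS_GBP = 250   # average homeowner saving on hot water

def fleet_savings(num_servers: int) -> tuple[float, int]:
    """Scale the quoted per-server figures to a given fleet size."""
    return (num_servers * CO2_SAVED_TONNES,
            num_servers * HOT_WATER_SAVINGS_GBP)

# Hypothetical 1,000-home deployment.
co2, gbp = fleet_savings(1000)
print(f"{co2:.0f} tCO2e avoided and £{gbp:,} saved on hot water per year")
```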

—Luigi Avantaggiato

Tiny faux organs could crack the mystery of menstruation

A group of scientists is using new tools akin to miniature organs to study a poorly understood—and frequently problematic—part of human physiology: menstruation.

Heavy, sometimes debilitating periods strike at least a third of people who menstruate at some point in their lives, causing some to regularly miss work or school. Anemia threatens about two-thirds of people with heavy periods. Many people desperately need treatments to make their period more manageable, but it’s difficult for scientists to design medications without understanding how menstruation really works.

That understanding could be in the works, thanks to endometrial organoids—biomedical tools made from bits of the tissue that lines the uterus. The research is still very much in its infancy. But organoids have already provided insights into why menstruation is routine for some people and fraught for others. Some researchers are hopeful that these early results mark the dawn of a new era. Read the full story.

—Saima Sidik

Both of the stories featured today are from the new ethics-themed print magazine issue of MIT Technology Review, set to go live on Wednesday. Subscribe to read it, if you don’t already!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Canadian leaders are calling on Meta to reverse its news ban
They say the block has been preventing people from getting access to crucial information about wildfires. (WP $)
+ 850 people are still missing after the Maui wildfires, the island’s mayor has said. (NBC)
+ Hawaii’s governor says the state ‘tipped too far’ in trying to preserve water. (NYT $)

2 Stars are inking deals to license their AI doubles
It creates new ways to make money—but also a hefty dose of anxiety for the future. (The Information $)
+ People are hiring out their faces to become deepfake-style marketing clones. (MIT Technology Review)
+ Despite early excitement, a lot of companies are struggling to meaningfully deploy AI. (Axios)
+ Most Americans want AI development to go more slowly. (Vox)

3 Russia’s bid to return to the moon failed
Its Luna 25 spacecraft slammed into the moon’s surface yesterday. (The Economist $)

4 Cruise has to halve its robotaxi fleet after two crashes in San Francisco
Just over a week after it gained approval to operate at all hours in the city. (Quartz)
+ Lidar on a chip will be crucial to the future of fully autonomous driving. (IEEE Spectrum)

5 Why some ships are getting back their sails
Shipping accounts for 2.1% of global CO2 emissions—using wind instead of fuel could help to cut that. (BBC)
+ How ammonia could help clean up global shipping. (MIT Technology Review)

6 Musk says X will no longer have a block function
Though it will remain for direct messages. (CNBC)
+ A glitch broke links from before 2014 on X. (The Verge)
+ Musk’s antics are starting to wear thin among some of his fans. (WSJ $)
+ Tesla is suing two former employees for allegedly leaking data. (Quartz $)

7 Here’s the trouble with getting your news from influencers
If you’re relying on a single creator, what happens when they’re wrong? (The Verge)

8 Can video games help people with ADHD?
As stimulant shortages drag on, people are starting to seek out help wherever they can. (Wired $)
+ We may never fully know how video games affect our wellbeing. (MIT Technology Review)

9 Haptic suits let you feel music through your skin
Groovy! (NYT $)

10 How Apple won US teens over
A recent survey found 87% have iPhones, and they’re unlikely to switch. (WSJ $)
+ Switching on subtitles is all the rage too. (Axios)

Quote of the day

“I used to think, ‘I’m concerned for my children and grandchildren.’ Now it’s to the point where I’m concerned about myself.”

—Mike Flannigan, a professor of wildland fire at Thompson Rivers University in Kamloops, Canada, tells the LA Times how he feels about scientists’ most dire climate predictions coming true.

The big story

This fuel plant will use agricultural waste to combat climate change

Photograph of an orchard (image credit: Mote)

February 2022

A startup called Mote plans to build a new type of fuel-producing plant in California’s fertile Central Valley that would, if it works as hoped, continually capture and bury carbon dioxide, starting from 2024. 

It’s among a growing number of efforts to commercialize a concept first proposed two decades ago as a means of combating climate change, known as bioenergy with carbon capture and sequestration, or BECCS.

It’s an ambitious plan. However, there are serious challenges to doing BECCS affordably and in ways that reliably suck down significant levels of carbon dioxide. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Amused by a 2001 BBC news report that refers to camera phones as a “gimmick.”
+ It won’t be to everyone’s taste, but this drink sounds delicious to me. 
+ Fan of Dave Grohl? I thoroughly recommend reading his autobiography.
+ Today I discovered you can deter seagulls from stealing your food by staring them down.

via Technology Review Feed – Tech Review Top Stories https://ift.tt/H06r5ol

August 21, 2023 at 07:14AM

Why we should all be rooting for boring AI

https://www.technologyreview.com/2023/08/22/1078230/why-we-should-all-be-rooting-for-boring-ai/

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m back from a wholesome week off picking blueberries in a forest. So this story we published last week about the messy ethics of AI in warfare is just the antidote, bringing my blood pressure right back up again. 

Arthur Holland Michel does a great job looking at the complicated and nuanced ethical questions around warfare and the military’s increasing use of artificial-intelligence tools. There are myriad ways AI could fail catastrophically or be abused in conflict situations, and there don’t seem to be any real rules constraining it yet. Holland Michel’s story illustrates how little there is to hold people accountable when things go wrong.  

Last year I wrote about how the war in Ukraine kick-started a new boom in business for defense AI startups. The latest hype cycle has only added to that, as companies—and now the military too—race to embed generative AI in products and services. 

Earlier this month, the US Department of Defense announced it is setting up a Generative AI Task Force, aimed at “analyzing and integrating” AI tools such as large language models across the department. 

The department sees tons of potential to “improve intelligence, operational planning, and administrative and business processes.” 

But Holland Michel’s story highlights why the first two use cases might be a bad idea. Generative AI tools, such as language models, are glitchy and unpredictable, and they make things up. They also have massive security vulnerabilities, privacy problems, and deeply ingrained biases.

Applying these technologies in high-stakes settings could lead to deadly accidents where it’s unclear who or what should be held responsible, or even why the problem occurred. Everyone agrees that humans should make the final call, but that is made harder by technology that acts unpredictably, especially in fast-moving conflict situations. 

Some worry that the people lowest on the hierarchy will pay the highest price when things go wrong: “In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the ‘decision’ will absorb the blame and protect everyone else along the chain of command from the full impact of accountability,” Holland Michel writes. 

The only ones who seem likely to face no consequences when AI fails in war are the companies supplying the technology.

It helps companies that the rules the US has set to govern AI in warfare are mere recommendations, not laws. That makes it really hard to hold anyone accountable. Even the AI Act, the EU’s sweeping upcoming regulation for high-risk AI systems, exempts military uses, which are arguably the highest-risk applications of all.

While everyone is looking for exciting new uses for generative AI, I personally can’t wait for it to become boring. 

Amid early signs that people are starting to lose interest in the technology, companies might find that these sorts of tools are better suited for mundane, low-risk applications than solving humanity’s biggest problems.

Applying AI in, for example, productivity software such as Excel, email, or word processing might not be the sexiest idea, but compared to warfare it’s a relatively low-stakes application, and simple enough to have the potential to actually work as advertised. It could help us do the tedious bits of our jobs faster and better.

Boring AI is unlikely to break as easily and, most important, won’t kill anyone. Hopefully, soon we’ll forget we’re interacting with AI at all. (It wasn’t that long ago when machine translation was an exciting new thing in AI. Now most people don’t even think about its role in powering Google Translate.) 

That’s why I’m more confident that organizations like the DoD will find success applying generative AI in administrative and business processes. 

Boring AI is not morally complex. It’s not magic. But it works. 

Deeper Learning

AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Amid all the chatter about ChatGPT, artificial general intelligence, and the prospect of robots taking people’s jobs, regulators in the EU and the US have been ramping up warnings against AI and emotion recognition. Emotion recognition is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings. 

But why is this a top concern? Western regulators are particularly concerned about China’s use of the technology, and its potential to enable social control. And there’s also evidence that it simply does not work properly. Tate Ryan-Mosley dissected the thorny questions around the technology in last week’s edition of The Technocrat, our weekly newsletter on tech policy.

Bits and Bytes

Meta is preparing to launch free code-generating software
A version of its new LLaMA 2 language model that is able to generate programming code will pose a stiff challenge to similar proprietary code-generating programs from rivals such as OpenAI, Microsoft, and Google. The open-source program is called Code Llama, and its launch is imminent, according to The Information. (The Information)

OpenAI is testing GPT-4 for content moderation
Using the language model to moderate online content could really help alleviate the mental toll content moderation takes on humans. OpenAI says it’s seen some promising first results, although the tech does not outperform highly trained humans. A lot of big, open questions remain, such as whether the tool can be attuned to different cultures and pick up context and nuance. (OpenAI)

Google is working on an AI assistant that offers life advice
The generative AI tool could function as a life coach, offering up ideas, planning instructions, and tutoring tips. (The New York Times)

Two tech luminaries have quit their jobs to build AI systems inspired by bees
Sakana, a new AI research lab, draws inspiration from the animal kingdom. Founded by two prominent industry researchers and former Googlers, the company plans to make multiple smaller AI models that work together, the idea being that a “swarm” of programs could be as powerful as a single large AI model. (Bloomberg)

via Technology Review Feed – Tech Review Top Stories https://ift.tt/H06r5ol

August 22, 2023 at 04:48AM