Intel’s AI Playground is the perfect ‘everything’ app for newbies—with a catch

https://www.pcworld.com/article/2636980/intels-ai-playground-is-the-perfect-everything-app-for-newbies-with-a-catch.html

If you try out Intel’s AI Playground, which incorporates everything from AI art to an LLM chatbot to even text-to-video in a single app, you might think: Wow! OK! An all-in-one local AI app that does everything is worth trying out! And it is… except that it’s made for just a small slice of Intel’s own products.

Quite simply, no single AI app has emerged as the "Amazon" of AI, doing everything you'd want in a single service or site. You can use a tool like Adobe Photoshop or Firefly for sophisticated image generation and editing, but chatting is out. ChatGPT or Google Gemini can converse with you, and even generate images, but only to a limited extent.

Most of these services also require you to hopscotch back and forth between sites, and many charge a subscription fee. Intel's AI Playground merges all of these capabilities inside a single, well-organized app that runs locally (and entirely privately) on your PC, and it's all free.

Should I let you in on the catch? I suppose I have to. AI Playground is a showcase for Intel's Core Ultra processors, spanning both their CPUs and GPUs: specifically, the Core Ultra 100 (Meteor Lake) and Core Ultra 200V (Lunar Lake) chips. But it could be so, so much better if everyone could use it.

Yes, I realize that some users are quite suspicious of AI. (There are even AI-generated news stories!) Others, however, have found that certain daily tasks, such as business email, can be handed off to ChatGPT. AI is a tool, even if it can be used in ways we disagree with.

What’s in AI Playground?

AI Playground has three main areas, all designated by tabs on the top of the screen:

  • Create: An AI image generator, which operates in either a default text-to-image mode, or in a “workflow” mode that uses a more sophisticated back end for higher-quality images
  • Enhance: Here, you can edit your images, either upscaling them or altering them through generative AI
  • Answer: A conventional AI chatbot, either as a standalone or with the ability to upload your own text documents

Each of those sections is self-sufficient, usable by itself. But in the upper right-hand corner is a settings or "gear" icon, which hides a wealth of additional options that are absolutely worth examining.

How to set up and install AI Playground

AI Playground's strength is in its thoughtfulness, ease of use, and simplicity. If you've ever used a local AI application, you know that the experience can be rough. Some tools offer just a command-line interface, which may require a working knowledge of Python or GitHub. AI Playground was designed around the premise that it will take care of everything with a single click. Documentation and explanations might be a little lacking in places, but AI Playground's ease of use is unparalleled.

AI Playground can be downloaded from Intel’s AI Playground page. At press time, AI Playground was on version 2.2.1 beta.

Note that the app and its back-end code require either a Core Ultra 100H chip ("Meteor Lake"), a Core Ultra 200V chip ("Lunar Lake"), or one of Intel's Arc discrete GPUs, including the Alchemist and Battlemage parts. If you own a massive gaming laptop with a 14th-gen Intel Core chip or an Nvidia RTX 5090 GPU, you're out of luck. Same with the Core Ultra 200H, or "Arrow Lake."

Since this is an "AI Playground," you might think that the chip's NPU would be used. Nope. All of these applications tap just the chip's integrated GPU; I didn't see the NPU being accessed once via Windows Task Manager.

Also, keep in mind that these AI models depend on the GPU's UMA frame buffer, the memory pool that's shared between system memory and the integrated GPU. Under this unified memory architecture (UMA), Intel's integrated graphics can claim up to half of the available system memory, while discrete GPUs have their own dedicated VRAM to pull from. The bottom line? You may not have enough video memory available to run every model.
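
If you want a rough sense of what will fit, the back-of-the-envelope math is simple. Here's a minimal Python sketch assuming the integrated GPU can claim up to half of system RAM as its UMA frame buffer; the model sizes are hypothetical examples, not a list from the app:

```python
import psutil  # pip install psutil

# Assumption: an integrated GPU with UMA can use up to half of system RAM.
uma_budget_gb = psutil.virtual_memory().total / 2 / 1024**3

# Hypothetical model sizes on disk, in GB; running a model also needs
# extra working memory beyond the checkpoint itself.
models = {"default-checkpoint": 2.1, "hd-workflow-model": 12.0, "llm-int4": 2.4}

for name, size_gb in models.items():
    verdict = "should fit" if size_gb < uma_budget_gb else "too big"
    print(f"{name}: {size_gb:.1f} GB -> {verdict} (budget ~{uma_budget_gb:.1f} GB)")
```

On a 32GB machine, that budget works out to roughly 16GB, which explains why 12GB models squeak by while anything larger is a gamble.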

Downloading the initial AI Playground application took about 680 megabytes on my machine. But that's only the shell application. The models require an additional download, which will either be handled automatically by the installer or may require you to click a "download" button yourself.

The nice thing is that you don't have to manage any of this. If AI Playground needs a model, it will tell you which one it requires and how much space it will take up on your hard drive. None of the models I saw used more than 12GB of storage space, and many used much less. But if you want to try out a number of models, be prepared to download a couple dozen gigabytes or more.

Playing with AI Playground

I've called Fooocus the easiest way to generate AI art on your PC. For its time, it was! And it works with just about any GPU, too. But AI Playground may be even easier. The Create tab opens with just a space for a prompt and nothing else.

Like most AI art tools, the prompt defines the image, and you can get really detailed. Here's an example: "Award winning photo of a high speed purple sports car, hyper-realism, racing fast over wet track at night. The license plate number is 'B580', motion blur, expansive glowing cityscape, neon lights…"

Enter a prompt and AI Playground will draw four small images, which appear in a vertical column to the left. Each image progresses through a series of rendering steps, with 20 as the default. After an image is completed, some small icons appear next to it with additional options, including importing it into the "Enhance" tab.

The Settings gear is where you can begin tweaking your output. You can select either "Standard" or "HD" resolution, which adjusts the "Image Size" field, and you can also tweak the output format. The "HD" option requires you to download a different model, as does the "Workflow" option to the upper right, which adds workflows based on ComfyUI. Essentially, those produce better-looking images, with the option to guide the output with a reference image or other workflow.

For now, the default model can be adjusted via the "Manual" tab, which opens up two additional options. You'll see a "negative prompt," which excludes whatever you type into it from the image, and a "Safe Check" that blocks gore and other disturbing images. By default, "NSFW" (Not Safe for Work) is added to the negative prompt.

Both the Safe Check and NSFW negative prompt only appear as options in the Default image generator and seem to be on by default elsewhere. It’s up to you whether or not to remove them. The Default model (Lykon/dreamshaper-8) has apparently been trained on nudity and celebrities, though I stuck to public figures for testing purposes.
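
For the curious, here's roughly what the same kind of generation looks like outside AI Playground: a minimal sketch using Hugging Face's diffusers library with the same Lykon/dreamshaper-8 checkpoint. This illustrates the general technique, not AI Playground's actual internals:

```python
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Load the same default checkpoint AI Playground uses.
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")  # on Intel Arc, "xpu" via Intel's PyTorch extension is an option

image = pipe(
    prompt="Award winning photo of a high speed purple sports car, "
           "hyper-realism, racing fast over wet track at night, motion blur, "
           "expansive glowing cityscape, neon lights",
    negative_prompt="NSFW",  # mirrors AI Playground's default negative prompt
    num_inference_steps=20,  # the app's default step count
    width=896, height=576,   # the "Standard" size from my tests
).images[0]
image.save("sports_car.png")
```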

Note that all of your AI-generated art stays local to your PC, though Intel (obviously) warns you not to use a person’s likeness without their permission.

There's also a jaw-droppingly obvious bug that I can't believe Intel didn't catch. HD image generation often begins with the word "UPLOAD" projected over the image, and sometimes the final render keeps it, too. Why? Because there's a field to add a reference image, and "UPLOAD" sits right in the middle of it. Somehow, AI Playground works that UPLOAD text into the image itself.

On my test machine, a Core Ultra 258V (Lunar Lake) with 32GB of RAM, an 896×576 image took 29 seconds to generate with 25 rendering steps in the Default mode. Using the Workflow (Line2-Image-HD-Quality) model at 1280×832 resolution and 20 steps, one image took two minutes and 12 seconds to render. There's also a Fast mode, which should lower the rendering time, though I didn't really like the output quality.

If you find an image you like, you can use the Enhance tab to upscale it. (Upscaling is also being added to the Windows Photos app, which will eventually be made available to Copilot+ PCs using Intel Core Ultra 200 chips, too.) You can also use "inpainting," which allows you to regenerate a portion of the image, and "outpainting," the technique that was used to "expand" the boundaries of the Mona Lisa, for example. You can also ask the AI to tweak the image itself, though I had trouble generating a satisfactory result.

The “Workflow” tab also hides some interesting utilities such as a “face swap” app and a way to “colorize” black-and-white photos. I was disappointed to see that a “text to video” model didn’t work, presumably because my PC was running on integrated graphics.

The "Answer" or chatbot portion of AI Playground seems to be the weakest option. The default model, Microsoft's Phi-3-mini-4k-instruct, refused to answer the dumb comic-book-nerd question, "Who would win in a fight, Wonder Woman or Iron Man?"

It continued.

“What is the best car for an old man? Sorry, I can’t help with that.”

“What’s better, celery or potatoes? I’m sorry, I can’t assist with that. As an AI, I don’t have personal preferences.”

And so on. Switching to a different model built for Intel's OpenVINO toolkit, though, helped. There, the OpenVINO/Phi-3.5-mini-instruct-int4 model took 1.21 seconds to produce its first response token, then generated about 20 tokens per second. (A token isn't quite the length of a word, but it's a good rule of thumb.) I was also able to do some "vibe coding" — generating code via AI without the faintest clue what you're doing. By default, the output is capped at a few hundred tokens, but that can be adjusted via a slider.

You can also import your own model by dropping a GGUF file (the model format used by llama.cpp-style inference engines) into the appropriate folder.
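
If you've never handled a GGUF file outside an all-in-one app, here's a minimal sketch of what loading one looks like with the llama-cpp-python bindings, including a crude measurement of first-token latency and tokens per second; the model path and prompt are hypothetical:

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path: any GGUF checkpoint you've downloaded.
llm = Llama(model_path="models/phi-3.5-mini-instruct-q4.gguf", n_ctx=4096)

start = time.perf_counter()
first_token_at = None
count = 0
for _chunk in llm("Explain what a token is, in one paragraph.",
                  max_tokens=256, stream=True):
    if first_token_at is None:
        first_token_at = time.perf_counter() - start  # first-token latency
    count += 1
elapsed = time.perf_counter() - start

print(f"First token after {first_token_at:.2f}s; "
      f"~{count / (elapsed - first_token_at):.1f} tokens/sec after that.")
```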

Adapt AI Playground to AMD and Nvidia, please!

For all that, I really like AI Playground. Some people are notably (justifiably?) skeptical of AI, especially given how it can make mistakes and replace the authentic output of human artists. I'm not here to argue either side.

What Intel has done, however, is create a surprisingly good general-purpose and enthusiast application for exploring AI, one that receives frequent updates and seems to be consistently improving.

The best thing about AI Playground? It's open source, meaning that someone could probably come up with a fork that adds support for more GPUs and CPUs. From what I can see, that just hasn't happened yet. If it did, this could be the single unified local AI app I've been waiting for.

Researchers in Texas Figure Out a Non-Toxic Method of Making Fuel for Nuclear Fusion

https://gizmodo.com/researchers-in-texas-figure-out-a-non-toxic-method-of-making-fuel-for-nuclear-fusion-2000578533

The promise of fusion energy is cheap and abundant power for the entire planet. Scientists have made startling advances towards achieving it at scale, but there are still many problems holding it back. One of them is the production of fuel, which requires vast amounts of enriched lithium. Enriching lithium has been an environmental catastrophe, but researchers in Texas believe they’ve found a way to do it cheaply and at scale without poisoning the world.

A team of researchers at Texas A&M University discovered the new process by accident while working on a method for cleaning groundwater contaminated during oil and gas extraction. The research has just been published in the scientific journal Chem under the title “Electrochemical 6-Lithium Isotope Enrichment Based on Selective Insertion in 1D Tunnel-Structured V2O5.”

The effect this research could have on nuclear fusion might be enormous. "Nuclear fusion is the primary source of energy emitted by stars such as the Sun," Sarbajit Banerjee, a professor and researcher at ETH Zürich and Texas A&M and one of the authors of the paper, told Gizmodo. The simplest method of achieving fusion on Earth, as opposed to in a star, involves deuterium and tritium isotopes. Tritium is rare and radioactive, so reactors currently "breed" it on demand to generate energy.

They breed the tritium by bombarding lithium isotopes with neutrons; a lithium-6 nucleus absorbs a neutron and splits into helium-4 and tritium. Most lithium on the planet, more than 90% of it, is lithium-7, and breeding tritium works far more efficiently with the ultra-rare lithium-6. "When 7Li, the most commonly occurring lithium isotope, is used, tritium production is much less efficient as compared to 6Li," Banerjee said. "As such, modern reactor designs are based on breeding blankets with enriched 6Li isotope that has to be specifically extracted from natural lithium."

You can concentrate lithium-6 out of naturally abundant mixtures of lithium isotopes, "enriching" it, but the traditional process is a toxic nightmare. "From 1955 to 1963, the United States produced 6Li at the Y-12 plant at Oak Ridge National Laboratory in Tennessee for thermonuclear weapons applications, taking advantage of the slight difference in solubility of 6Li and 7Li isotopes in liquid mercury," Banerjee said. "This did not go so well."

"About 330 tons of mercury were released to waterways, and the process was shut down in 1963 because of environmental concerns," he said. Mercury is a toxic nightmare substance that's difficult to clean up. Sixty years later, heavy metals from that lithium-6 extraction process are still poisoning Tennessee. Cleaning up the remnants of the environmental disaster is a major project for Oak Ridge National Lab's current residents.

During a different project, the team at Texas A&M developed a compound called zeta-V2O5 that it used to clean groundwater. As the researchers ran water through this membrane, they noticed something strange: it was really good at isolating lithium-6. The team decided to see if it could harvest lithium-6 from mixtures of lithium isotopes without mercury.

It worked.

“Our approach uses the essential working principles of lithium-ion batteries and desalination technologies,” Banerjee said. “We insert Li-ions from flowing water streams within the one-dimensional tunnels of zeta-V2O5…our selective Li sponge has a subtle but important preference for 6Li over 7Li that affords a much safer process to extract lithium from water with isotopic selectivity.”
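
To put "subtle but important preference" in perspective, isotope separations like this are typically chained into multi-stage cascades. Here's a back-of-the-envelope sketch; natural lithium is roughly 7.6% lithium-6, and the per-stage separation factor below is a made-up illustrative number, not a figure from the paper:

```python
import math

def stages_needed(x0: float, x_target: float, alpha: float) -> int:
    """Ideal-cascade stages to raise an isotope fraction from x0 to x_target,
    where alpha is the per-stage separation factor (ratio of isotope ratios)."""
    r0 = x0 / (1 - x0)              # starting 6Li:7Li ratio
    rt = x_target / (1 - x_target)  # target ratio
    return math.ceil(math.log(rt / r0) / math.log(alpha))

# ~7.6% 6Li in natural lithium; alpha = 1.05 is purely hypothetical.
print(stages_needed(0.076, 0.90, alpha=1.05))  # ≈ 97 stages to hit 90% 6Li
```

Even a small per-stage preference compounds quickly across a cascade, which is why a safe, repeatable, mercury-free stage matters so much.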

Banerjee said this could lead to a massive change in how fuel is produced for fusion generators. It also doesn't require a massive redesign of existing reactors. "Our work outlines a path to overcoming a key supply chain issue for fusion. However, to be clear we are not redesigning the actual reactors—tokamaks or stellarators—although there is tremendous recent excitement about new innovations and designs in plasma physics," he said.

A lot of people are banking on fusion being the path towards cheap and abundant energy. My entire life I’ve heard that the breakthrough that will make it real is “just around the corner.” It’s been a constant refrain that’s become a bit of a joke. Just last year the Bulletin of the Atomic Scientists asked if fusion might be “forever the energy of tomorrow.”

But Banerjee was hopeful. “Despite the incredible challenges, fusion is too big of a prize to give up on,” he said. “The transformative potential has been clear but there have been critical gaps in engineering designs, materials science for extreme environments, and understanding of the complexity of plasma processes to enumerate just a few gaps. There is an intensifying global competition and billions of dollars in private and public investments—while still not imminent, there are promising signs of realistic fusion energy in about two or three decades.”

No Headphones, No Problem: This Acoustic Trick Bends Sound Through Space to Find You

https://gizmodo.com/no-headphones-no-problem-this-acoustic-trick-bends-sound-through-space-to-find-you-2000578060

What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?

Our newly published research introduces a way to create audible enclaves – localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.

The ability to send sound that becomes audible only at a specific location could transform entertainment, communication and spatial audio experiences.

What is sound?

Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.

The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.

Sound is composed of particles moving in a continuous wave, with ridges of compression and valleys of rarefaction. (Image: Daniel A. Russell, CC BY-NC-ND)

Controlling where sound goes is difficult because of a phenomenon called diffraction – the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.
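
The scale of the problem is easy to quantify. Diffraction is governed by wavelength, and a quick calculation (using the standard ~343 m/s speed of sound in room-temperature air) shows why bass spreads everywhere while ultrasound can be formed into a beam:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

for freq_hz in (100, 1_000, 20_000, 40_000):
    wavelength_cm = SPEED_OF_SOUND / freq_hz * 100
    print(f"{freq_hz:>6} Hz -> wavelength {wavelength_cm:8.2f} cm")

# A 100 Hz tone is about 3.4 m long and bends around almost anything;
# a 40 kHz ultrasound wave is under 1 cm and can be aimed like a flashlight.
```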

Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, these technologies will still emit sound that is audible along its entire path as it travels through space.

The science of audible enclaves

We found a new way to send sound to one specific listener: through self-bending ultrasound beams and a concept called nonlinear acoustics.

Ultrasound refers to sound waves with frequencies above the human hearing range, or above 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate through many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.

In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently – becoming audible only when desired. How did we do this?

Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.

This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.

Audible enclaves are created where two self-bending ultrasound beams intersect, here shown bending around a listener's head. (Image: Jiaxin Zhong et al./PNAS, CC BY-NC-ND)

Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces – specialized materials that manipulate sound waves – we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces reshape the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.

The key phenomenon at play is what’s called difference frequency generation. When two ultrasonic beams of slightly different frequencies, such as 40 kHz and 39.5 kHz, overlap, they create a new sound wave at the difference between their frequencies – in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
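
Here's a minimal numerical sketch of that mixing. It models a generic quadratic nonlinearity (simply squaring the summed waves) rather than the full acoustics of air, but the resulting spectrum shows the same effect: two inaudible carriers producing a 500 Hz difference tone:

```python
import numpy as np

fs = 400_000                   # sample rate, well above both ultrasound carriers
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal

beam_a = np.sin(2 * np.pi * 40_000.0 * t)  # 40 kHz carrier (inaudible)
beam_b = np.sin(2 * np.pi * 39_500.0 * t)  # 39.5 kHz carrier (inaudible)

# Where the beams overlap, a quadratic nonlinearity mixes them, creating
# components at the sum and difference of the carrier frequencies.
mixed = (beam_a + beam_b) ** 2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

audible = (freqs > 20) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"Strongest audible component: {peak:.0f} Hz")  # ~500 Hz
```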

This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.

Advancing sound control

The ability to create audio enclaves has many potential applications.

Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.

In a car, passengers could listen to music without distracting the driver from hearing navigation instructions. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.

This isn't something that's going to be on the shelf in the immediate future; several challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue: converting ultrasound to audible sound requires high-intensity fields that can be energy-intensive to generate.

Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient and personalized audio experiences.

Jiaxin Zhong, Postdoctoral Researcher in Acoustics, Penn State and Yun Jing, Professor of Acoustics, Penn State. This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation
