Scientists announced on Wednesday (April 2) that they successfully fermented miso aboard the International Space Station, marking the first deliberate food fermentation in space that may open up new culinary possibilities for astronauts on long-term missions.
The traditional Japanese condiment is a fermented soybean paste made by combining cooked soybeans, salt and koji, which is a mold culture typically grown on rice or barley. The fermentation process can last anywhere from a few months to several years, producing a paste with a rich, umami flavor used in soups, sauces and various other dishes. Previous research found that astronauts tend to undereat in space despite having food tailored to their nutritional needs, possibly due to changes in the perceived flavor of the food. Indeed, astronauts themselves have reported a reduced sense of taste and smell while in space, and have said that they prefer salty, spicy and umami-rich foods.
Food fermentation could help address these challenges, and while a few fermented products, such as kimchi and wine, have been sent to the ISS, no actual fermentation process has been carried out in space until now. Joshua Evans, who leads the Sustainable Food Innovation group at the Technical University of Denmark, and his colleagues set out to determine whether fermentation was possible in space and, if so, how foods fermented in space would compare in taste to their Earth-based counterparts.
In March 2020, the team sent a small container of high-koji, low-salt "miso-to-be" to the ISS to ferment for a month before returning it to Earth.
Two other miso batches that were packed into identical plastic containers and kept frozen until the start of the experiment were fermented here on Earth to act as controls: one in Cambridge, Massachusetts in the U.S., and the other in Copenhagen, Denmark. Once the ISS miso was back on Earth, the team analyzed its microbial communities, flavor compounds and sensory properties.
"Overall, the space miso is a miso," the team wrote in their paper describing the findings.
Packaged miso pre-fermentation on the International Space Station. (Image credit: Jimmy Day)
The researchers found that the ISS miso fermented successfully, and all three samples shared broadly similar salty, umami flavor profiles. The ISS miso is therefore recognizable and safe, the team says, with a distinctive taste that could satisfy astronauts’ need for flavor while delivering high nutritional value.
The ISS miso did have a more roasted, nutty flavor than Earth miso does, the researchers noticed, likely due to the effects of microgravity and increased radiation in the low Earth orbit environment where the ISS is. Those conditions could have sped up fermentation, the study notes.
In this photo, the space miso is labeled "861." (Image credit: Maggie Coblentz)
Miso gets a close-up. (Image credit: Josh Evans)
Down the line, these findings can be harnessed to create other types of flavorful fermented foods in space.
"Our study opens up new directions to explore how life changes when it travels to new environments like space," Evans said in a statement. "It could invite new forms of culinary expression, expanding and diversifying culinary and cultural representation in space exploration as the field grows."
A paper about this space miso research was published on Wednesday (April 2) in the journal iScience.
Sweetened vanilla, calming lavender, or fragrant jasmine and lotus may fill your home with enticing aromas. But new research shows that the supposed stress-reducing and mood-enhancing effects of scented products may come with unwanted indoor pollution.
“While these products are widely used to create a cozy atmosphere, their emissions can impact indoor air quality, especially in spaces with limited ventilation,” says Nusrat Jung, a civil engineer at Purdue University.
Jung became interested in the quality of our indoor atmosphere after walking through grocery store aisles that had scented candles, wax melts, and other fragrance-releasing items.
“These products are marketed as safe and clean, but we wanted to investigate what else they might be releasing into the air besides pleasant scents,” she says.
Scented Wax Melts and Pollution
In research published recently in Environmental Science & Technology Letters, Jung and her colleagues examined the effects of scented wax melts that are often advertised as pollution-free. They used a laboratory recreation of a typical home at Purdue filled with sensors that could monitor the kinds of chemicals inside.
Scented products are known to release volatile organic compounds and terpenes, the chemicals responsible for everything from the aroma of essential oils to the skunk-like smell of marijuana. Previous research by Jung and her colleagues revealed that flame-free wax melts release more terpenes than candles with flames.
Once released, terpenes react with ozone in the air and form nanoparticles.
“These particles, despite being formed in a non-combustion process, reached levels that pose potential respiratory risks, challenging the perception of scented wax melts as a benign household product,” Jung says.
While the team’s most recent study looked at scented wax melts, previous work from Jung examined the impact that other fragrant products have on indoor air quality.
In an earlier study, her team found that chemicals from hair products like sprays lingered indoors for some time, especially after exposure to heat from devices like curling irons or straighteners.
In fact, Jung’s work shows that scented products in general are significant contributors to indoor pollution. In one study, her team found that scented products can generate more inhalable nanoparticles than gas stoves or diesel engines.
The problem may not be limited to homes. Scented products like the air fresheners often used in cars release many of the same volatile organic compounds to mask lingering odors, all within a much smaller space than your average home. But Jung hasn’t specifically studied these potential impacts, and said further research would be needed to get a clearer idea of any problems they might be causing.
Health Problems from Scented Products
It isn’t entirely clear what types of health problems these chemicals can cause, but they may pose issues for our respiratory systems, some of them long-term.
“Some [volatile organic compounds] are classified as hazardous air pollutants, while airborne nanoparticles have been linked to lung inflammation, cardiovascular effects, and other adverse health outcomes,” Jung says.
She noted, though, that actual exposure to these potentially harmful chemicals might vary based on a number of factors.
Other recent research has found that homes vary greatly in their levels of indoor air pollution. Ventilation, occupancy patterns, and household location can all affect how polluted a home is. As a result, the authors of that paper say monitoring indoor air pollution in your home is becoming increasingly important.
“With more time spent working from home, understanding the factors that affect air quality within households is increasingly important,” said Owain Rose, a coauthor of the paper, in a press release.
Jung recommends keeping exhaust fans running, such as those above stoves or in bathrooms, whenever these products are in use. But the surest option would be to avoid such hair care products, scented candles, and wax melts altogether.
If you try out Intel’s AI Playground, which incorporates everything from AI art to an LLM chatbot to even text-to-video in a single app, you might think: Wow! OK! An all-in-one local AI app that does everything is worth trying out! And it is… except that it’s made for just a small slice of Intel’s own products.
Quite simply, no single AI app has emerged as the “Amazon” of AI, doing everything you’d want in a single service or site. You can use a tool like Adobe Photoshop or Firefly to perform sophisticated image generations and editing, but chatting is out. ChatGPT or Google Gemini can converse with you, even generating images, but to a limited extent.
Most of these services require you to hopscotch back and forth between sites, however, and can cost money for a subscription. Intel’s AI Playground merges all of these inside a single, well-organized app that runs locally (and entirely privately) on your PC and it’s all for free.
Should I let you in on the catch? I suppose I have to. AI Playground is a showcase for Intel’s Core Ultra processors, including its CPUs and GPUs: the Core Ultra 100 (Meteor Lake) and Core Ultra 200V (Lunar Lake) chips, specifically. But it could be so, so much better if everyone could use it.
Mark Hachman / Foundry
Yes, I realize that some users are quite suspicious of AI. (There are even AI-generated news stories!) Others, however, have found that certain tasks in their daily life such as business email can be handed off to ChatGPT. AI is a tool, even if it can be used in ways we disagree with.
What’s in AI Playground?
AI Playground has three main areas, all designated by tabs on the top of the screen:
Create: An AI image generator, which operates in either a default text-to-image mode, or in a “workflow” mode that uses a more sophisticated back end for higher-quality images
Enhance: Here, you can edit your images, either upscaling them or altering them through generative AI
Answer: A conventional AI chatbot, either as a standalone or with the ability to upload your own text documents
Each of those sections is what you might call self-sufficient, usable by itself. But in the upper right-hand corner is a settings or “gear” icon, which contains a terrific number of additional options, which are absolutely worth examining.
How to set up and install AI Playground
AI Playground’s strength is in its thoughtfulness, ease of use, and simplicity. If you’ve ever used a local AI application, you know that it can be rough. Some functions are content with just a command-line interface, which may require you to have a working knowledge of Python or GitHub. AI Playground was designed around the premise that it will take care of everything with just a single click. Documentation and explanations might be a little lacking in places, but AI Playground’s ease of use is unparalleled.
AI Playground can be downloaded from Intel’s AI Playground page. At press time, AI Playground was on version 2.2.1 beta.
AI Playground’s setup is pretty easy. Just download what you want. If you choose not to, and need access later, the app will simply prompt you to download it at a future time.
Mark Hachman / Foundry
Note that the app and its back-end code require either a Core Ultra 100H (“Meteor Lake”) chip, a Core Ultra 200V (“Lunar Lake”) chip, or one of the Intel Arc discrete GPUs, including the Alchemist and Battlemage parts. If you own a massive gaming laptop with a 14th-gen Intel Core chip or an Nvidia RTX 5090 GPU, you’re out of luck. Same goes for the Core Ultra 200H, or “Arrow Lake.”
Since this is an “AI Playground,” you might think that the chip’s NPU would be used. Nope. All of these applications tap just the chip’s integrated GPU; I didn’t see the NPU being accessed once via Windows Task Manager.
Also, keep in mind that these AI models depend on the GPU’s UMA frame buffer, the memory pool that’s shared between system memory and the integrated GPU. Under this unified memory architecture (UMA), Intel’s integrated graphics can use up to half of the available system memory, while discrete GPUs have their own dedicated VRAM to pull from. The bottom line? You may not have enough video memory available to run every model.
The initial AI Playground download was about 680 megabytes on my machine. But that’s only the shell application. The models require an additional download, which is either handled automatically by the installer or requires you to click the “download” button yourself.
The nice thing is that you don’t have to manage any of this. If AI Playground needs a model, it will tell you which one it needs and how much space it will take on your hard drive. None of the models I saw used more than 12GB of storage space, and many used far less. But if you want to try out a number of models, be prepared to download a couple dozen gigabytes or more.
Playing with AI Playground
I’ve called Fooocus the easiest way to generate AI art on your PC. For its time, it was! And it works with just about any GPU, too. But AI Playground may be even easier. The Create tab opens with just a space for a prompt and nothing else.
Like most AI art, the prompt defines the image and you can get really detailed. Here’s an example: “Award winning photo of a high speed purple sports car, hyper-realism, racing fast over wet track at night. The license plate number is ‘B580’, motion blur, expansive glowing cityscape, neon lights…”
The Settings gear in the upper right-hand corner opens up this options menu, with numerous tweaks. My advice is to experiment.
Mark Hachman / Foundry
Enter a prompt and AI Playground will draw four small images, which appear in a vertical column to the left. Each image progresses in a series of steps with 20 as the default. After the image is completed, some small icons will appear next to it with additional options, including importing it into the “Enhance” tab.
The Settings gear is where you can begin tweaking your output. You can select either “Standard” or “HD” resolution, which adjusts the “Image Size” field, and you can further adjust the image size, resolution, and format. The “HD” option requires you to download a different model, as does the “Workflow” option to the upper right, which adds workflows based on ComfyUI. Essentially, those produce better-looking images, with the option to guide the output with a reference image or other workflow.
Some of the models are trained on public figures and celebrities. But the quality falls to the level of “AI slop” in places.
Mark Hachman / Foundry
For now, the default model can be adjusted via the “Manual” tab, which opens up two additional options. You’ll see a “negative prompt,” which excludes anything you list from the generated image, and a “Safe Check,” which filters out gore and other disturbing images. By default, “NSFW” (Not Safe for Work) is added to the negative prompt.
Both the Safe Check and NSFW negative prompt only appear as options in the Default image generator and seem to be on by default elsewhere. It’s up to you whether or not to remove them. The Default model (Lykon/dreamshaper-8) has apparently been trained on nudity and celebrities, though I stuck to public figures for testing purposes.
Note that all of your AI-generated art stays local to your PC, though Intel (obviously) warns you not to use a person’s likeness without their permission.
There’s also a jaw-droppingly obvious bug that I can’t believe Intel didn’t catch. HD image generation often starts with the word “UPLOAD” projected over the image, and sometimes the final render keeps it, too. Why? Because there’s a field for adding a reference image, and the word UPLOAD sits right in the middle of it. Somehow, AI Playground works that UPLOAD text into the image itself.
Mark Hachman / Foundry
My test machine was a Core Ultra 258V (Lunar Lake) with 32GB of RAM; an 896×576 image took 29 seconds to generate with 25 rendering steps in the default mode. Using the Workflow (Line2-Image-HD-Quality) model at 1280×832 resolution and 20 steps, one image took 2 minutes and 12 seconds to render. There’s also a Fast mode that should lower the rendering time, though I didn’t really like the output quality.
If you find an image you like, you can use the Enhance tab to upscale it. (Upscaling is being added to the Windows Photos app, which will eventually be made available to Copilot+ PCs using Intel Core Ultra 200 chips, too.) You can also use “inpainting,” which allows you to re-generate a portion of the screen, and “outpainting,” the technique which was used to “expand” the boundaries of the Mona Lisa painting, for example. You can also ask AI to tweak the image itself, though I had problems trying to generate a satisfactory result.
The Enhance tab of Intel’s AI Playground, where you can upscale images and make adjustments. I’ve had more luck with inpainting and outpainting than with tweaking the entire image with an image prompt.
Mark Hachman / Foundry
The “Workflow” tab also hides some interesting utilities such as a “face swap” app and a way to “colorize” black-and-white photos. I was disappointed to see that a “text to video” model didn’t work, presumably because my PC was running on integrated graphics.
The “Answer” or chatbot portion of AI Playground seems to be the weakest option. The default model, Microsoft’s Phi-3-mini-4K-instruct, refused to answer the dumb comic-book-nerd question: “Who would win in a fight, Wonder Woman or Iron Man?”
It’s not shown here, but you can turn on performance metrics to track how many tokens per second the model runs. There’s also a RAG option that can be used to upload documents, but it doesn’t work on the current release.
Mark Hachman / Foundry
It continued.
“What is the best car for an old man? Sorry, I can’t help with that.”
“What’s better, celery or potatoes? I’m sorry, I can’t assist with that. As an AI, I don’t have personal preferences.”
And so on. Switching to a different model that uses Intel’s OpenVINO runtime, though, helped. There, the OpenVINO/Phi-3.5-mini-instruct-int4 model took 1.21 seconds to produce its first response token, then generated about 20 tokens per second. (A token isn’t quite the same as a word, but it’s a good rule of thumb.) I was also able to do some “vibe coding,” generating code via AI without the faintest clue what you’re doing. By default, the output is capped at a few hundred tokens, but that can be adjusted via a slider.
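For a rough sense of what those numbers mean in practice, here’s a back-of-the-envelope sketch; the 300-token reply length is my own assumed figure for illustration, not something I measured:

```python
# Back-of-the-envelope reply time: time to first token plus tokens divided by
# throughput, using the OpenVINO/Phi-3.5-mini numbers quoted above.
def reply_time_s(first_token_s: float, tokens_per_s: float, reply_tokens: int) -> float:
    return first_token_s + reply_tokens / tokens_per_s

# 1.21 s to first token, ~20 tokens/s, assumed ~300-token answer
print(f"{reply_time_s(1.21, 20.0, 300):.0f} s for a ~300-token answer")  # ~16 s
```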
You can also import your own model by dropping a GGUF file (the model file format used by llama.cpp and similar inference engines) into the appropriate folder.
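If you’re curious what such a file is for outside of any GUI, here’s a minimal sketch of loading one with the llama-cpp-python bindings; the model path is a placeholder for whatever GGUF file you’ve downloaded, and this is not part of AI Playground itself:

```python
# Minimal sketch: loading a GGUF model with the llama-cpp-python bindings.
# Install with `pip install llama-cpp-python`; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/my-7b-model.gguf",  # any GGUF file you've downloaded
    n_ctx=4096,                            # context window size
)

result = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```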
Adapt AI Playground to AMD and Nvidia, please!
For all that, I really like AI Playground. Some people are notably (justifiably?) skeptical of AI, especially how AI can make mistakes and replace the authentic output of human artists. I’m not here to argue either side.
What Intel has done, however, is create a surprisingly good general-purpose, enthusiast-friendly application for exploring AI, one that receives frequent updates and seems to be consistently improving.
The best thing about AI Playground? It’s open source, meaning that someone could probably come up with a fork that adds support for more GPUs and CPUs. From what I can see, that just hasn’t happened yet. If it did, it could be the single unified local AI app I’ve been waiting for.
The promise of fusion energy is cheap and abundant power for the entire planet. Scientists have made startling advances towards achieving it at scale, but there are still many problems holding it back. One of them is the production of fuel, which requires vast amounts of enriched lithium. Enriching lithium has historically been an environmental catastrophe, but researchers in Texas believe they’ve found a way to do it cheaply and at scale without poisoning the world.
A team of researchers at Texas A&M University discovered the new process by accident while working on a method for cleaning groundwater contaminated during oil and gas extraction. The research has just been published in the scientific journal Chem under the title “Electrochemical 6-Lithium Isotope Enrichment Based on Selective Insertion in 1D Tunnel-Structured V2O5.”
The effect this research could have on nuclear fusion might be enormous. “Nuclear fusion is the primary source of energy emitted by stars such as the Sun,” Sarbajit Banerjee, a professor and researcher at ETH Zürich and Texas A&M and one of the authors of the paper, told Gizmodo. The simplest way to achieve fusion on Earth, rather than in a star, involves the hydrogen isotopes deuterium and tritium. Tritium is rare and radioactive, so reactors currently “breed” it on demand to generate energy.
They breed the tritium by bombarding lithium isotopes with neutrons. Most lithium on the planet, more than 90% of it, is lithium-7. Breeding tritium works way more efficiently with the ultra-rare lithium-6. “When 7Li, the most commonly occurring lithium isotope, is used, tritium production is much less efficient as compared to 6Li,” Banerjee said. “As such, modern reactor designs are based on breeding blankets with enriched 6Li isotope that has to be specifically extracted from natural lithium.”
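The breeding reaction itself is simple to write down: a lithium-6 nucleus absorbs a neutron and splits into helium and tritium, releasing energy along the way. (The analogous reaction with lithium-7 actually consumes energy, which is part of why it is so much less efficient.)

```latex
{}^{6}\mathrm{Li} + n \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}
```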
Naturally occurring lithium can be “enriched” by concentrating its lithium-6 content, but the traditional process is a toxic nightmare. “From 1955 to 1963, the United States produced 6Li at the Y12 plant at Oak Ridge National Laboratory in Tennessee for thermonuclear weapons applications, taking advantage of the slight difference in solubility of 6Li and 7Li isotopes in liquid mercury,” Banerjee said. “This did not go so well.”
“About 330 tons of mercury were released to waterways and the process was shut down in 1963 because of environmental concerns,” he said. Mercury is a toxic substance that’s notoriously difficult to clean up. More than 60 years later, heavy metals from that lithium-6 extraction process are still contaminating parts of Tennessee. Cleaning up the remnants of the environmental disaster remains a major project for Oak Ridge National Lab’s current residents.
During a different project, the team at Texas A&M developed a compound called zeta-V2O5 that it used to clean groundwater. As the researchers ran water through this membrane, they noticed something strange: it was really good at isolating lithium-6. The team decided to see if it could harvest lithium-6 from mixtures of lithium isotopes without mercury.
It worked.
“Our approach uses the essential working principles of lithium-ion batteries and desalination technologies,” Banerjee said. “We insert Li-ions from flowing water streams within the one-dimensional tunnels of zeta-V2O5…our selective Li sponge has a subtle but important preference for 6Li over 7Li that affords a much safer process to extract lithium from water with isotopic selectivity.”
Banerjee said this could lead to a massive change in how fuel is produced for fusion generators. It also doesn’t require a massive redesign of existing reactors. “Our work outlines a path to overcoming a key supply chain issue for fusion. However, to be clear we are not redesigning the actual reactors—tokamaks or stellarators—although there is tremendous recent excitement about new innovations and designs in plasma physics,” he said.
A lot of people are banking on fusion being the path towards cheap and abundant energy. My entire life I’ve heard that the breakthrough that will make it real is “just around the corner.” It’s been a constant refrain that’s become a bit of a joke. Just last year the Bulletin of the Atomic Scientists asked if fusion might be “forever the energy of tomorrow.”
But Banerjee was hopeful. “Despite the incredible challenges, fusion is too big of a prize to give up on,” he said. “The transformative potential has been clear but there have been critical gaps in engineering designs, materials science for extreme environments, and understanding of the complexity of plasma processes to enumerate just a few gaps. There is an intensifying global competition and billions of dollars in private and public investments—while still not imminent, there are promising signs of realistic fusion energy in about two or three decades.”
What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?
Our newly published research introduces a way to create audible enclaves – localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.
The ability to send sound that becomes audible only at a specific location could transform entertainment, communication and spatial audio experiences.
What is sound?
Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.
The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.
Controlling where sound goes is difficult because of a phenomenon called diffraction – the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.
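To put numbers on that, wavelength is just the speed of sound divided by frequency, so low notes are physically enormous compared with the ultrasound carriers used later in this work. A quick sketch:

```python
# Wavelength = speed of sound / frequency. Longer wavelengths diffract (spread
# out) more, which is why bass is so hard to confine to one spot.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

for freq_hz in (50, 1_000, 40_000):  # a bass note, a mid tone, an ultrasound carrier
    wavelength_m = SPEED_OF_SOUND / freq_hz
    print(f"{freq_hz:>6} Hz -> wavelength {wavelength_m:.4f} m")
# 50 Hz is roughly 6.9 m long; a 40 kHz ultrasound wave is under a centimeter.
```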
Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, the sound they emit is still audible along the beam’s entire path as it travels through space.
The science of audible enclaves
We found a new way to send sound to one specific listener: through self-bending ultrasound beams and a concept called nonlinear acoustics.
Ultrasound refers to sound waves with frequencies above the human hearing range, or above 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate through many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.
In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently – becoming audible only when desired. How did we do this?
Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.
This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.
Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces – specialized materials that manipulate sound waves – we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces change the shape of the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.
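As a rough illustration of what “controlling the phase” means, here is a minimal sketch of a conventional phased array steering a 40 kHz beam toward a chosen angle. The element spacing and steering angle are assumed values, and this only shows the basic principle; the metasurfaces in this work impose far more elaborate phase profiles to produce curved, self-bending beams.

```python
# Minimal phase-control sketch: an 8-element phased array steering a 40 kHz
# beam by delaying each emitter so the wavefronts add up along one direction.
import math

SPEED_OF_SOUND = 343.0          # m/s in air
FREQ = 40_000.0                 # Hz, ultrasound carrier
ELEMENT_PITCH = 0.004           # m between emitters (assumed)
STEER_ANGLE = math.radians(20)  # desired beam direction (assumed)

for n in range(8):
    # Classic phased-array delay: n * d * sin(theta) / c per element.
    delay_s = n * ELEMENT_PITCH * math.sin(STEER_ANGLE) / SPEED_OF_SOUND
    phase_deg = (360.0 * FREQ * delay_s) % 360.0
    print(f"element {n}: delay {delay_s * 1e6:6.2f} us, phase {phase_deg:6.1f} deg")
```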
The key phenomenon at play is what’s called difference frequency generation. When two ultrasonic beams of slightly different frequencies, such as 40 kHz and 39.5 kHz, overlap, they create a new sound wave at the difference between their frequencies – in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
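A toy numerical sketch makes the effect easy to see. Here, squaring the summed signal stands in for the medium’s quadratic nonlinearity (real air is only weakly nonlinear, which is why intense beams are needed), and the spectrum of the result contains a component at exactly the 500 Hz difference frequency:

```python
# Toy demonstration of difference frequency generation.
import numpy as np

fs = 200_000                    # sample rate, Hz (well above both carriers)
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal
f1, f2 = 40_000, 39_500         # the two ultrasound carriers from the text

linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = linear ** 2         # quadratic mixing creates sum and difference tones

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), 1 / fs)

# Ignore DC and everything above human hearing; the strongest remaining
# component sits at |f1 - f2| = 500 Hz.
audible = (freqs > 20) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"Strongest audible component: {peak:.0f} Hz")
```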
This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.
Advancing sound control
The ability to create audio enclaves has many potential applications.
Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.
In a car, passengers could listen to music without distracting the driver from hearing navigation instructions. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.
This isn’t something that’s going to be on the shelf in the immediate future. For instance, challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue – converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.
Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient and personalized audio experiences.
It turns out Tesla’s camera-vision-only approach to self-driving is no match for a Wile E. Coyote-style fake wall. Earlier this week, former NASA engineer and YouTuber Mark Rober posted a video where he tried to see if he could trick a Tesla Model Y using its Autopilot driver-assist function into driving through a Styrofoam wall disguised to look like part of the road in front of it. The Tesla hurtles towards the wall at 40 mph and, rather than stopping, plows straight through it, leaving a giant hole.
“It turns out my Tesla is less Road Runner, more Wile E. Coyote,” Rober says as he inspects the damage on the front hood. The video, posted only a couple days ago, had racked up over 20 million views by Wednesday morning.
Could Lidar have detected the ‘wall’?
The stunt draws inspiration from an iconic Looney Tunes gag from The Road Runner Show. In the cartoon, Wile E. Coyote tries to trap the Road Runner by painting what looks like a tunnel entrance onto the side of a boulder, hoping it will stop him dead in his tracks. The Road Runner zooms around the corner and passes right through. When Wile E. follows in hot pursuit, he smacks face-first into the fake tunnel opening. Alas, another victory for “Beep Beep.”
Rober is convinced the culprit for the crash in his case resides in Autopilot’s lack of Lidar sensors. Lidar, which stands for Light Detection and Ranging, works by sending out millions of laser pulses in all directions around a vehicle and measuring how quickly they bounce back. That information is used to rapidly create a 3D map of the vehicle’s surroundings and help it avoid obstacles like pedestrians, animals, or—in this case—a camouflaged wall. Most people will recognize Lidar as the spinning tops fastened on the roof of driverless vehicles.
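The ranging itself boils down to time-of-flight arithmetic: a pulse travels out to an object and back, so the distance is half the round-trip time multiplied by the speed of light. The 200-nanosecond echo below is just an illustrative number, not a figure from Rober’s test:

```python
# Lidar time-of-flight: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def lidar_distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

print(f"{lidar_distance_m(200e-9):.1f} m")  # an echo after 200 ns -> ~30 m away
```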
Though most high-level autonomous vehicle systems on the road today like Waymo use Lidar prominently, Tesla has long bucked that trend in an effort to one day create “full autonomy” using only camera vision. Elon Musk, the company’s CEO, has been outspoken about this approach, repeatedly criticizing Lidar as a “crutch” and a “fool’s errand.” In the video, Rober explains why he believes that so-called “crutch” could have prevented his Tesla from crashing through the wall.
“While that [the fake wall] sort of looks convincing, the image processing in our brains is advanced enough that we pick up on the minor visual inconsistencies and we wouldn’t hit it,” Rober said. “A car with Lidar would stop because it is using a point cloud that detects a wall without seeing the image at all.”
To buttress that point, Rober repeated the same test using a Lexus RX-based prototype equipped with Lidar. In that case, the Lexus detected the wall and slowed to a stop before making contact. Rober ran several additional tests, including seeing whether the vehicle would stop for a mannequin standing in the road under both clear conditions and in rain and fog. The Lexus stopped in both scenarios, but the Tesla on Autopilot struggled to detect the mannequin in adverse weather conditions.
Tesla’s lack of Lidar has drawn regulatory scrutiny
Though Rober’s results are pretty funny, they point to a real debate raging among autonomous vehicle developers, often with serious consequences. Last year, the National Highway Traffic Safety Administration opened a new federal investigation into Tesla’s Full Self-Driving (FSD) feature, a supposedly more advanced version of Autopilot, following numerous reports of crashes in poor visibility settings. One of those crash reports resulted in the death of a pedestrian. Tesla did not respond to our request for comment.
Multiple autonomous driving experts previously speaking with Popular Science did not completely rule out the possibility of autonomous systems driven primarily by camera vision. Still, they pointed to several real-world examples—including one where a Tesla on Autopilot plowed through a deer without stopping—as potentially tied to the lack of Lidar.
“[LiDAR is] going to tell you how quickly that object is moving from space,” University of San Francisco Professor and autonomous vehicle expert William Riggs previously told Popular Science. “And it’s not going to estimate it like a camera would do when a Tesla is using FSD.”
When DeepSeek-R1 was released back in January, it was incredibly hyped up. The reasoning model could be distilled down to work with smaller large language models (LLMs) on consumer-grade laptops. If you believed the headlines, you’d think it’s now possible to run AI models that are competitive with ChatGPT right on your toaster.
That just isn’t true, though. I tried running LLMs locally on a typical Windows laptop and the whole experience still kinda sucks. There are still a handful of problems that keep rearing their heads.
Problem #1: Small LLMs are stupid
Newer open LLMs often brag about big benchmark improvements, and that was certainly the case with DeepSeek-R1, which came close to OpenAI’s o1 in some benchmarks.
But the model you run on your Windows laptop isn’t the same one that’s scoring high marks. It’s a much smaller, more condensed model—and smaller versions of large language models aren’t very smart.
Just look at what happened when I asked DeepSeek-R1-Llama-8B how the chicken crossed the road:
Matt Smith / Foundry
This simple question—and the LLM’s rambling answer—shows how smaller models can easily go off the rails. They frequently fail to notice context or pick up on nuances that should seem obvious.
In fact, recent research suggests that less capable large language models with reasoning abilities are prone to such faults. I recently wrote about the issue of overthinking in AI reasoning models and how it leads to increased computational costs.
I’ll admit that the chicken example is a silly one. How about we try a more practical task? Like coding a simple website in HTML. I created a fictional resume using Anthropic’s Claude 3.7 Sonnet, then asked Qwen2.5-7B-Instruct to create an HTML website based on the resume.
The results were far from great:
Matt Smith / Foundry
To be fair, it’s better than what I could create if you sat me down at a computer without an internet connection and asked me to code a similar website. Still, I don’t think most people would want to use this resume to represent themselves online.
A larger and smarter model, like Anthropic’s Claude 3.7 Sonnet, can generate a higher quality website. I could still criticize it, but my issues would be more nuanced and less to do with glaring flaws. Unlike Qwen’s output, I expect a lot of people would be happy using the website Claude created to represent themselves online.
And, for me, that’s not speculation. That’s actually what happened. Several months ago, I ditched WordPress and switched to a simple HTML website that was coded by Claude 3.5 Sonnet.
Problem #2: Local LLMs need lots of RAM
OpenAI’s CEO Sam Altman is constantly chin-wagging about the massive data center and infrastructure investments required to keep AI moving forward. He’s biased, of course, but he’s right about one thing: the largest and smartest large language models, like GPT-4, do require data center hardware with compute and memory far beyond that of even the most extravagant consumer PCs.
And it isn’t just limited to the best large language models. Even smaller and dumber models can still push a modern Windows laptop to its limits, with RAM often being the greatest limiter of performance.
Matt Smith / Foundry
The “size” of a large language model is measured by its parameters, where each parameter is a distinct variable used by the model to generate output. In general, more parameters mean smarter output—but those parameters need to be stored somewhere, so adding parameters to a model increases its storage and memory requirements.
Smaller LLMs with 7 or 8 billion parameters tend to weigh in at 4.5 to 5 GB. That’s not huge, but the entire model must be loaded into memory (i.e., RAM) and sit there for as long as the model is in use. That’s a big chunk of RAM to reserve for a single piece of software.
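The arithmetic behind those file sizes is straightforward. Here’s a quick sketch; the bit widths are typical quantization levels, not figures from any specific download, and a loaded model still needs extra working memory for context on top of this:

```python
# Rough footprint of a quantized model: parameters x bits per weight / 8 bytes.
def model_size_gb(params_billions: float, bits_per_param: float) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(f"{model_size_gb(8, 4.5):.1f} GB")   # an 8B model at ~4.5-bit quantization -> ~4.5 GB
print(f"{model_size_gb(70, 4.5):.1f} GB")  # a 70B model quantized the same way -> ~39 GB
```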
While it’s technically possible to run an AI model with 7 billion parameters on a laptop with 16GB of RAM, you’ll more realistically need 32GB (unless the LLM is the only piece of software you have open). Even the Surface Laptop 7 that I use to test local LLMs, which has 32GB of RAM, can run out of available memory if I have a video editing app or several dozen browser tabs open while the AI model is active.
Problem #3: Local LLMs are awfully slow
Configuring a Windows laptop with more RAM might seem like an easy (though expensive) solution to Problem #2. If you do that, however, you’ll run straight into another issue: modern Windows laptops lack the compute performance required by LLMs.
I experienced this problem with the HP Elitebook X G1a, a speedy laptop with an AMD Ryzen AI processor that includes capable integrated graphics and an integrated neural processing unit. It also has 64GB of RAM, so I was able to load Llama 3.3 with 70 billion parameters (which eats up about 40GB of memory).
The fictional resume HTML generation took 66.61 seconds to first token and an additional 196.7 seconds for the rest. That’s significantly slower than, say, ChatGPT.
Matt Smith / Foundry
Yet even with that much memory, Llama 3.3-70B still wasn’t usable. Sure, I could technically load it, but it could only output 1.68 tokens per second. (It takes about 1 to 3 tokens per word in a text reply, so even a short reply can take a minute or more to generate.)
More powerful hardware could certainly help, but it’s not a simple solution. There’s currently no universal API that can run all LLMs on all hardware, so it’s often not possible to properly tap into all the compute resources available on a laptop.
Problem #4: LM Studio, Ollama, GPT4All are no match for ChatGPT
Everything I’ve complained about up to this point could theoretically be improved with hardware and APIs that make it easier for LLMs to utilize a laptop’s compute resources. But even if all that were to fall into place, you’d still have to wrestle with the unintuitive software.
By software, I mean the interface used to communicate with these LLMs. Many options exist, including LM Studio, Ollama, and GPT4All. They’re free and impressive (GPT4All is surprisingly easy), but they just aren’t as capable or as easy to use as ChatGPT, Claude, and the other leading services.
Managing and selecting local LLMs using LM Studio is far less intuitive than loading up a mainstream AI chatbot like ChatGPT, Copilot, or Claude.
Matt Smith / Foundry
Plus, local LLMs are less likely to be multimodal, meaning most of them can’t work with images or audio. Most LLM interfaces support some form of RAG to let you “talk” with documents, but context windows tend to be small and document support is often limited. Local LLMs also lack the cutting-edge features of larger online-only LLMs, like OpenAI’s Advanced Voice Mode and Claude’s Artifacts.
I’m not trying to throw shade at local LLM software. The leading options are rather good, plus they’re free. But the honest truth is that it’s hard for free software to keep up with rich tech giants—and it shows.
Solutions are coming, but it’ll be a long time before they get here
The biggest problem of all is that there’s currently no way to solve any of the above problems.
RAM is going to be an issue for a while. As of this writing, the most powerful Windows laptops top out at 128GB of RAM. Meanwhile, Apple just released the M3 Ultra, which can support up to 512GB of unified memory (but you’ll pay at least $9,499 to snag it).
Compute performance faces bottlenecks, too. A laptop with an RTX 4090 (soon to be superseded by the RTX 5090) might look like the best option for running an LLM—and maybe it is—but you still have to load the LLM into the GPU’s memory. An RTX 5090 will offer 24GB of GDDR7 memory, which is a relatively large amount but still limited, supporting AI models only up to around 32 billion parameters (like QwQ 32B).
Even if you ignore the hardware limitations, it’s unclear if software for running locally hosted LLMs will keep up with cloud-based subscription services. (Paid software for running local LLMs is a thing but, as far as I’m aware, only in the enterprise market.) For local LLMs to catch up with their cloud siblings, we’ll need software that’s easy to use and frequently updated with features close to what cloud services provide.
These problems will probably be fixed with time. But if you’re thinking about trying a local LLM on your laptop right now, don’t bother. It’s fun and novel but far from productive. I still recommend sticking with online-only models like GPT-4.5 and Claude 3.7 Sonnet for now.