No Headphones, No Problem: This Acoustic Trick Bends Sound Through Space to Find You

https://gizmodo.com/no-headphones-no-problem-this-acoustic-trick-bends-sound-through-space-to-find-you-2000578060

What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?

Our newly published research introduces a way to create audible enclaves – localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.

The ability to send sound that becomes audible only at a specific location could transform entertainment, communication and spatial audio experiences.

What is sound?

Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.

The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.

Image: Sound travels as a continuous wave of compressions and rarefactions in the air. (Daniel A. Russell, CC BY-NC-ND)

Controlling where sound goes is difficult because of a phenomenon called diffraction – the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.

Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, these beams still produce sound that is audible along their entire path as they travel through space.

The science of audible enclaves

We found a new way to send sound to one specific listener: through self-bending ultrasound beams and a concept called nonlinear acoustics.

Ultrasound refers to sound waves with frequencies above the human hearing range, or above 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate through many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.

In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently – becoming audible only when desired. How did we do this?

Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.

This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.

Image: Audible enclaves are created where two self-bending ultrasound beams intersect. (Jiaxin Zhong et al./PNAS, CC BY-NC-ND)

Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces – specialized materials that manipulate sound waves – we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces reshape the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.
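To make the phase-control idea concrete, here is a minimal sketch of its simpler phased-array cousin: give each source a phase that cancels its travel delay to a target point, so the ultrasound adds up constructively only there. The 16-element geometry and numbers are illustrative assumptions, not the metasurface design itself, which applies the same principle across a surface to produce curved, self-bending paths.

```python
# Minimal phase-control sketch (illustrative numbers, not the metasurface design):
# set each source's phase so its wave arrives at the target in phase with the others,
# so the ultrasound adds up constructively only around that point.
import numpy as np

SPEED_OF_SOUND = 343.0                        # m/s in air
f_carrier = 40_000.0                          # 40 kHz ultrasonic carrier
k = 2 * np.pi * f_carrier / SPEED_OF_SOUND    # wavenumber

# 16 sources spaced along a 16 cm aperture, aimed at a point 60 cm away.
sources = np.column_stack([np.linspace(-0.08, 0.08, 16), np.zeros(16)])
target = np.array([0.05, 0.60])

distances = np.linalg.norm(sources - target, axis=1)
phases = (-k * distances) % (2 * np.pi)       # phase offset applied to each source
print(np.round(np.degrees(phases), 1))
```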

The key phenomenon at play is what’s called difference frequency generation. When two ultrasonic beams of slightly different frequencies, such as 40 kHz and 39.5 kHz, overlap, they create a new sound wave at the difference between their frequencies – in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
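The arithmetic behind difference frequency generation is easy to check numerically. In the illustrative Python snippet below, a simple squaring step stands in for the real nonlinearity of air at high intensity; mixing a 40 kHz and a 39.5 kHz tone this way leaves 500 Hz as the strongest component in the audible band.

```python
# Toy demonstration of difference frequency generation: squaring the sum of two
# ultrasonic tones (a stand-in for air's nonlinearity at high intensity) yields
# a component at f1 - f2 = 500 Hz, inside the audible range.
import numpy as np

fs = 400_000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of signal
f1, f2 = 40_000.0, 39_500.0                   # the two ultrasonic beam frequencies

linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = linear ** 2                       # quadratic mixing creates new frequencies

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), 1 / fs)
audible = (freqs > 20) & (freqs < 20_000)     # ignore DC and ultrasonic components
print(freqs[audible][np.argmax(spectrum[audible])])   # -> 500.0
```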

This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.

Advancing sound control

The ability to create audio enclaves has many potential applications.

Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.

In a car, passengers could listen to music without distracting the driver from hearing navigation instructions. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.

This isn’t something that’s going to be on the shelf in the immediate future. Challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue – converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.

Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient and personalized audio experiences.

Jiaxin Zhong, Postdoctoral Researcher in Acoustics, Penn State and Yun Jing, Professor of Acoustics, Penn State. This article is republished from The Conversation under a Creative Commons license. Read the original article.


via Gizmodo https://gizmodo.com/

March 20, 2025 at 09:05AM

Man tests if Tesla on Autopilot will slam through foam wall (spoiler: it did)

https://www.popsci.com/technology/mark-rober-tesla-foam-wall-video/

It turns out Tesla’s camera-vision-only approach to self-driving is no match for a Wile E. Coyote-style fake wall. Earlier this week, former NASA engineer and YouTuber Mark Rober posted a video in which he tried to see whether he could trick a Tesla Model Y, using its Autopilot driver-assist function, into driving through a Styrofoam wall disguised to look like part of the road in front of it. The Tesla hurtles toward the wall at 40 mph and, rather than stopping, plows straight through it, leaving a giant hole.

“It turns out my Tesla is less Road Runner, more Wile E. Coyote,” Rober says as he inspects the damage on the front hood. The video, posted only a couple days ago, had racked up over 20 million views by Wednesday morning. 

Could Lidar have detected the ‘wall’?

The stunt draws inspiration from an iconic Looney Tunes gag from The Road Runner Show. In the cartoon, Wile E. Coyote sets a trap for the Road Runner by painting what looks like a tunnel entrance onto the side of a boulder, hoping it will stop him dead in his tracks. The Road Runner zooms around the corner and passes right through. When Wile E. furiously follows in hot pursuit, he smacks face-first into the fake tunnel opening. Alas, another victory for “Beep Beep.”

Rober is convinced the culprit for the crash in his case resides in Autopilot’s lack of Lidar sensors. Lidar, which stands for Light Detection and Ranging, works by sending out millions of laser pulses in all directions around a vehicle and measuring how quickly they bounce back. That information is used to rapidly create a 3D map of the vehicle’s surroundings and help it avoid obstacles like pedestrians, animals, or—in this case—a camouflaged wall. Most people will recognize Lidar as the spinning tops fastened on the roof of driverless vehicles. 
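The ranging principle itself is simple arithmetic; the engineering challenge is doing it for millions of pulses per second. Here is a sketch with made-up numbers, purely to illustrate the time-of-flight calculation Rober describes:

```python
# Time-of-flight ranging, the principle behind lidar (numbers are illustrative).
SPEED_OF_LIGHT = 299_792_458.0               # meters per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target given how long a laser pulse takes to bounce back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2   # the pulse travels out and back

# An obstacle about 12 meters ahead returns an echo in roughly 80 nanoseconds.
print(range_from_echo(80e-9))                # ~12.0 m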

Though most high-level autonomous vehicle systems on the road today like Waymo use Lidar prominently, Tesla has long bucked that trend in an effort to one day create “full autonomy” using only camera vision. Elon Musk, the company’s CEO, has been outspoken about this approach, repeatedly criticizing Lidar as a “crutch” and a “fool’s errand.” In the video, Rober explains why he believes that so-called “crutch” could have prevented his Tesla from crashing through the wall.

“While that [the fake wall] sort of looks convincing, the image processing in our brains is advanced enough that we pick up on the minor visual inconsistencies and we wouldn’t hit it,” Rober said. “A car with Lidar would stop because it is using a point cloud that detects a wall without seeing the image at all.” 

To buttress that point, Rober repeated the same test using a Lexus RX-based prototype equipped with Lidar. In that case, the Lexus detected the wall and slowed to a stop before making contact. Rober ran several additional tests, including seeing whether the vehicle would stop for a mannequin standing in the road under both clear conditions and in rain and fog. The Lexus stopped in both scenarios, but the Tesla on Autopilot struggled to detect the mannequin in adverse weather conditions.


Tesla’s lack of Lidar has drawn regulatory scrutiny

Though Rober’s results are pretty funny, they point to a real debate raging among autonomous vehicle developers—often with serious consequences. Last year, the National Highway Traffic Safety Administration opened a new federal investigation into Tesla’s Full Self-Driving (FSD) feature, a supposedly more advanced version of Autopilot, following numerous reports of crashes in poor-visibility settings. One of those crash reports resulted in the death of a pedestrian. Tesla did not respond to our request for comment.

Multiple autonomous driving experts who previously spoke with Popular Science did not completely rule out the possibility of autonomous systems driven primarily by camera vision. Still, they pointed to several real-world examples—including one where a Tesla on Autopilot plowed through a deer without stopping—as potentially tied to the lack of Lidar.

“[LiDAR is] going to tell you how quickly that object is moving from space,” University of San Francisco Professor and autonomous vehicle expert William Riggs previously told Popular Science. “And it’s not going to estimate it like a camera would do when a Tesla is using FSD.”


via Popular Science – New Technology, Science News, The Future Now https://www.popsci.com

March 19, 2025 at 11:58AM

I tried running AI chatbots locally on my laptop — and they kinda suck

https://www.pcworld.com/article/2635208/i-tried-running-ai-chatbots-locally-on-my-laptop-and-they-kinda-suck.html

When DeepSeek-R1 was released back in January, it was incredibly hyped up. The reasoning model could be distilled down into smaller large language models (LLMs) that run on consumer-grade laptops. If you believed the headlines, you’d think it’s now possible to run AI models that are competitive with ChatGPT right on your toaster.

That just isn’t true, though. I tried running LLMs locally on a typical Windows laptop, and the whole experience still kinda sucks. A handful of problems keep rearing their heads.

Problem #1: Small LLMs are stupid

Newer open LLMs often brag about big benchmark improvements, and that was certainly the case with DeepSeek-R1, which came close to OpenAI’s o1 in some benchmarks.

But the model you run on your Windows laptop isn’t the same one that’s scoring high marks. It’s a much smaller, more condensed model—and smaller versions of large language models aren’t very smart.

Just look at what happened when I asked DeepSeek-R1-Llama-8B how the chicken crossed the road:

Screenshot: DeepSeek-R1-Llama-8B’s rambling answer to the chicken question. (Matt Smith / Foundry)

This simple question—and the LLM’s rambling answer—shows how smaller models can easily go off the rails. They frequently fail to notice context or pick up on nuances that should seem obvious.

In fact, recent research suggests that less intelligent large language models with reasoning capabilities are prone to such faults. I recently wrote about the issue of overthinking in AI reasoning models and how they lead to increased computational costs.

I’ll admit that the chicken example is a silly one. How about we try a more practical task? Like coding a simple website in HTML. I created a fictional resume using Anthropic’s Claude 3.7 Sonnet, then asked Qwen2.5-7B-Instruct to create an HTML website based on the resume.

The results were far from great:

Screenshot: the resume website generated by Qwen2.5-7B-Instruct. (Matt Smith / Foundry)

To be fair, it’s better than what I could create if you sat me down at a computer without an internet connection and asked me to code a similar website. Still, I don’t think most people would want to use this resume to represent themselves online.

A larger and smarter model, like Anthropic’s Claude 3.7 Sonnet, can generate a higher quality website. I could still criticize it, but my issues would be more nuanced and less to do with glaring flaws. Unlike Qwen’s output, I expect a lot of people would be happy using the website Claude created to represent themselves online.

And, for me, that’s not speculation. That’s actually what happened. Several months ago, I ditched WordPress and switched to a simple HTML website that was coded by Claude 3.5 Sonnet.

Problem #2: Local LLMs need lots of RAM

OpenAI’s CEO Sam Altman is constantly chin-wagging about the massive data center and infrastructure investments required to keep AI moving forward. He’s biased, of course, but he’s right about one thing: the largest and smartest large language models, like GPT-4, do require data center hardware with compute and memory far beyond that of even the most extravagant consumer PCs.

And it isn’t just limited to the best large language models. Even smaller and dumber models can still push a modern Windows laptop to its limits, with RAM often being the greatest limiter of performance.


The “size” of a large language model is measured by its parameters, where each parameter is a distinct variable used by the model to generate output. In general, more parameters mean smarter output—but those parameters need to be stored somewhere, so adding parameters to a model increases its storage and memory requirements.

Smaller LLMs with 7 or 8 billion parameters tend to weigh in at 4.5 to 5 GB. That’s not huge, but the entire model must be loaded into memory (i.e., RAM) and sit there for as long as the model is in use. That’s a big chunk of RAM to reserve for a single piece of software.
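A rough back-of-the-envelope estimate shows where those figures come from, assuming a typical 4- to 5-bit quantization (the numbers here are illustrative, not measurements):

```python
# Back-of-the-envelope size estimate for a quantized local LLM (illustrative only).
def model_size_gb(parameters: float, bits_per_param: float = 4.5, overhead: float = 1.1) -> float:
    """Rough footprint: parameter count times bits per parameter, plus ~10% headroom."""
    return parameters * bits_per_param / 8 * overhead / 1e9

print(round(model_size_gb(8e9), 1))    # an 8B-parameter model -> about 5 GB
print(round(model_size_gb(70e9), 1))   # a 70B-parameter model -> about 43 GB
```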

While it’s technically possible to run an AI model with 7 billion parameters on a laptop with 16GB of RAM, you’ll more realistically need 32GB (unless the LLM is the only piece of software you have open). Even the Surface Laptop 7 that I use to test local LLMs, which has 32GB of RAM, can run out of available memory if I have a video editing app or several dozen browser tabs open while the AI model is active.

Problem #3: Local LLMs are awfully slow

Configuring a Windows laptop with more RAM might seem like an easy (though expensive) solution to Problem #2. If you do that, however, you’ll run straight into another issue: modern Windows laptops lack the compute performance required by LLMs.

I experienced this problem with the HP EliteBook X G1a, a speedy laptop with an AMD Ryzen AI processor that includes capable integrated graphics and an integrated neural processing unit. It also has 64GB of RAM, so I was able to load Llama 3.3 with 70 billion parameters (which eats up about 40GB of memory).


Yet even with that much memory, Llama 3.3-70B still wasn’t usable. Sure, I could technically load it, but it could only output 1.68 tokens per second. (It takes about 1 to 3 tokens per word in a text reply, so even a short reply can take a minute or more to generate.)
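To put that throughput in perspective, here is the arithmetic using the article’s numbers, with the word-to-token ratio taken mid-range:

```python
# How long a modest reply takes at 1.68 tokens/second (mid-range token estimate).
tokens_per_second = 1.68       # measured throughput for Llama 3.3-70B on this laptop
words_in_reply = 150           # a short, paragraph-length answer
tokens_per_word = 2            # the 1-3 tokens per word figure, taken mid-range

seconds = words_in_reply * tokens_per_word / tokens_per_second
print(f"{seconds / 60:.1f} minutes")   # roughly 3 minutes for a 150-word answer
```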

More powerful hardware could certainly help, but it’s not a simple solution. There’s currently no universal API that can run all LLMs on all hardware, so it’s often not possible to properly tap into all the compute resources available on a laptop.

Problem #4: LM Studio, Ollama, and GPT4All are no match for ChatGPT

Everything I’ve complained about up to this point could theoretically be improved with hardware and APIs that make it easier for LLMs to utilize a laptop’s compute resources. But even if all that were to fall into place, you’d still have to wrestle with the unintuitive software.

By software, I mean the interface used to communicate with these LLMs. Many options exist, including LM Studio, Ollama, and GPT4All. They’re free and impressive—GPT4All is surprisingly approachable—but they just aren’t as capable or easy to use as ChatGPT, Claude, and the other leading cloud services.
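For a sense of what the local workflow looks like, here is a small, illustrative example that queries a model already pulled and served by Ollama through its local HTTP API (the model name is whatever you happen to have installed):

```python
# Querying a locally served model through Ollama's HTTP API (Ollama must already be
# running with a model pulled, e.g. via `ollama pull llama3`; adjust the name to taste).
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",                       # any model you've pulled locally
    "prompt": "Why did the chicken cross the road?",
    "stream": False,                         # ask for a single JSON reply, not a stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```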


Plus, local LLMs are less likely to be multimodal, meaning most of them can’t work with images or audio. Most LLM interfaces support some form of retrieval-augmented generation (RAG) to let you “talk” with documents, but context windows tend to be small and document support is often limited. Local LLMs also lack the cutting-edge features of larger online-only LLMs, like OpenAI’s Advanced Voice Mode and Claude’s Artifacts.

I’m not trying to throw shade at local LLM software. The leading options are rather good, plus they’re free. But the honest truth is that it’s hard for free software to keep up with rich tech giants—and it shows.

Solutions are coming, but it’ll be a long time before they get here

The biggest problem of all is that there’s currently no way to solve any of the above problems.

RAM is going to be an issue for a while. As of this writing, the most powerful Windows laptops top out at 128GB of RAM. Meanwhile, Apple just released the M3 Ultra, which can support up to 512GB of unified memory (but you’ll pay at least $9,499 to snag it).

Compute performance faces bottlenecks, too. A laptop with an RTX 4090 (soon to be superseded by the RTX 5090) might look like the best option for running an LLM—and maybe it is—but you still have to load the LLM into the GPU’s memory. An RTX 5090 will offer 24GB of GDDR7 memory, which is a lot, relatively speaking, but still only enough to support AI models up to around 32 billion parameters (like QwQ 32B).

Even if you ignore the hardware limitations, it’s unclear if software for running locally hosted LLMs will keep up with cloud-based subscription services. (Paid software for running local LLMs is a thing but, as far as I’m aware, only in the enterprise market.) For local LLMs to catch up with their cloud siblings, we’ll need software that’s easy to use and frequently updated with features close to what cloud services provide.

These problems will probably be fixed with time. But if you’re thinking about trying a local LLM on your laptop right now, don’t bother. It’s fun and novel but far from productive. I still recommend sticking with online-only models like GPT-4.5 and Claude 3.7 Sonnet for now.


via PCWorld https://www.pcworld.com

March 18, 2025 at 05:40AM

Amazon Will Listen to All Your Voice Recordings If You Use Alexa+

https://gizmodo.com/amazon-will-listen-to-all-your-voice-recordings-if-you-use-alexa-2000576755

Amazon’s AI-enhanced Alexa assistant is going to need all your voice recordings, and there’s nothing you can do about it. An email sent to Alexa users notes that the online retail giant is ending one of its few privacy provisions for recorded voice data in the lead-up to Alexa+. The only way to make sure Amazon doesn’t get a hold of any of your vocals may be to quit using Alexa entirely. Gizmodo reached out to Amazon for confirmation, though we did not immediately hear back.

You can find the full email on Reddit (as first reported by Ars Technica), which plainly states the “Do Not Send Voice Recordings” setting on Alexa is being discontinued on March 28. Anybody who has the setting enabled will have it automatically revoked, and Amazon will then be able to process your voice recordings. Amazon claims it will delete the recordings once it’s done processing your request.

“As we continue to expand Alexa’s capabilities with generative AI features that rely on the processing power of Amazon’s secure cloud, we have decided to no longer support this feature,” the email reads. “If you do not take action, your Alexa Settings will automatically be updated to ‘Don’t save recordings.’ This means that, starting on March 28, your voice recording will be sent to and processed in the cloud, and they will be deleted after Alexa processes your requests. any previously saved voice recordings will also be deleted.”

Alexa+, Amazon’s upcoming AI version of its normally inconsistent voice assistant, is supposed to allow for far more utility than it had in the past. The new assistant should be able to order groceries for you via multiple apps, including Amazon Fresh and Instacart, based on broad requests like “get me all the ingredients I need to make a pizza at home.” It’s supposed to set smart home routines, access your security footage, and look for Prime Video content in a conversational manner. The other big headline feature is Voice ID, where Amazon claims Alexa can identify who is speaking to it. The AI theoretically should learn users’ habits over time and tailor its responses to each individual.

Alexa+ is supposed to come to all current Echo Show devices and will supposedly make its way to future Echo products as well. If you have an Amazon Prime account, you’ll get immediate access to Alexa+. Without the subscription, you’ll need to cough up another $20 a month for the sake of talking to AI-infused Alexa. The tradeoff is that you’ll now have to hand your voice data over to the online retail giant to do with as it pleases.

There are more than a few reasons you don’t want Amazon anywhere near your voice data. For years, Amazon’s default setting gave workers access to user data, even offering some the ability to listen to users’ Alexa recordings. In 2023, the company paid out $25 million to the Federal Trade Commission over allegations it gave employees access to children’s voice data and Ring camera footage. For its part, Amazon said it had changed its data practices to comply with the Children’s Online Privacy Protection Act, aka COPPA. Amazon’s privacy track record is spotty, at best. The company has long been obsessed with users’ voice data. In 2023, Amazon revealed it was using Alexa voice recordings to help train its AI. Gizmodo reached out to Amazon to confirm whether Alexa+ voice recordings will also be used to train the company’s AI models. We will update this story once we hear back.

Unlike Apple, which made big claims about data protections with its “private cloud compute” system for processing cloud-based AI requests anonymously, Amazon has made far fewer overtures toward keeping user data safe. Smaller AI models can run on-device, but the few examples we have of on-device capabilities from the likes of Windows Copilot+ laptops or Gemini on Samsung Galaxy S25 phones are—in their current iteration—little more than gimmicks. Alexa+ wants to be the first instance of a true AI assistant with cross-app capabilities, but it may also prove a privacy nightmare from a company that has routinely failed to protect users’ data.

via Gizmodo https://gizmodo.com/

March 17, 2025 at 10:27AM

Why new tech only feels good for a short time

https://www.popsci.com/health/why-new-tech-only-feels-good-for-a-short-time/

A friend recently sent me a video by YouTuber Any Austin about getting Red Dead Redemption 2 running on an old CRT television, which I obviously watched because I love gimmicky tech videos involving obsolete things. I was expecting to laugh at something mixing retro and current technology, and that happened, but then the video wandered into human psychology.

I thought it would be ridiculous to play a modern game on such an old TV, mostly because it is. But after playing for a little bit, Any Austin realized that, once you get used to it, playing a modern game on a TV that’s been obsolete for decades just…doesn’t feel that different. Sure, there were annoyances—certain things were cropped off the screen—but for the most part the game was just as immersive and fun on an ancient TV as on a contemporary one.

“The human brain is just really good at normalizing basically anything that isn’t directly causing us to die,” Any Austin explains in the video. “Your brand new PC is probably giving you about the same amount of joy as your old PC. Your great fancy new job probably feels just about as soul sucking as your old job, provided you control for other factors like money.” 

That…can’t be how human brains work. Can it? I decided to look into the psychology. (Spoiler: It’s exactly how human brains work.)

The Hedonic Treadmill

The psychological phenomenon known as the hedonic treadmill has been well documented since at least the 1970s. The concept refers to how humans tend to revert to a baseline level of happiness after positive or negative changes in their lives. There may be a spike in happiness after a wedding, a promotion at work, or buying a new TV, but that is temporary—people tend to eventually revert to their previous levels of happiness. The same thing is true about negative life changes. 

An early study showing this, published in the Journal of Personality and Social Psychology in 1978, examined the relative happiness of three groups: lottery winners, people who went through serious automobile accidents, and a control group. The lottery winners’ results were surprising: 

Lottery winners and controls were not significantly different in their ratings of how happy they were now, how happy they were before winning (or, for controls, how happy they were 6 months ago), and how happy they expected to be in a couple of years. 

Now, there was nuance in the study. The victims of car accidents did not adapt to the same extent, though the study notes that “the accident victims did not appear nearly as unhappy as might have been expected.” Even so, the hedonic treadmill has been replicated in study after study over the years. Positive and negative changes alike tend to have a big impact on our levels of happiness in the short term but over time, we revert back to our base levels of happiness. 

What does this have to do with playing Red Dead Redemption on an ancient TV? The same psychological tendency is in play. If you bought the TV of your dreams tomorrow, there could be a honeymoon period during which you feel that it makes your video game experience better, and that could make you happier.

After that period, though, you’ll get right back to the same level of satisfaction as before. Eventually maybe you hear about a newer, better TV, which you now want to buy in order to get that same happiness boost you got from buying the last one. That’s why this is called a treadmill: you think the next purchase will permanently boost your happiness only to end up right back where you started. 

How to Get Off the Treadmill

Knowing this, how can we get more satisfaction out of our gadgets? The answer might be spending more time thinking about how much you enjoy the things you already have. A 2011 paper by Kennon M. Sheldon and Sonja Lyubomirsky, published in the Personality and Social Psychology Bulletin, showed that regularly thinking about the positive changes in your life—and thinking less about hypothetical future changes—can help maintain the increase in happiness. From the conclusion:

In other words, because of the very adaptation processes examined in the current research, the appeal of the new car, house, or handbag that initially brought pleasure begins to fade, such that people are soon tempted to buy an even better car, house, or handbag, trying to regain the initial exhilaration that has gone missing. However, in a world of expanding debt, declining resources, and questionable sustainability, it seems imperative to arrest or minimize this process, so that people can learn to be content with less. Our study suggests that this is an attainable goal, realizable when people make efforts to be grateful for what they have and to continue to interact with it in diverse, surprising, and creative ways.

The specifics of appreciating changes in creative ways aren’t laid out, but I think Any Austin’s video ends with a pretty good one: occasionally switching out your current tech for something ancient, then switching back to modern tech. 

Hear me out on this: Here’s what you should do. Buy two TVs: a small 720p one and then a bigger 1080p one. Anytime you get the hankering for something new, you just switch back and forth between them. Going from the big one to the small one will feel cute and novel and cozy, and then going from the small one to the big one will feel like this huge immersive upgrade.

I am far from a psychology expert, and I think Any Austin would admit the same thing. Given the hedonic treadmill, though, this doesn’t sound like the worst idea—you could, in theory, give yourself that little happiness boost from trying something new on a regular basis. You’re tricking yourself into appreciating the thing you already have instead of dwelling on how much better life would be if you had something even better.

You don’t have to go to this extreme, though. Just know that the research suggests you’ll be happier with your tech if you spend more time appreciating what you have and less time dreaming about what you could buy instead.


via Popular Science – New Technology, Science News, The Future Now https://www.popsci.com

March 17, 2025 at 08:58AM

Good Oral Hygiene Can Prevent Other Overall Health Issues, Even Dementia

https://www.discovermagazine.com/health/good-oral-hygiene-can-prevent-other-overall-health-issues-even-dementia

When it comes to good overall health, the teeth are often overlooked. We tend to think of teeth as primarily cosmetic when, in reality, oral health is linked to physical health and the risk of a number of chronic conditions. The mouth is a gateway to the rest of the organs, and when it becomes diseased, the rest of the body can become diseased, too.

“Your mouth is the primary way bacteria enters your body. Bacteria can travel from areas like infected gums through the bloodstream to other parts of your body,” says Shashwat Patel of Hamilton Dental in Hamilton, Ontario. “Decaying teeth and gum disease also adversely impact nutrition because when people can’t chew well that affects overall health.”

Research has also shown that poor oral health is linked to a laundry list of chronic conditions, including diabetes, cardiovascular disease, infections, and poor quality of life.

The Link Between Poor Oral Health and Diabetes

An August 2024 study published in The American Journal of Medicine found that periodontal disease, an infection of the tissue that supports the teeth, was a strong risk factor for diabetes. It’s likely the result of long-term inflammation in the body, says Frank A. Scannapieco, study author and professor of oral biology at the University at Buffalo School of Dental Medicine.

“Long-term chronic periodontitis can contribute to systemic inflammation, which could contribute to diabetes,” says Scannapieco. Over time, inflammation in the body can contribute to insulin resistance as well.

Gum disease can also raise your blood sugar, according to research published in The Journal of the American Dental Association. The study found that, compared to patients with healthy gums, people with severe gum disease had higher blood sugar, leading to an increased risk of type 2 diabetes.


Poor Oral Health and Other Chronic Diseases

The mouth is also linked to other chronic diseases, such as respiratory infections like pneumonia, an infection of the air sacs in the lungs. The mouth is connected directly to the lungs through the trachea, so any kind of aspiration into the lower airway could be linked to an infection. The problem is made worse in patients who are intubated and need assistance breathing, which can cause microbes from the mouth to enter the lower airway, says Scannapieco.

Alzheimer’s disease and dementia are also linked to inflammation that may be at least partially driven by poor oral health. Chronic inflammation in the body can lead to inflammation in the brain, which can cause cognitive decline.

It’s also possible that bacteria could cross the blood-brain barrier, and some of the microbes that originated in the oral cavity might increase the risk of developing these diseases. There’s evidence of oral bacteria being detected in the brains of animal subjects who had also shown signs of Alzheimer’s disease and dementia.

“We’ve seen some interesting clues that oral bacteria could influence the course of a disease like Alzheimer’s. And since gum disease is so widespread, it’s possible that this could be a contributor to dementia, though the research is still very preliminary,” says Scannapieco.

Practicing Good Oral Health

The good news is that maintaining good oral health is quite simple, starting with brushing your teeth twice daily for at least two minutes when you wake up in the morning and again when you go to bed at night. Additionally, remove plaque from between the teeth by flossing daily. Visit a dentist every six months for a professional teeth cleaning and to have your teeth checked for cavities. Finally, eat a balanced diet that doesn’t contain excessive sugary drinks or foods.

While oral health may turn out to have an outsized impact on your overall health, protecting your teeth, at least, is no big deal.

This article is not offering medical advice and should be used for informational purposes only.



Sara Novak is a science journalist based in South Carolina. In addition to writing for Discover, her work appears in Scientific American, Popular Science, New Scientist, Sierra Magazine, Astronomy Magazine, and many more. She graduated with a bachelor’s degree in journalism from the Grady School of Journalism at the University of Georgia. She’s also a candidate for a master’s degree in science writing from Johns Hopkins University (expected graduation 2023).

via Discover Main Feed https://ift.tt/ufAWVTw

March 14, 2025 at 08:15AM

This nightmarish $35K computer is powered by a lab-grown human brain

https://www.pcworld.com/article/2634911/this-nightmarish-35k-computer-is-powered-by-a-lab-grown-human-brain.html

An Australian company called Cortical Labs has developed a computer powered by lab-grown human brain cells, Gizmodo reports.

The computer, known as CL1, is described as the world’s first “code deployable biological computer” and is now available for pre-order — for a price in the $35,000 range. Don’t want to buy your own device? The company also offers “Wetware-as-a-Service,” through which you can rent bio-computer processing power via the cloud.

CL1 consists of lab-grown human neurons cultivated on a glass-and-metal electrode array. They’re connected to 59 electrodes, creating a stable neural network. The system is encased in a life support unit that keeps the neurons alive by mimicking the body’s organ functions, including heart-like pumping, kidney-like waste filtration, and gas mixing of oxygen, carbon dioxide, and nitrogen.

According to Cortical Labs, the neurons are placed in a nutrient solution and receive their information from the company’s Biological Intelligence Operating System (biOS), which creates a simulated world in which the neurons receive sensory input and produce responses that affect the environment. CL1 is designed as a high-performance closed loop, where neurons interact with software in real time. The system can stay alive for up to six months and is compatible with USB devices.

Cortical Labs demonstrated an early version of the technology by teaching the system to play Pong. They claim that biological computers can rival or surpass digital AI systems, especially when it comes to understanding the basic mechanisms of intelligence.

According to the company’s Chief Scientific Officer, Brett Kagan, a network of 120 CL1 devices could give researchers insight into how genes and proteins affect learning. The technology can also be used in drug development and disease modeling by simulating neurological processes at the molecular level.

via PCWorld https://www.pcworld.com

March 12, 2025 at 10:47AM