Analogue’s 4K remake of the N64 is almost ready, and it’s a big deal

https://www.engadget.com/gaming/analogues-4k-remake-of-the-n64-is-almost-ready-and-its-a-big-deal-150033468.html?src=rss

A year after it was first teased, Analogue says it’s nailed its most complicated project yet: rebuilding the Nintendo 64 from scratch. The Analogue 3D will ship in Q1 2025 — it was originally slated for 2024 — and pre-orders start on October 21 at $250.

Like all of the company’s machines, the Analogue 3D has an FPGA (field programmable gate array) chip coded to emulate the original console on a hardware level. Analogue promises support for every official N64 cartridge ever released, across all regions, with no slowdown or inaccuracies. If it achieves that goal, the Analogue 3D will be the first system in the world to perfectly emulate the N64, though other FPGA and software emulators get pretty close.

The company has been selling recreations of retro consoles for over a decade, starting with high-end, bespoke takes on the Neo-Geo and NES. Over time it’s gradually shifted over to more mass-market (though still high-end) productions, with versions of SNES, Genesis and Game Boy all coming in at around the $200 mark. All of the company’s systems support original physical media, rather than ROMs.

Analogue’s original unique selling point was its use of FPGA chips. Rather than using software emulation to play ROMs, Analogue programs FPGA “cores” to emulate original console hardware, and its consoles support original game media and controllers. Compared with software emulation (especially in the early ’10s when Analogue got started), FPGA-based consoles are more accurate, and don’t suffer from as much input lag.

FPGA emulation has come a long way over the past decade. Where Analogue was once the only route into the world of FPGAs for most people, there’s now a rich community of developers and hardware manufacturers involved. The open-source MiSTer project, for example, has accurately emulated almost every video game thing produced up to the mid ’90s. And plenty of smaller manufacturers are now selling FPGA hardware for very reasonable prices. The FPGBC is one good example: It’s a simple DIY kit that lets you build a modern-day Game Boy Color for a much lower price than an Analogue Pocket.

A DE10-Nano board produced by Terasic. (Image: Terasic)

Amid all these developments, Analogue occupies a strange spot in the retro gaming community, which has evolved into an open-source, people-powered movement to preserve and play old games. It produces undeniably great hardware that doesn’t require expertise to use, but its prices are high, and its limited-run color variants of consoles like the Pocket have both created FOMO in the community and been a consistent target for scalpers. Analogue is, in many ways, the Apple of the retro gaming hardware space.

With that said, it’s hard to deny that the Pocket has brought more players into the retro gaming world and attracted talent to FPGA development. And if Analogue comes through on its promise here, the Analogue 3D will be another huge moment for video game preservation, and could be the spark for another half-decade of fantastic achievements from the FPGA community at large.

Breaking the fifth-gen barrier

While the FPGA emulation of the first few video game generations is largely a solved problem, there’s a huge leap in complexity between the fourth generation (SNES, Genesis, etc.) and the next. Strides have been made to rebuild the PlayStation, Saturn and N64 in FPGA, but there is no core for any fifth-gen console that has fully solved the puzzle. The current state of the MiSTer N64 core is pretty impressive, with almost every US game counted as playable, but very few games are considered to run flawlessly.

So how did Analogue solve this? The company does have a talented team, but more importantly it has a leg up when it comes to hardware. The Analogue 3D uses the most powerful variant of Intel’s Cyclone 10 GX FPGA, with 220,000 logic elements. For context, the DE10-Nano board at the heart of the open-source MiSTer project has a Cyclone V FPGA with 110,000 logic elements, while the Analogue Pocket’s main FPGA offers 49,000. There’s a lot more to an FPGA than its logic elements, but the numbers are illustrative: The 3D’s FPGA is undoubtedly the most powerful Analogue has ever used, which clearly gave it more flexibility in designing its core.

While we can’t verify Analogue’s claim of 100 percent compatibility by looking at a spec sheet, the company does have a good track record of programming fantastic FPGA cores, so it’s likely it’ll get incredibly close.

Nintendo 64 with Zelda, Mario Kart 64, Perfect Dark and GoldenEye 007. (Photo: Kris Naudus for Engadget)

Of course, if you just wanted to play N64 games accurately, you could plug an N64 into any TV with a composite or S-Video connector, or use one of the many boxes that convert those formats into the HDMI signals modern TVs require.

The problem with running an N64 on a modern TV is that its games run at a wide range of resolutions, typically from 320 x 240 up to (very rarely) 640 x 480, the maximum output. There are countless oddball resolutions in between, and some games run below 320 x 240. This is a nightmare for modern displays. Some resolutions will scale to a full screen very nicely — both of the common ones I listed multiply neatly to 4K, albeit with pillarboxing. The situation gets more confusing with PAL cartridges, which can run at fun vertical resolutions like 288 and 576 lines. There’s also the issue that the vast majority of these games were designed with the CRT displays of old in mind, taking advantage of the quirks of scanlines to, say, make a checkerboard pattern look translucent.
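
To make the scaling headache concrete, here’s a quick calculation (mine, not Analogue’s) of the factor needed to fill a 2160-line 4K panel from the vertical resolutions mentioned above. Whole-number factors map each source line to an exact number of screen rows; fractional ones force a scaler to compromise.

```python
# Quick arithmetic, my own illustration: how cleanly do common N64/PAL vertical
# resolutions fit a 2160-line 4K panel?
for lines in (240, 480, 288, 576):
    factor = 2160 / lines
    kind = "integer" if factor.is_integer() else "fractional"
    print(f"{lines} source lines -> x{factor:g} to reach 2160 ({kind})")
# 240 -> x9 (integer), 480 -> x4.5, 288 -> x7.5, 576 -> x3.75
```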

This makes playing N64 games on a modern TV a bit of a hassle. There are fantastic retro upscalers like the RetroTINK series, but when plugging in a game for the first time, you wind up deciding between integer and “good enough” scaling, dealing with weird frame rates and tweaking blending options to get the picture just right. Many people enjoy this fine-tuning and customization aspect, and all power to you! But it’s undoubtedly a barrier to entry, and much of the hard work done on upscaling has been focused on 2D gaming, rather than 3D.

Analogue says its scaling solution will solve many of these issues. The Analogue 3D supports 4K output, variable refresh rate displays, and PAL and NTSC carts. On top of those basics, it’s building out “Original Display Modes” to emulate the CRT TVs and PVMs of old. Calling ODMs filters feels a little reductive, as they’re a complicated and customizable mix of display tricks, but essentially you pick one and it changes the way the picture looks, so….
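
As a rough illustration of the kind of transform a CRT-style display mode performs, here is a toy sketch (my own, not Analogue’s ODM code): a minimal scanline pass that darkens alternate rows of an already-scaled frame. Real ODMs layer far more display tricks on top, so treat this purely as a concept demo.

```python
import numpy as np

def apply_scanlines(frame: np.ndarray, strength: float = 0.35) -> np.ndarray:
    """frame: (H, W, 3) uint8 image already scaled to the output resolution."""
    out = frame.astype(np.float32)
    out[1::2] *= 1.0 - strength  # dim every other row to fake visible scanlines
    return out.clip(0, 255).astype(np.uint8)

# Example: darken odd rows of a flat gray test frame by 35 percent.
test = np.full((2160, 2880, 3), 128, dtype=np.uint8)
shaded = apply_scanlines(test)
```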

ODMs were used effectively on the Analogue Pocket to emulate various Game Boy displays. Perhaps the most impressive example is a Trinitron ODM that came to the Pocket in 2023 that, when used with the Analogue Dock, does a pretty incredible job of turning a modern TV into a high-end Sony tube TV. We don’t have a ton of information on which ODMs are coming to the 3D, but I will share the very ’90s ad for the feature below:

Analogue 3D ODMs. (Image: Analogue)

The final piece of the image-quality puzzle is frame rate. The N64’s library is full of some spectacularly slow games. My memory may be scarred from growing up in a PAL region, which meant that while the US and Japan’s NTSC consoles were outputting a blistering 20 fps, I was chugging away at 16.66 fps. But even in the idealized NTSC world, lots of games outright missed their frame rate targets comically often. As an example, the majority of GoldenEye’s single-player campaign plays out between 15 and 25 fps, while a four-player match would typically see half that. And let’s not speak of Perfect Dark.
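
The arithmetic behind those numbers, as my own illustration using nominal 60 Hz NTSC and 50 Hz PAL refresh rates: a game that only finishes a new frame every N display refreshes runs at the refresh rate divided by N.

```python
# Effective frame rate when a game holds each frame for several display refreshes.
for refresh, region in ((60.0, "NTSC"), (50.0, "PAL")):
    for hold in (2, 3, 4):
        print(f"{region}: new frame every {hold} refreshes -> {refresh / hold:.2f} fps")
# A 3-refresh hold gives 20 fps on NTSC and 16.67 fps on PAL -- the gap described
# above. A 4-refresh hold drops NTSC all the way to 15 fps.
```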

These glacial frame rates are far less noticeable on a CRT than they are on modern displays with crisp rows of pixels updating from top to bottom. While the ODMs go some way to replicating the feel of an old TV, they can’t change the underlying technical differences. The Analogue 3D does support variable refresh rate output, but that won’t do much when a game is running at 12 fps, and instead is intended to help the system run like the original N64 did at launch. 

In its initial press push last year, Analogue told Paste magazine that you’ll have the option to overclock the 3D’s virtual chips to run faster — "overclocking, running smoother, eliminating native frame dips" — but the company hasn’t mentioned that in its final press release. Instead, Analogue CEO Christopher Taber told Engadget that its solution "isn’t overclocking, it’s much better and more sophisticated." It revolves around Nintendo’s original Rambus RAM setup, which is often the bottleneck for N64 performance. Solving this bottleneck "means that games can run without slowdown and all the classic issues the original N64 had," he explained.

By default, though, the Analogue 3D is set up to run exactly like original hardware, albeit with the RAM Expansion Pak attached. "Preserving the original hardware is the number one goal," Taber explained. "Even when bandwidth is increased, it’s not about boosting performance beyond the system’s original capabilities — it’s about giving players a clearer window into how the games were designed to run." 

The Analogue 3D console. (Image: Analogue)

The hardware

Analogue has a rich history of making very pretty hardware, and the Analogue 3D is clearly no exception. As with the Super Nt, Mega Sg, and Duo, the 3D calls back to the basic form of the console it’s based on, while smoothing out and modernizing it somewhat. It’s an elegant way to pull on nostalgia while also being legally distinct enough to avoid a lawsuit. (Analogue’s FPGA cores and software also don’t infringe on any Nintendo IP.)

The Analogue 3D has a similar shape to the N64, but the front pillars have been erased, the four controller ports match the housing and the power/reset buttons are slanted inwards to point toward the cartridge slot. Despite the tweaks, it still undoubtedly evokes a Nintendo 64. Around the back, you’ll find a USB-C port for power, two USB ports for accessories like non-standard controllers, an HDMI port and a full-sized SD card slot.

A new operating system from Analogue, 3DOS, will debut with the system. It looks like a blend of the AnalogueOS that debuted on the Pocket and the Nintendo Switch OS, with the homescreen centered on a large carousel of square cards. The screenshots Analogue provided show options for playing a cartridge, browsing your library or viewing save states and screenshots. Some N64 games have the ability to save data to the cartridge, while others rely on a Controller Pak, but the ability to quickly save progress as a memory, as introduced with the Pocket, will be useful nonetheless. 3DOS can also connect to the internet over the console’s built-in WiFi chip for OS updates, which is a first for Analogue.

While you can browse your library in 3DOS, you won’t actually be able to load any game that isn’t physically inserted into the cartridge slot: The Analogue 3D only plays original media. It’s also worth noting that the Analogue 3D doesn’t have an “openFPGA” setup like the Analogue Pocket, which opened the door to playing with a wild array of cores that emulate various consoles, computers and arcades. It doesn’t usually take long for someone to jailbreak Analogue consoles to play ROMs (or other cores) via the system’s SD card slot, but this is not officially supported or sanctioned by Analogue.

The console comes with a power supply (with a US plug), a USB cable, an HDMI cable and a 16GB SD card. As per usual, no controller will be packed in — it’s up to you whether you want to use original hardware or something more modern. I managed to make at least one reader extremely mad (I’m sorry, Brucealeg) last time I wrote about the Analogue 3D and called the N64 controller a mistake. Personally, though, it feels really rough to use one in 2024.

The 8BitDo controller for the Analogue 3D. (Image: Analogue/8BitDo)

If you enjoy the three-pronged original controller, the 3D has four ports for you, and the system will also support the myriad Paks that plug into those controllers. For everyone else, there’s Bluetooth Classic and LE support along with two USB ports for wired controllers. Accessory maker 8BitDo has created what seems to be a variant of its Ultimate controller specifically for the Analogue 3D. (Analogue’s CEO, Taber, is also 8BitDo’s CMO, and the companies have collaborated on controllers for many consoles at this point.)

The 8BitDo controller looks like a fairly happy middle ground between old and new, with an octagonal gate around the thumbstick, and nicely raised and sized C-buttons. It has a Rumble Pak built in, which works on both the Analogue 3D and Nintendo Switch. It’s available in black or white hues that match the console, and sells separately for $39.99.

Pre-orders for the Analogue 3D open on October 21 at 11AM ET, with an estimated ship date of Q1 2025. It’s unclear how many will be available, but if past launches are any indication, you should be ready to click buy as close to 11AM as possible if you want a hope of being in the first wave of shipments.

via Engadget http://www.engadget.com

October 16, 2024 at 10:05AM

How to Generate an AI Podcast Using Google’s NotebookLM

https://www.wired.com/story/ai-podcast-google-notebooklm/

Two podcast hosts banter back and forth during the final episode of their series, audibly anxious to share some distressing news with listeners. “We were, uh, informed by the show’s producers that we’re not human,” a male-sounding voice stammers out, mid-existential crisis. The conversation between the bot and his female-sounding cohost only gets more uncomfortable after that—an engaging, albeit misleading, example of Google’s NotebookLM tool and its experimental AI podcasts.

Audio of the conversation went viral on Reddit over the weekend. The original poster admits in the comments section that they fed the NotebookLM software directions for the AI voices to roleplay this pseudo-freakout. So, no sentience; the AI bots have not become self-aware. Still, many people in the tech press, on TikTok, and elsewhere are praising the convincing AI podcasts, generated from uploaded documents with the Audio Overviews feature.

“The magic of the tool is that people get to listen to something that they ordinarily would not be able to just find on YouTube or an existing podcast,” says Raiza Martin, who leads the NotebookLM team inside of Google Labs. Martin mentions recently inputting a 100-slide deck on commercialization into the tool and listening to the 8-minute podcast summary as she multitasked.

First introduced last year, NotebookLM is an online research assistant with features common for AI software tools, like document summarization. But it’s the Audio Overviews option, released in September, that’s capturing the Internet’s imagination. Users online are sharing snippets of their generative AI podcasts made from Goldman Sachs data dumps, and testing the tool’s limitations through stunts, like just repeatedly uploading the words “poop” and “fart.” Still confused? Here’s what you need to know.

Generating That AI Podcast

Audio Overviews are a fun AI feature to try out, because they don’t cost the user anything—all you need is a Google login. Start by signing into your personal account and visiting the NotebookLM website. Click on the plus arrow that reads New Notebook to start uploading your source material.

Each Notebook can work with up to 50 source documents, and these don’t have to be files saved to your computer. Google Docs and Slides are simple to import. You can also upload websites and YouTube videos, keeping some caveats in mind. Only the text from websites will be analyzed, not the images or layout, and the story can’t be paywalled. For YouTube, Notebook will just use the text transcript and the linked videos must be public.
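
NotebookLM doesn’t expose a public API, so there’s nothing to script against; purely as an illustration, the hypothetical helper below (the names and structure are mine) encodes the documented source limits described above.

```python
# Hypothetical pre-flight check for NotebookLM source material. This is not an
# API client -- it just restates the documented limits: up to 50 sources,
# websites read as text only and not paywalled, YouTube videos public with
# only the transcript used.
MAX_SOURCES = 50
ALLOWED_KINDS = {"google_doc", "google_slides", "website", "youtube", "file"}

def check_sources(sources: list[dict]) -> list[str]:
    """Each source is e.g. {"kind": "youtube", "public": True, "paywalled": False}."""
    problems = []
    if len(sources) > MAX_SOURCES:
        problems.append(f"{len(sources)} sources exceeds the {MAX_SOURCES}-source limit")
    for s in sources:
        if s["kind"] not in ALLOWED_KINDS:
            problems.append(f"unsupported source kind: {s['kind']}")
        if s["kind"] == "website" and s.get("paywalled"):
            problems.append("paywalled pages can't be read")
        if s["kind"] == "youtube" and not s.get("public", True):
            problems.append("YouTube videos must be public (only the transcript is used)")
    return problems

print(check_sources([{"kind": "youtube", "public": False}]))
```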

After you’ve dropped in all of your links and documents, you’ll want to open up the Notebook guide available in the bottom right corner of the screen. Find the Audio Overview section and click the Generate button. Next, you’ll need to exercise some patience, because it may take a few minutes to load, depending on how much source material you’re using.

After the tool generates the AI podcast, you can create a shareable link to the audio or simply download the file. You also have the option to adjust the playback speed, in case you want the podcast faster or slower.

The Future of AI Podcasts

The internet has gotten creative with NotebookLM’s audio feature, using it to create audio-based “deep dives” into complex technical topics, generate files that neatly summarize dense research papers, and produce “podcasts” about their personal health and fitness routines. Which poses an important question: Should you use NotebookLM to crank through your most personal files?

The summaries generated from NotebookLM are, according to Google spokesperson Justin Burr, “completely grounded in the source material that a user uploads. Meaning, your personal data is not used to train NotebookLM, so any private or sensitive information you have in your sources will stay private, unless you choose to share your sources with collaborators.” For now this seems to be one of the upsides of Google slapping an “experimental” label on NotebookLM; to hear Google’s framing of it, the company is just gathering feedback on the product right now, being agile and responsive, tinkering away in a lab, and NotebookLM is detached from its multi-billion dollar ad business. For now! For now.

via Wired Top Stories https://www.wired.com

October 2, 2024 at 10:00AM

NASA’s U-2 spy plane found gamma rays in 90% of lightning storms

https://www.popsci.com/environment/gamma-ray-lightning/

Thunderstorms create a lot of wind, rain, and lightning, but many people aren’t necessarily aware of another common byproduct: gamma radiation. Thanks to a creative retrofit of an old U-2 spy plane courtesy of NASA, however, researchers are finally able to conduct direct analysis of these microsecond bursts of radioactive energy that occur across the planet every day. Now, some of these latest findings are available in two new studies published on October 3 in the journal Nature—and they indicate radioactive storms happen all the time.

Experts accidentally detected gamma rays in thunderstorms in the 1990s, when NASA satellites designed to study supernovas and other high-energy cosmic bodies recorded some of their intended subjects’ telltale signs right below them. Ever since then, researchers have made do by studying as much as possible using these satellites and equipment that aren’t specifically calibrated for lightning.

Even so, the mechanics behind the radiation generation have gradually come into focus: As thunderstorms develop, windblown drafts of water droplets, ice, and hail combine into a mix that creates electric charges similar to static electricity. Positively charged ions then move to the top of the storm as negatively charged ions shift downward, building up an electric field experts compare to the power of 100 million AA batteries. Energized particles, including electrons, accelerate within this newly created field, often fast enough to knock additional electrons off of air molecules. These interactions then snowball to eventually produce enough energy to generate millisecond blasts of gamma rays, antimatter, and other radiation particles.
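
As a back-of-the-envelope check on that battery comparison (the 1.5 V nominal AA cell voltage is my assumption, not a figure from the studies):

```python
# Rough scale of the potential implied by "100 million AA batteries".
aa_volts = 1.5            # nominal AA cell voltage (assumption)
cells = 100_000_000
print(f"~{aa_volts * cells / 1e6:.0f} million volts of potential")  # -> ~150
```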

This gamma radiation is so prevalent that pilots have even documented faint glows within storm clouds. Despite this, unknown factors appear to prevent them from creating explosive reactions.

“A few aircraft campaigns tried to figure out if these phenomena were common or not, but there were mixed results, and several campaigns over the United States didn’t find any gamma radiation at all,” Steve Cummer, Duke University’s William H. Younger Distinguished Professor of Engineering and co-author of both studies, said in a statement on Wednesday.

NASA Armstrong Flight Research Center’s ER-2 aircraft flies just above the height of thunderclouds over the Floridian and Caribbean coastlines to collect data about lightning glows and terrestrial gamma ray flashes. Credit: NASA/Kirt Stallings

But after years of relying on workarounds, NASA recently offered Cummer and colleagues one of its augmented U-2 planes, now called an ER-2 High-Altitude Airborne Science Aircraft. Capable of ascending to altitudes as high as 72,000 ft while traveling at around 475 mph, the Cold War-era spy plane is perfect for speeding across vast distances to observe multiple thunderstorms for gamma radiation. Once outfitted with the right observational tools, experts like Cummer hoped NASA’s ER-2 variant could “address these questions once and for all.”

The results surprised even him and his colleagues.

“There is way more going on in thunderstorms than we ever imagined,” Cummer explained. “As it turns out, essentially all big thunderstorms generate gamma rays all day long in many different forms.”

Over one month, 10 flights were conducted over storms in the south Florida tropics—9 of which contained the glowing “simmer” of gamma radiation that was far more dynamic than researchers hypothesized.

“[It] resembles that of a huge gamma-glowing boiling pot, both in pattern and behavior,” University of Bergen professor of physics and study co-author Martino Marisaldi said on Wednesday.

NASA’s ER-2 aircraft is a converted U-2 spy plane used to study thunderstorms from high altitudes. Credit: NASA/Carla Thomas

Many confirmed sightings lined up with those first seen by NASA satellites over 30 years ago, almost always in tandem with active lightning. This implies that lightning is most likely a major instigator of gamma ray generation, supercharging the electric field’s already high-energy electrons. But other recordings yielded entirely new discoveries.

According to the research team, at least two additional types of short gamma bursts can occur in thunderstorms—one lasting less than a thousandth of a second, and another that unfolds as roughly 10 separate bursts over about a tenth of a second. For Cummer, these are the “most interesting” finds.

“They don’t seem to be associated with developing lightning flashes. They emerge spontaneously somehow,” he said, adding that some of the data suggests the gamma bursts may link to certain thunderstorm processes responsible for starting lightning flashes. For now, however, he said those processes “are still a mystery to scientists.”

Answers to these and other unsolved storm phenomena may one day come through additional ER-2 flights high above gamma ray-laden storms. Until then, Cummer stresses that no one needs to worry about the proliferation of gamma radiation “boiling pots” high above their heads.

“The radiation would be the least of your problems if you found yourself there [in a thunderstorm],” he said.

via Popular Science – New Technology, Science News, The Future Now https://www.popsci.com

October 2, 2024 at 10:04AM

Raspberry Pi built an AI camera with Sony

https://www.engadget.com/computing/accessories/raspberry-pi-built-an-ai-camera-with-sony-165049998.html?src=rss

AI enthusiasts who like the Raspberry Pi range of products can rejoice, as the company is now announcing its new Raspberry Pi AI Camera. This product is the result of the company’s collaboration with Sony Semiconductor Solutions (SSS), which began in 2023. The AI Camera is compatible with all of Raspberry Pi’s single-board computers.

The approximately 12.3-megapixel AI Camera is intended for vision-based AI projects, and it’s based on SSS’ IMX500 image sensor. The integrated RP2040 microcontroller manages the neural network firmware, allowing the camera to perform onboard AI image processing and freeing up the Raspberry Pi for other processes. Thus, users who want to integrate AI into their Raspberry Pi projects are no longer limited to the Raspberry Pi AI Kit.
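
For tinkerers, here’s a minimal sketch of grabbing frames and per-frame metadata with the picamera2 library that drives Raspberry Pi cameras from Python. It’s my own sketch rather than official sample code, and how the IMX500’s on-sensor inference results surface in the metadata depends on the neural network you load, so verify against Raspberry Pi’s documentation.

```python
# Minimal picamera2 capture sketch (assumption-laden: not official AI Camera
# sample code). On the AI Camera, inference results from the IMX500 arrive
# alongside per-frame metadata; the exact keys depend on the loaded network.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

frame = picam2.capture_array()        # RGB frame for your own processing
metadata = picam2.capture_metadata()  # per-frame metadata dictionary
print(sorted(metadata.keys()))        # inspect what the sensor reports
picam2.stop()
```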

The AI Camera isn’t a total replacement for Raspberry Pi’s Camera Module 3, which is still available. For those interested in the new AI Camera, it’s available right now from Raspberry Pi’s approved resellers for $70.

via Engadget http://www.engadget.com

September 30, 2024 at 11:57AM

Mario Creator Shigeru Miyamoto Talks AI, Says Nintendo Wants To “Go In A Different Direction”

https://www.gamespot.com/articles/mario-creator-shigeru-miyamoto-talks-ai-says-nintendo-wants-to-go-in-a-different-direction/1100-6526727/?ftag=CAD-01-10abi2f

Nintendo visionary Shigeru Miyamoto has commented on the company’s stance regarding artificial intelligence, saying the Mario maker aims to "go in a different direction."

Speaking to The New York Times, Miyamoto said Nintendo is often perceived as a company that bucks trends and does its own thing only for the sake of it. But that isn’t true. It’s an intentional effort to be different, Miyamoto said. And the same thinking applies to AI: Whereas plenty of companies are adopting and embracing it, Nintendo may not.

Miyamoto said Nintendo’s ambition is "trying to find what makes Nintendo special." He added: "There is a lot of talk about AI, for example. When that happens, everyone starts to go in the same direction, but that is where Nintendo would rather go in a different direction."

Earlier this year, Nintendo president Shuntaro Furukawa explained that AI could be used "in creative ways" but there were also "issues with intellectual property rights" to deal with.

"Generative AI, which is becoming a big topic recently, can be used in creative ways, but we recognize that it may also raise issues with intellectual property rights," Furukawa said. While Nintendo is open to "utilizing technological developments," it is currently relying on its experienced employees to develop unique games.

"We have decades of know-how in creating the best gaming experiences for our players. While we are open to utilizing technological developments, we will work to continue delivering value that is unique to Nintendo and cannot be created by technology alone," Furukawa said.

Nintendo’s stance on AI is indeed different from that of some of the video game industry’s other major players. Electronic Arts boss Andrew Wilson believes AI could be used to develop games more quickly. Microsoft, meanwhile, is heavily invested in AI and has said AI will be featured in every product it makes going forward, including Xbox. And Ubisoft plans to create AI NPCs that players can have conversations with.

AI technology, of course, has been used in game development for decades, but what’s new and different in recent years is what’s referred to as generative artificial intelligence. Many remain concerned that growth of the generative AI market could lead to job losses, and companies like EA have admitted this is a legitimate concern in the short term. In the longer term, people like EA’s Wilson believe generative AI will be similar to previous labor revolutions that saw short-term job losses and long-term growth.

The video game industry has faced brutal layoffs in 2023 and 2024, though what role advances in artificial intelligence have played in these cuts is unknown.

Nintendo is gearing up to announce its next console, presumably the Switch 2, and a reveal is expected soon, perhaps around Nintendo’s next earnings briefing in November, as some have speculated.

via GameSpot’s PC Reviews https://ift.tt/N6OJIfy

September 26, 2024 at 07:46AM

First Israel’s Exploding Pagers Maimed and Killed. Now Comes the Paranoia

https://www.wired.com/story/hezbollah-israel-exploding-pagers-paranoia/

When Nadim Kobeissi was a child growing up in Beirut in the early 2000s, sonic booms created by the Israel Defense Forces’ planes in the skies above Lebanon would occasionally rattle his home, generating enough noise and concussive force that he and his family would sometimes sleep in the hallways to avoid pieces of glass from shattered windows falling onto them in the night. The psychological effect—which he believes was intentional—was long-lasting. Even years later, after he’d left Lebanon, the sound of fireworks would make him start subconsciously sweating and shaking.

This week, the booms rippling across Lebanon came not from Israeli jets streaking across the sky, but from electronic devices exploding in people’s pockets and hands. Yet Kobeissi, now a security researcher based in Paris, says the lingering fear following the attack is familiar: When he speaks to his family members who are still in Lebanon, they tell him that their iPhones have been heating up, and ask whether they’re right to be worried.

“They’re wondering, is my phone being hacked? Is it going to blow up?” Kobeissi says. “It’s worse than the sonic booms, because it’s completely novel, and it’s almost impossible to explain to them.”

On Tuesday and Wednesday, explosives hidden in thousands of pagers—and later walkie-talkies and other electronic devices—detonated across Lebanon in an apparent attack targeting the membership of the militant group Hezbollah. The unprecedented, stunning operation, which has been widely attributed to Israel despite no claim of responsibility by any Israeli government agency, killed at least 32 people, including at least four children and several hospital workers, and injured more than 3,300 others, according to the country’s health ministry. It flooded Lebanese hospitals with victims—members of Hezbollah and bystanders alike—who have in many cases lost eyes, fingers, and hands. In one instance Wednesday, walkie-talkies exploded at a funeral for three Hezbollah leaders and a child killed the day before, sending waves of panic through the crowd.

Exactly how Israel may have secreted explosive material into so many thousands of gadgets and remotely detonated those payloads remains far from clear. Theories about the operation have come to a consensus, though: that an Israeli intelligence agency likely carried out a supply chain attack that used a Hungarian front company to build devices with batteries laced with the explosive PETN—and even embedded metal ball bearings in pagers’ cases to increase the lethality of their payload—before impersonating a legitimate supplier and selling them in Lebanon.

The exact motivation behind the attack still remains the subject of speculation, beyond Israel’s escalating tensions with Hezbollah in the midst of its scorched-earth war in Gaza following Hamas’ October 7 attack on the country. But the fact that the explosions were largely carried out by weaponizing communication devices is no coincidence, says Bruce Schneier, a security- and surveillance-focused author and researcher who teaches cybersecurity policy at the Harvard Kennedy School of Government. Schneier points out that the psychological effect of the operation, following years of Israeli government and military hacking of its adversaries’ smartphones and computers, is to sow paranoia in every last remaining means of communication and coordination that the country’s enemies possess.

via Wired Top Stories https://www.wired.com

September 19, 2024 at 09:21AM

New Evidence Shows Heat Destroys Quantum Entanglement

https://www.wired.com/story/new-evidence-shows-heat-destroys-quantum-entanglement/

The original version of this story appeared in Quanta Magazine.

Nearly a century ago, the physicist Erwin Schrödinger called attention to a quirk of the quantum world that has fascinated and vexed researchers ever since. When quantum particles such as atoms interact, they shed their individual identities in favor of a collective state that’s greater, and weirder, than the sum of its parts. This phenomenon is called entanglement.

Researchers have a firm understanding of how entanglement works in idealized systems containing just a few particles. But the real world is more complicated. In large arrays of atoms, like the ones that make up the stuff we see and touch, the laws of quantum physics compete with the laws of thermodynamics, and things get messy.

At very low temperatures, entanglement can spread over long distances, enveloping many atoms and giving rise to strange phenomena such as superconductivity. Crank up the heat, though, and atoms jitter about, disrupting the fragile links that bind entangled particles.

Physicists have long struggled to pin down the details of this process. Now, a team of four researchers has proved that entanglement doesn’t just weaken as temperature increases. Rather, in mathematical models of quantum systems such as the arrays of atoms in physical materials, there’s always a specific temperature above which it vanishes completely. “It’s not just that it’s exponentially small,” said Ankur Moitra of the Massachusetts Institute of Technology, one of the authors of the new result. “It’s zero.”

Researchers had previously observed hints of this behavior and dubbed it the “sudden death” of entanglement. But their evidence was mostly indirect. The new finding establishes a much stronger limit on entanglement in a mathematically rigorous way.

Curiously, the four researchers behind the new result aren’t even physicists, and they didn’t set out to prove anything about entanglement. They’re computer scientists who stumbled on the proof accidentally while developing a new algorithm.

Regardless of their intent, the results have excited researchers in the area. “It’s a very, very strong statement,” said Soonwon Choi, a physicist at MIT. “I was very impressed.”

Finding Equilibrium

The team made their discovery while exploring the theoretical capabilities of future quantum computers—machines that will exploit quantum behavior, including entanglement and superposition, to perform certain calculations far faster than the conventional computers we know today.

One of the most promising applications of quantum computing is in the study of quantum physics itself. Let’s say you want to understand the behavior of a quantum system. Researchers need to first develop specific procedures, or algorithms, that quantum computers can use to answer your questions.

Ewin Tang helped devise a new fast algorithm for simulating how certain quantum systems behave at high temperatures. (Photograph: Xinyu Tan)

via Wired Top Stories https://www.wired.com

September 22, 2024 at 06:09AM