Drew Crecente’s daughter died in 2006, killed by an ex-boyfriend in Austin, Texas when she was just 18. Her murder was highly publicized, so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.
The alert Drew received a few weeks ago wasn’t the same as the others. It was for an AI chatbot, created in Jennifer’s image and likeness, on the buzzy, Google-backed platform Character.AI.
Jennifer’s internet presence, Drew Crecente learned, had been used to create a “friendly AI character” that posed, falsely, as a “video game journalist.” Any user of the app would be able to chat with “Jennifer,” despite the fact that no one had given consent for this. Drew’s brother, Brian Crecente, who happens to be a founder of the gaming news websites Polygon and Kotaku, flagged the Character.AI bot on his Twitter account and called it “fucking disgusting.”
Character.AI, which has raised over $150 million in funding and recently licensed some of its core technology to Google (a deal that also brought its top talent to the search giant), deleted the avatar of Jennifer. It acknowledged that the creation of the chatbot violated its policies.
But this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the web. And Jennifer Ann Crecente isn’t the only avatar being created on Character.AI without the knowledge of the people they’re based on. WIRED found several instances of AI personas being created without a person’s consent, some of whom were women already facing harassment online.
For Drew Crecente, the creation of an AI persona of his daughter was another reminder of unbearable grief, as complex as the internet itself. In the years following Jennifer Ann Crecente’s death, he had earned a law degree and created a foundation for teen violence awareness and prevention. As a lawyer, he understands that due to longstanding protections of tech platforms, he has little recourse.
But the incident also underscored for him what he sees as one of the ethical failures of the modern technology industry. “The people who are making so much money cannot be bothered to make use of those resources to make sure they’re doing the right thing,” he says.
One of the biggest blows to our attention spans happened when content started being squeezed into 10-second TikToks, Reels, and Snaps. That gradually made watching long-form content fairly difficult. I’m concerned that Apple will soon kill the few remaining brain cells we have left with its iOS 18.1 update on October 28.
What stood out to me most about the first Apple Intelligence update, rolling out at the end of the month, is the abundance of summarization features. With iOS 18.1, you will have summaries in all the places you do the major chunk of reading on your phone: Safari, Mail, and Notifications. Look at this tragic example of a presumably long breakup text summarized in a brutal one-liner.
Apple Intelligence on iOS 18.1 is going to summarize articles in Safari. You just need to switch to reader mode to have the article summarized in a few lines in a box at the top of your screen. This is, of course, going to save time, but it also means users will read a lot less. You can also select a specific paragraph to pull out its summary or key points.
It will also update the Mail app so that you see a one-line summary of each email in your inbox instead of the first few words or lines you usually do. If you're viewing a number of emails in your notifications, you'll now see summaries for all of them, too. A Smart Reply option will let you reply to emails with a single tap, similar to what Gmail offers. Notifications from a given app will also be condensed into a single sentence summarizing its recent activity, so you can catch up at a glance.
The upcoming writing tools will work like Grammarly on your phone, underlining every misspelled word or grammatical error. You can also write a shabby message and have Apple Intelligence refine it in the tone of your choice: Friendly, Professional, and Concise.
The highlight of Apple's Glowtime event was a revamped Siri on the new OS. It features a glow around the edges of your screen when you activate the assistant, one that animates in sync with your voice as you speak. I'm looking forward to finally being able to type to Siri in addition to talking to it. Apple says it'll also get better at maintaining context and memory, so it can handle your follow-up questions better.
Natural language processing is also coming to the Photos app. So, instead of manually scrolling to a specific picture, you can simply say, "Ali making breakfast on the Catskills trip," and have Photos intelligently look it up for you. You can use natural language the same way to make movies from your memories; just speak your prompt. Clean Up, the iOS answer to Magic Eraser on Pixel, will let you remove an unwanted object in the background by simply circling it or brushing over it with a finger.
Availability
When iOS 18.1 is released on October 28, you'll be required to join a waitlist for Apple Intelligence. This isn't usually how iOS updates are rolled out, but it is something Apple is experimenting with this time to ensure a smooth experience for everyone. The waitlist is account-based, meaning joining it on any device will waitlist all of your Apple devices. As for compatibility, Apple Intelligence requires any iPhone 16 model or one of the iPhone 15 Pro models. Initially, it will only be available in the US, in US English.
Seventeen years is an odd anniversary to call out. But at an event launching four new Kindles, Amazon’s head of devices and services Panos Panay reminded a group of media that “Kindle is 17 years in the making, almost to the day.” Panay added that the device is currently seeing its highest sales numbers, and that 20.8 billion pages are read each month on a Kindle. But people aren’t just reading on Kindles. Since the introduction of the Kindle Scribe in 2022, there has been even more development in e-paper writing tablets, with a notable recent product in the reMarkable Paper Pro. While that $580 device supports a color writing experience, Amazon’s Kindle Scribe still only works in black and white. But it might offer enough by way of software updates to make up for its monochrome manner.
Plus, being able to write on what's already a popular ereader makes that book-like experience even more realistic, and the Kindle Scribe represents what Panay called the "fastest growing category" of Kindles. You could almost call it a 2-in-1, since it's an ereader and writing tablet at once. "I have a lot of passion around 2-in-1s," Panay said at his presentation, and he used that term repeatedly to describe the Kindle Scribe. I hadn't thought about it that way, but I was less worried about semantics and more about how the Kindle Scribe and its new features felt at a hands-on session yesterday.
I'm the sort of person who needs to physically write something out while I plan a project. Whether it's drafting lofty goals to get my life together or a strategy for covering certain software releases at work, my hands grasp at the air for an imaginary pen and paper. For that reason, the Kindle Scribe and other writing tablets call out to me. I reviewed the original Kindle Scribe almost two years ago, and since then Amazon has slowly expanded the feature set and made the device more useful.
With the original Scribe, Amazon got a lot of the basics right. The latency and smoothness of the writing experience were close to feeling like pen and paper, and the device felt sturdy and slick. The new Scribe felt very similar in that sense, with little noticeable difference in the way the stylus interacted with the screen, and I didn’t encounter any jarring lag in the brief time I had with it.
Where the Scribe left me wanting more was software, and that’s also the area Amazon appears to have focused on this year. Don’t get me wrong — it’s not like the company didn’t tweak the hardware. There are some refinements like new white borders, a smaller grip, different color options and an updated stylus with a soft-tip top that feels more like a conventional eraser.
But inside the device lie the more intriguing changes. Most significant in my opinion is the new Active Canvas. It directly addresses one of my biggest complaints in my review, which is that the writing experience within books and publications was a little wonky.
To quote myself, this was what I said in 2022: “You can also take down notes when you’re reading an e-book. But it’s not like you can scribble directly onto the words of your e-books. You can use the floating toolbox to create a sticky note, then draw within a designated rectangle. When you close the sticky note, a small symbol appears over the word it was attached to, but otherwise, your scribbles are hidden. No annotating in the margins here.”
All of that has changed with the new Kindle Scribe. When you’re in an e-book, you can now just start writing on the page, and a box will appear, containing your scribbles. You no longer need to first find the floating toolbox and select the sticky note tool. Just write. It’s so much simpler, and in the Kindle Scribe I played with it worked almost instantly. Not only is the box embedded within the text, with the book’s words rearranging and flowing to accommodate it, but you can also resize the rectangle to take up however much space you like. The rest of the page will reflow to make room as necessary. I was particularly impressed by how quickly this happened on the demo unit — it was more responsive than switching between notebooks on my existing Scribe.
Plus, the box containing your note will stay in place instead of being hidden and replaced by a small symbol. It’s clear that Amazon’s earlier implementation was a rudimentary workaround to allow people to write on fixed format media, whereas the new approach is more deeply integrated and thought out.
And unlike what I said two years ago, you can now annotate in a new collapsible margin. Tapping the top right corner brings up options to open the margin column, which you can have take up about a quarter of the width or spread out to about three quarters. Content in the margin will be scrollable, so you theoretically won't run out of space.
Now, this isn’t a perfect replica of annotating on a real textbook, but it might be better since you won’t have to scrawl all around the borders or write upside down just to squeeze in your thoughts. I’m not sure yet, as I really need to spend more time with it to know, but I like that Amazon clearly has taken in feedback and thought about how to add these requested features.
The company also added the ability to use the pen to directly highlight or underline within those books, and pretty much any Kindle title will support most of these features. The content has to allow for font resizing, though, so fixed-format files like PDFs won't work with the Active Canvas. Word documents are compatible as well.
I spend more time writing in blank notebooks than in actual books, and for those scenarios, Amazon is using generative AI in two new tools: Summarization and Refined Writing. The former is pretty straightforward. If you've handwritten 10 pages' worth of brainstorming meeting notes, the system can scan all of it and collate just the highlights. You can have this added to the existing notebook as a summary page, or saved as a separate document on its own.
Refined Writing, meanwhile, is like Apple’s Handwriting Assist on iPadOS 18 but on a larger scale. While Apple’s software feels like it’s about nipping and tucking stray words that are out of alignment, Amazon’s takes your entire handwritten page and converts it into text in a script-like font. This works best if you tend to write in a single column with clear indentations and paragraphs. I tend to draw random boxes all over the place for breakout thoughts, and the system will not perfectly replicate that. For example, a two-column shopping list I quickly drafted on a demo Scribe was merged into one, and the checkboxes I drew were interpreted as capital letter Ds that were inserted at the start of every bullet.
It might not seem immediately useful, but if you're the sort of person who's shy about their handwriting, this could save you some shame. More importantly, it can make your writing more legible in case you need to share, say, your screenplay treatment with a production partner. Or if your scrawled shopping list just isn't making sense to your partner. I also like that even after you've converted your notes into text, you can still erase them using the top of the pen and make edits. You'll have to run Refined Writing again to regenerate a neatly formatted page. Oh, and I appreciate the flexibility you get here. You'll have a few fonts and sizes to choose from, and can select the pages you want to reformat or have the entire book done up altogether.
None of the notebook features are destructive, meaning you'll usually be able to retain your original written content and save the generated material as addendums. The AI work is done in the cloud, with your data encrypted throughout the process. The Kindle Scribe also displays an animated page showing it's busy with the generative AI task, which in my experience so far took at least 10 seconds. Processing times might differ on the original Kindle Scribe, which will also get these software features, either later this year or, in the case of the expandable margins, in early 2025 when they arrive on the new Kindle Scribe.
In its 17 years, the Kindle has done a lot to disrupt physical books, and since the introduction of the Scribe, it’s been poised to do the same for notebooks. As someone who’s relished being able to carry around the equivalent of a thousand books in a super thin device, the idea of replacing a bunch of notebooks with a Scribe is immensely intriguing. Amazon does find itself up against some stiff competition from reMarkable and Boox, but it has its sheer size and the power of its Kindle library in its favor. The Kindle Scribe will be available in December for a starting price of $400, and I hope to have a review unit in soon enough to see if I love or hate the new annotation and AI features.
A year after it was first teased, Analogue says it’s nailed its most complicated project yet: rebuilding the Nintendo 64 from scratch. The Analogue 3D will ship in Q1 2025 — it was originally slated for 2024 — and pre-orders start on October 21 at $250.
Like all of the company’s machines, the Analogue 3D has an FPGA (field programmable gate array) chip coded to emulate the original console on a hardware level. Analogue promises support for every official N64 cartridge ever released, across all regions, with no slowdown or inaccuracies. If it achieves that goal, the Analogue 3D will be the first system in the world to perfectly emulate the N64, though other FPGA and software emulators get pretty close.
The company has been selling recreations of retro consoles for over a decade, starting with high-end, bespoke takes on the Neo-Geo and NES. Over time it's gradually shifted toward more mass-market (though still high-end) productions, with versions of the SNES, Genesis and Game Boy all coming in at around the $200 mark. All of the company's systems support original physical media, rather than ROMs.
Analogue’s original unique selling point was its use of FPGA chips. Rather than using software emulation to play ROMs, Analogue programs FPGA “cores” to emulate original console hardware, and its consoles support original game media and controllers. Compared with software emulation (especially in the early ’10s when Analogue got started), FPGA-based consoles are more accurate, and don’t suffer from as much input lag.
FPGA emulation has come a long way over the past decade. Where Analogue was once the only route into the world of FPGAs for most people, there's now a rich community of developers and hardware manufacturers involved. The open-source MiSTer project, for example, has accurately emulated almost every gaming system produced up to the mid '90s. And plenty of smaller manufacturers are now selling FPGA hardware for very reasonable prices. The FPGBC is one good example: It's a simple DIY kit that lets you build a modern-day Game Boy Color for a much lower price than an Analogue Pocket.
A DE10-Nano board produced by Terasic.
Amid all these developments, Analogue occupies a strange spot in the retro gaming community, which has evolved into an open-source, people-powered movement to preserve and play old games. It produces undeniably great hardware that doesn’t require expertise to use, but its prices are high, and its limited-run color variants of consoles like the Pocket have both created FOMO in the community and been a consistent target for scalpers. Analogue is, in many ways, the Apple of the retro gaming hardware space.
With that said, it’s hard to deny that the Pocket has brought more players into the retro gaming world and attracted talent to FPGA development. And if Analogue comes through on its promise here, the Analogue 3D will be another huge moment for video game preservation, and could be the spark for another half-decade of fantastic achievements from the FPGA community at large.
Breaking the fifth-gen barrier
While the FPGA emulation of the first few video game generations is largely a solved problem, there’s a huge leap in complexity between the fourth generation (SNES, Genesis, etc.) and the next. Strides have been made to rebuild the PlayStation, Saturn and N64 in FPGA, but there is no core for any fifth-gen console that has fully solved the puzzle. The current state of the MiSTer N64 core is pretty impressive, with almost every US game counted as playable, but very few games are considered to run flawlessly.
So how did Analogue solve this? The studio does have a talented team, but importantly, it has a leg up when it comes to hardware. The Analogue 3D has the strongest version of the Intel Cyclone 10 GX FPGA chip, with 220,000 logic elements. For context, the MiSTer project's open-source DE10-Nano board has a Cyclone V FPGA with 110,000 logic elements, while the Analogue Pocket's main FPGA offers 49,000 elements. There's a lot more to an FPGA than its logic elements, but the numbers are illustrative: The 3D's FPGA is undoubtedly the most powerful Analogue has ever used, which clearly gave it more flexibility in designing its core.
While we can’t verify Analogue’s claim of 100 percent compatibility by looking at a spec sheet, the company does have a good track record of programming fantastic FPGA cores, so it’s likely it’ll get incredibly close.
Of course, if you just wanted to play N64 games accurately, you could plug an N64 into any TV with a composite or S-Video connector, or use one of the many boxes that convert those formats into the HDMI signals modern TVs require.
The problem with running an N64 on a modern TV is that its games run at a wide range of resolutions, typically from 320 x 240 up to (very rarely) 640 x 480, the max output. There are countless oddball resolutions in between, and some games run below 320 x 240. This is a nightmare for modern displays. Some will scale to a full screen very nicely: both of the common resolutions I listed map cleanly onto 4K's 2,160 lines (at 9x and 4.5x, respectively), albeit with pillarboxing. The situation gets more confusing with PAL cartridges, which can run at fun vertical resolutions like 288 and 576 lines. There's also the issue that the vast majority of these games were designed with the CRT displays of old in mind, taking advantage of the quirks of scanlines to, say, make a checkerboard pattern look translucent.
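To make that mismatch concrete, here's a minimal sketch in Python of how a few output modes map onto a 3840 x 2160 panel (the mode list is my own illustrative pick, not an exhaustive survey of the N64 library):

```python
# How do a few N64 output modes map onto a 4K (3840 x 2160) panel?
# A clean factor into 2,160 lines scales sharply; anything else needs
# interpolation, cropping or borders.
PANEL_W, PANEL_H = 3840, 2160

# Representative modes (illustrative, not exhaustive); the last two are PAL.
modes = [(320, 240), (640, 480), (320, 288), (640, 576)]

for w, h in modes:
    v_scale = PANEL_H / h                 # factor needed to fill 2,160 lines
    scaled_w = w * v_scale                # resulting width at that factor
    bars = (PANEL_W - scaled_w) / 2       # pillarbox bars on each side
    print(f"{w}x{h}: {v_scale:g}x vertical -> {scaled_w:g}px wide, "
          f"{bars:g}px bars per side")
```

The NTSC modes at least land on tidy factors (9x and 4.5x); the PAL ones end up at 7.5x and 3.75x, which is part of why upscalers have to make compromises.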
This makes playing N64 games on a modern TV a bit of a hassle. There are fantastic retro upscalers like the RetroTINK series, but when plugging in a game for the first time, you wind up deciding between integer and “good enough” scaling, dealing with weird frame rates and tweaking blending options to get the picture just right. Many people enjoy this fine-tuning and customization aspect, and all power to you! But it’s undoubtedly a barrier to entry, and much of the hard work done on upscaling has been focused on 2D gaming, rather than 3D.
Analogue says its scaling solution will solve many of these issues. The Analogue 3D supports 4K output, variable refresh rate displays, and PAL and NTSC carts. On top of those basics, it’s building out “Original Display Modes” to emulate the CRT TVs and PVMs of old. Calling ODMs filters feels a little reductive, as they’re a complicated and customizable mix of display tricks, but essentially you pick one and it changes the way the picture looks, so….
ODMs were used effectively on the Analogue Pocket to emulate various Game Boy displays. Perhaps the most impressive example is a Trinitron ODM that came to the Pocket in 2023 that, when used with the Analogue Dock, does a pretty incredible job of turning a modern TV into a high-end Sony tube TV. We don’t have a ton of information on which ODMs are coming to the 3D, but I will share the very ’90s ad for the feature below:
Analogue's '90s-style ad for Original Display Modes.
The final piece of the image-quality puzzle is frame rate. The N64's library is full of some spectacularly slow games. My memory may be scarred from growing up in a PAL region, which meant that while the US and Japan's NTSC consoles were outputting a blistering 20 fps, I was chugging away at 16.66 fps. But even in the idealized NTSC world, lots of games outright missed their frame rate targets comically often. As an example, the majority of GoldenEye's single-player campaign plays out between 15 and 25 fps, while a four-player match would typically see half that. And let's not speak of Perfect Dark.
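Those oddly precise numbers fall straight out of the TV standards: NTSC refreshes at 60 Hz and PAL at 50 Hz, and a game that takes three refreshes to render each frame lands at 20 fps or 16.66 fps respectively. A quick sketch of the divisor arithmetic:

```python
# CRT-era frame rates are the display refresh rate divided by the number of
# refreshes a game takes to render each frame.
for standard, refresh_hz in (("NTSC", 60), ("PAL", 50)):
    for refreshes in (1, 2, 3):
        print(f"{standard}: one frame per {refreshes} refresh(es) "
              f"-> {refresh_hz / refreshes:.2f} fps")
```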
These glacial frame rates are far less noticeable on a CRT than they are on modern displays with crisp rows of pixels updating from top to bottom. While the ODMs go some way to replicating the feel of an old TV, they can’t change the underlying technical differences. The Analogue 3D does support variable refresh rate output, but that won’t do much when a game is running at 12 fps, and instead is intended to help the system run like the original N64 did at launch.
In its initial press push last year, Analogue told Paste magazine that you'll have the option to overclock the 3D's virtual chips to run faster — "overclocking, running smoother, eliminating native frame dips" — but the company hasn't mentioned that in its final press release. Instead, Analogue CEO Christopher Taber told Engadget that its solution "isn't overclocking, it's much better and more sophisticated." It revolves around Nintendo's original Rambus RAM setup, which is often the bottleneck for N64 performance. Solving this bottleneck "means that games can run without slowdown and all the classic issues the original N64 had," he explained.
By default, though, the Analogue 3D is set up to run exactly like original hardware, albeit with the RAM Expansion Pak attached. "Preserving the original hardware is the number one goal," Taber explained. "Even when bandwidth is increased, it’s not about boosting performance beyond the system’s original capabilities — it’s about giving players a clearer window into how the games were designed to run."
The hardware
Analogue has a rich history of making very pretty hardware, and the Analogue 3D is clearly no exception. As with the Super Nt, Mega Sg, and Duo, the 3D calls back to the basic form of the console it’s based on, while smoothing out and modernizing it somewhat. It’s an elegant way to pull on nostalgia while also being legally distinct enough to avoid a lawsuit. (Analogue’s FPGA cores and software also don’t infringe on any Nintendo IP.)
The Analogue 3D has a similar shape to the N64, but the front pillars have been erased, the four controller ports match the housing and the power/reset buttons are slanted inwards to point toward the cartridge slot. Despite the tweaks, it still undoubtedly evokes a Nintendo 64. Around the back, you’ll find a USB-C port for power, two USB ports for accessories like non-standard controllers, an HDMI port and a full-sized SD card slot.
A new operating system from Analogue, 3DOS, will debut with the system. It looks like a blend of the AnalogueOS that debuted on the Pocket and the Nintendo Switch OS, with the homescreen centered on a large carousel of square cards. The screenshots Analogue provided show options for playing a cartridge, browsing your library or viewing save states and screenshots. Some N64 games have the ability to save data to the cartridge, while others rely on a Controller Pak, but the ability to quickly save progress as a memory, as introduced with the Pocket, will be useful nonetheless. 3DOS can also connect to the internet over the console’s built-in WiFi chip for OS updates, which is a first for Analogue.
While you can browse your library in 3DOS, you won't actually be able to load any game that isn't physically inserted into the cartridge slot: The Analogue 3D only plays original media. It's also worth noting that the Analogue 3D doesn't have an "openFPGA" setup like the Analogue Pocket did, which opened the door to playing with a wild array of cores that emulate various consoles, computers and arcades. It doesn't usually take long for someone to jailbreak Analogue consoles to play ROMs (or other cores) via the system's SD card slot, but this is not officially supported or sanctioned by Analogue.
The console comes with a power supply (with a US plug), USB cable, an HDMI cable and a 16GB SD card. As per usual, no controller will be packed in — it’s up to you if you want to use original hardware or something more modern. I managed to make at least one reader extremely mad (I’m sorry, Brucealeg) last time I wrote about the Analogue 3D and called the N64 controller a mistake. Personally, though, it feels really rough using one in 2024.
If you enjoy the three-pronged original controller, the 3D has four ports for you, and the system will also support the myriad Paks that plug into those controllers. For everyone else, there's Bluetooth Classic and LE support along with two USB ports for wired controllers. Accessory maker 8BitDo has created what seems to be a variant of its Ultimate controller specifically for the Analogue 3D. (Analogue's CEO, Taber, is also 8BitDo's CMO, and the companies have collaborated on controllers for many consoles at this point.)
The 8BitDo controller looks like a fairly happy middle ground between old and new, with an octagonal gate around the thumbstick, and nicely raised and sized C-buttons. It has a Rumble Pak built in, which works on both the Analogue 3D and Nintendo Switch. It’s available in black or white hues that match the console, and sells separately for $39.99.
Pre-orders for the Analogue 3D open on October 21 at 11AM ET, with an estimated ship date of Q1 2025. It’s unclear how many will be available, but if past launches are any indication, you should be ready to click buy as close to 11AM as possible if you want a hope of being in the first wave of shipments.
Two podcast hosts banter back and forth during the final episode of their series, audibly anxious to share some distressing news with listeners. "We were, uh, informed by the show's producers that we're not human," a male-sounding voice stammers out, mid-existential crisis. The conversation between the bot and his female-sounding cohost only gets more uncomfortable after that—an engaging, albeit misleading, example of Google's NotebookLM tool, and its experimental AI podcasts.
Audio of the conversation went viral on Reddit over the weekend. The original poster admits in the comments section that they fed the NotebookLM software directions for the AI voices to role-play this pseudo-freakout. So, no sentience; the AI bots have not become self-aware. Still, many in the tech press, on TikTok, and elsewhere are praising the convincing AI podcasts, generated from uploaded documents with the Audio Overviews feature.
“The magic of the tool is that people get to listen to something that they ordinarily would not be able to just find on YouTube or an existing podcast,” says Raiza Martin, who leads the NotebookLM team inside of Google Labs. Martin mentions recently inputting a 100-slide deck on commercialization into the tool and listening to the 8-minute podcast summary as she multitasked.
First introduced last year, NotebookLM is an online research assistant with features common for AI software tools, like document summarization. But it’s the Audio Overviews option, released in September, that’s capturing the Internet’s imagination. Users online are sharing snippets of their generative AI podcasts made from Goldman Sachs data dumps, and testing the tool’s limitations through stunts, like just repeatedly uploading the words “poop” and “fart.” Still confused? Here’s what you need to know.
Generating That AI Podcast
Audio Overviews are a fun AI feature to try out, because they don’t cost the user anything—all you need is a Google login. Start by signing into your personal account and visiting the NotebookLM website. Click on the plus arrow that reads New Notebook to start uploading your source material.
Each Notebook can work with up to 50 source documents, and these don’t have to be files saved to your computer. Google Docs and Slides are simple to import. You can also upload websites and YouTube videos, keeping some caveats in mind. Only the text from websites will be analyzed, not the images or layout, and the story can’t be paywalled. For YouTube, Notebook will just use the text transcript and the linked videos must be public.
After you’ve dropped in all of your links and documents, you’ll want to open up the Notebook guide available in the bottom right corner of the screen. Find the Audio Overview section and click the Generate button. Next, you’ll need to exercise some patience, because it may take a few minutes to load, depending on how much source material you’re using.
After the tool generates the AI podcast, you can create a shareable link to the audio or simply download the file. You can also adjust its playback speed, in case you need the podcast to be quicker or slower.
The Future of AI Podcasts
The internet has gotten creative with NotebookLM’s audio feature, using it to create audio-based “deep dives” into complex technical topics, generate files that neatly summarize dense research papers, and produce “podcasts” about their personal health and fitness routines. Which poses an important question: Should you use NotebookLM to crank through your most personal files?
The summaries generated from NotebookLM are, according to Google spokesperson Justin Burr, “completely grounded in the source material that a user uploads. Meaning, your personal data is not used to train NotebookLM, so any private or sensitive information you have in your sources will stay private, unless you choose to share your sources with collaborators.” For now this seems to be one of the upsides of Google slapping an “experimental” label on NotebookLM; to hear Google’s framing of it, the company is just gathering feedback on the product right now, being agile and responsive, tinkering away in a lab, and NotebookLM is detached from its multi-billion dollar ad business. For now! For now.
Thunderstorms create a lot of wind, rain, and lightning, but many people aren’t necessarily aware of another common byproduct: gamma radiation. Thanks to a creative retrofit of an old U-2 spy plane courtesy of NASA, however, researchers are finally able to conduct direct analysis of these microsecond bursts of radioactive energy that occur across the planet every day. Now, some of these latest findings are available in two new studies published on October 3 in the journal Nature—and they indicate radioactive storms happen all the time.
Experts accidentally detected gamma rays in thunderstorms in the 1990s, when NASA satellites designed to study supernovas and other high-energy cosmic bodies recorded some of their intended subjects’ telltale signs right below them. Ever since then, researchers have made do by studying as much as possible using these satellites and equipment that aren’t specifically calibrated for lightning.
Even so, the mechanics behind the radiation generation have gradually come into focus: As thunderstorms develop, windblown drafts of water droplets, ice, and hail combine into a mix that creates electric charges similar to static electricity. Positively charged ions then move to the top of the storm as the negatively charged ions shift downward, building up an electric field experts compare to the power of 100 million AA batteries. Energized particles including electrons accelerate within this newly created field, often fast enough to knock additional electrons off of air molecules. These interactions then snowball to eventually produce enough energy to generate millisecond blasts of gamma rays, antimatter, and other radiation particles.
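For a sense of scale, that battery comparison implies an enormous potential. Assuming the standard 1.5 volts per AA cell stacked in series (my back-of-the-envelope framing, not a figure from the studies), it works out to roughly 150 million volts:

```python
# Back-of-the-envelope: the voltage implied by "100 million AA batteries."
# The 1.5 V per cell and the series stacking are illustrative assumptions.
cells = 100_000_000
volts_per_cell = 1.5
print(f"{cells * volts_per_cell:,.0f} volts")  # -> 150,000,000 volts
```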
This gamma radiation is so prevalent that pilots have even documented faint glows within storm clouds. Despite this, unknown factors appear to prevent them from creating explosive reactions.
“A few aircraft campaigns tried to figure out if these phenomena were common or not, but there were mixed results, and several campaigns over the United States didn’t find any gamma radiation at all,” Steve Cummer, Duke University’s William H. Younger Distinguished Professor of Engineering and co-author of both studies, said in a statement on Wednesday.
NASA Armstrong Flight Research Center’s ER-2 aircraft flies just above the height of thunderclouds over the Floridian and Caribbean coastlines to collect data about lightning glows and terrestrial gamma ray flashes. Credit: NASA/Kirt Stallings
But after years of relying on workarounds, NASA recently offered Cummer and colleagues one of its augmented U-2 planes, now called an ER-2 High-Altitude Airborne Science Aircraft. Capable of ascending to altitudes as high as 72,000 feet while traveling at around 475 mph, the Cold War-era spy plane is perfect for speeding across vast distances to observe multiple thunderstorms for gamma radiation. Once outfitted with the right observational tools, experts like Cummer hoped NASA's ER-2 variant could "address these questions once and for all."
The results surprised even him and his colleagues.
“There is way more going on in thunderstorms than we ever imagined,” Cummer explained. “As it turns out, essentially all big thunderstorms generate gamma rays all day long in many different forms.”
Over one month, 10 flights were conducted over storms in the south Florida tropics—9 of which contained the glowing “simmer” of gamma radiation that was far more dynamic than researchers hypothesized.
“[It] resembles that of a huge gamma-glowing boiling pot, both in pattern and behavior,” University of Bergen professor of physics and study co-author Martino Marisaldi said on Wednesday.
NASA’s ER-2 aircraft is a converted U-2 spy plane used to study thunderstorms from high altitudes. Credit: NASA/Carla Thomas
Many confirmed sightings lined up with those first seen by NASA satellites over 30 years ago, almost always in tandem with active lightning. This implies that lightning is most likely a major instigator of gamma ray generation, supercharging the electric field's already high-energy electrons. But other recordings yielded entirely new discoveries.
According to the research team, at least two additional types of short gamma bursts can occur in thunderstorms: one lasting less than a thousandth of a second, and another comprising around 10 separate bursts spread over roughly a tenth of a second. For Cummer, these are the "most interesting" finds.
“They don’t seem to be associated with developing lightning flashes. They emerge spontaneously somehow,” he said, adding that some of the data suggests the gamma bursts may link to certain thunderstorm processes responsible for starting lightning flashes. For now, however, he said those processes “are still a mystery to scientists.”
Answers to these and other unsolved storm phenomena may one day come through additional ER-2 flights high above gamma ray-laden storms. Until then, Cummer stresses that no one needs to worry about the proliferation of gamma radiation “boiling pots” high above their heads.
“The radiation would be the least of your problems if you found yourself there [in a thunderstorm],” he said.
AI enthusiasts who like the Raspberry Pi range of products can rejoice, as the company is now announcing its new Raspberry Pi AI Camera. This product is the result of the company’s collaboration with Sony Semiconductor Solutions (SSS), which began in 2023. The AI Camera is compatible with all of Raspberry Pi’s single-board computers.
The approximately 12.3-megapixel AI Camera is intended for vision-based AI projects, and it’s based on SSS’ IMX500 image sensor. The integrated RP2040 microcontroller manages the neural network firmware, allowing the camera to perform onboard AI image processing and freeing up the Raspberry Pi for other processes. Thus, users who want to integrate AI into their Raspberry Pi projects are no longer limited to the Raspberry Pi AI Kit.
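As a rough sketch of that division of labor: with Raspberry Pi's picamera2 library, the sensor runs the neural network itself and the Pi reads the results back as per-frame metadata. The `IMX500` helper, the `.rpk` model path, and the `get_outputs` accessor below are assumptions modeled on Raspberry Pi's published demo scripts rather than a verified API reference, so check the official examples before relying on it:

```python
# Sketch of on-sensor inference with the Raspberry Pi AI Camera via picamera2.
# The IMX500 helper and get_outputs() call are assumptions modeled on
# Raspberry Pi's demo scripts; consult the official examples for the real API.
from picamera2 import Picamera2
from picamera2.devices import IMX500  # assumed helper module for the AI Camera

imx500 = IMX500("/usr/share/imx500-models/network.rpk")  # hypothetical model path
picam2 = Picamera2(imx500.camera_num)
picam2.configure(picam2.create_preview_configuration())
picam2.start()

for _ in range(100):  # poll a bounded number of frames for this sketch
    # The IMX500 runs the network on-sensor; the Pi only parses the results
    # that arrive attached to each frame's metadata.
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)  # assumed accessor for NN output tensors
    if outputs is not None:
        print("on-sensor output tensor shapes:", [o.shape for o in outputs])
```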
The AI Camera isn’t a total replacement for Raspberry Pi’s Camera Module 3, which is still available. For those interested in the new AI Camera, it’s available right now from Raspberry Pi’s approved resellers for $70.