To Make Nuclear Fusion a Reliable Energy Source, We Will Need Heat- and Radiation-Resilient Materials

https://www.discovermagazine.com/the-sciences/to-make-nuclear-fusion-a-reliable-energy-source-we-will-need-heat-and

Fusion energy has the potential to be an effective clean energy source, as its reactions generate incredibly large amounts of energy. Fusion reactors aim to reproduce on Earth what happens in the core of the Sun, where very light elements merge and release energy in the process. Engineers can harness this energy to heat water and generate electricity through a steam turbine, but the path to fusion isn’t completely straightforward.

Controlled nuclear fusion has several advantages over other power sources for generating electricity. For one, the fusion reaction itself doesn’t produce any carbon dioxide. There is no risk of meltdown, and the reaction doesn’t generate any long-lived radioactive waste.

I’m a nuclear engineer who studies materials that scientists could use in fusion reactors. Fusion takes place at incredibly high temperatures. So to one day make fusion a feasible energy source, reactors will need to be built with materials that can survive the heat and irradiation generated by fusion reactions.

(Credit: xia yuan/Moment via Getty Images)
3D rendering of the inside of a fusion reactor chamber.

Fusion Material Challenges

Several types of elements can merge during a fusion reaction. The one most scientists prefer is deuterium plus tritium. These two elements have the highest likelihood of fusing at temperatures that a reactor can maintain. This reaction generates a helium atom and a neutron, which carries most of the energy from the reaction.

(Credit: Sophie Blondel/UT Knoxville)
In the D-T fusion reaction, two hydrogen isotopes, deuterium and tritium, fuse and produce a helium atom and a high-energy neutron.
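The energy split in this reaction is well established: D-T fusion releases 17.6 MeV in total, and the neutron carries roughly 80 percent of it, which is why the surrounding structure (rather than the plasma) absorbs most of the output:

```latex
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})
```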

Humans have successfully generated fusion reactions on Earth since 1952 – some even in their garages. But the trick now is to make the process worthwhile: you need to get more energy out than you put in to initiate the reaction.

Fusion reactions happen in a very hot plasma, which is a state of matter similar to gas but made of charged particles. The plasma needs to stay extremely hot – over 100 million degrees Celsius – and condensed for the duration of the reaction.

To keep the plasma hot and condensed and create a reaction that can keep going, you need special materials making up the reactor walls. You also need a cheap and reliable source of fuel.

While deuterium is very common and obtained from water, tritium is very rare. A 1-gigawatt fusion reactor is expected to burn 56 kilograms of tritium annually. However, the world has only about 25 kilograms of tritium commercially available.
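A quick back-of-the-envelope check, using only the figures quoted above, shows how tight the supply picture is: the entire commercial inventory would feed a single 1-gigawatt plant for well under half a year.

```python
# Back-of-the-envelope check of the article's figures: how long the
# world's commercial tritium inventory would feed one 1 GW fusion plant.
burn_rate_kg_per_year = 56   # projected annual tritium burn, per the article
inventory_kg = 25            # tritium commercially available worldwide

years_of_supply = inventory_kg / burn_rate_kg_per_year
months_of_supply = years_of_supply * 12   # roughly five and a half months
```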

Researchers need to find alternative sources for tritium before fusion energy can get off the ground. One option is to have each reactor generate its own tritium through a system called the breeding blanket.

The breeding blanket makes up the first layer of the plasma chamber walls and contains lithium that reacts with the neutrons generated in the fusion reaction to produce tritium. The blanket also converts the energy carried by these neutrons to heat.
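The principal breeding reaction uses the lithium-6 isotope, which absorbs a neutron and splits into tritium and helium while releasing additional heat:

```latex
{}^{6}\mathrm{Li} + n \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}
```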

The fusion reaction chamber at ITER will electrify the plasma.


Fusion devices also need a divertor, which extracts the heat and ash produced in the reaction. The divertor helps keep the reactions going for longer.

These materials will be exposed to unprecedented levels of heat and particle bombardment. And there aren’t currently any experimental facilities to reproduce these conditions and test materials in a real-world scenario. So, the focus of my research is to bridge this gap using models and computer simulations.

From the Atom to Full Device

My colleagues and I work on producing tools that can predict how the materials in a fusion reactor erode, and how their properties change when they are exposed to extreme heat and lots of particle radiation.

As they get irradiated, defects can form and grow in these materials, which affect how well they react to heat and stress. In the future, we hope that government agencies and private companies can use these tools to design fusion power plants.

Our approach, called multiscale modeling, consists of looking at the physics in these materials over different time and length scales with a range of computational models.

We first study the phenomena happening in these materials at the atomic scale through accurate but expensive simulations. For instance, one simulation might examine how hydrogen moves within a material during irradiation.

From these simulations, we look at properties such as diffusivity, which tells us how much the hydrogen can spread throughout the material.

We can integrate the information from these atomic level simulations into less expensive simulations, which look at how the materials react at a larger scale. These larger-scale simulations are less expensive because they model the materials as a continuum instead of considering every single atom.

The atomic-scale simulations can take weeks to run on a supercomputer, while the continuum ones take only a few hours.
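The continuum approach described above can be illustrated with a minimal sketch: a one-dimensional diffusion solve, where hydrogen held at a fixed concentration on one face of a material spreads inward. The diffusivity, grid spacing, and time step here are invented for illustration, not values from the research.

```python
# Minimal sketch of a continuum-scale model: explicit finite-difference
# solution of 1D diffusion. All parameter values are illustrative.
def diffuse_1d(c, d, dx, dt, steps):
    """Advance a concentration profile c by `steps` explicit Euler steps."""
    r = d * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    for _ in range(steps):
        new = c[:]                     # boundaries stay fixed
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        c = new
    return c

# Hydrogen held at fixed concentration on the left face diffuses inward.
profile = [1.0] + [0.0] * 20
profile = diffuse_1d(profile, d=1.0, dx=1.0, dt=0.4, steps=50)
```

Comparing a profile like this against measured permeation (how much hydrogen reaches the far side) is exactly the kind of model-versus-experiment check the article describes.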

In the multiscale modeling approach, researchers use atom-level simulations, then take the parameters they find and apply them to larger-scale simulations, and then compare their results with experimental results. If the results don’t match, they go back to the atomic scale to study missing mechanisms. Sophie Blondel/UT Knoxville, adapted from https://ift.tt/IsCbDBe

All this modeling work happening on computers is then compared with experimental results obtained in laboratories.

For example, if one side of the material has hydrogen gas, we want to know how much hydrogen leaks to the other side of the material. If the model and the experimental results match, we can have confidence in the model and use it to predict the behavior of the same material under the conditions we would expect in a fusion device.

If they don’t match, we go back to the atomic-scale simulations to investigate what we missed.

Additionally, we can couple the larger-scale material model to plasma models. These models can tell us which parts of a fusion reactor will be the hottest or have the most particle bombardment. From there, we can evaluate more scenarios.

For instance, if too much hydrogen leaks through the material during the operation of the fusion reactor, we could recommend making the material thicker in certain places or adding something to trap the hydrogen.

Designing New Materials

As the quest for commercial fusion energy continues, scientists will need to engineer more resilient materials. The field of possibilities is daunting: engineers can combine multiple elements in countless ways.

You could combine two elements to create a new material, but how do you know what the right proportion is of each element? And what if you want to try mixing five or more elements together? It would take way too long to try to run our simulations for all of these possibilities.

Thankfully, artificial intelligence is here to assist. By combining experimental and simulation results, analytical AI can recommend combinations that are most likely to have the properties we’re looking for, such as heat and stress resistance.

The aim is to reduce the number of materials that an engineer would have to produce and test experimentally to save time and money.
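The screening idea can be sketched very simply: score each candidate material with a surrogate model fitted to prior experimental and simulation results, then send only the top-ranked candidates on to expensive testing. Every name, number, and the scoring function below is invented purely for illustration.

```python
# Highly simplified sketch of AI-assisted materials screening.
# Candidate alloys and the toy scoring function are hypothetical.
def rank_candidates(candidates, score):
    """Return candidates sorted best-first by a surrogate score."""
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "W-5Re",  "melt_K": 3400, "ductility": 0.2},
    {"name": "W-10Ta", "melt_K": 3300, "ductility": 0.4},
    {"name": "Fe-9Cr", "melt_K": 1800, "ductility": 0.8},
]

# Toy score trading off heat resistance against ductility.
best = rank_candidates(candidates, lambda c: c["melt_K"] / 3500 + c["ductility"])[0]
```

In practice the surrogate would be a trained model rather than a hand-written formula, but the workflow is the same: rank cheaply, test expensively only at the top.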


Sophie Blondel is a Research Assistant Professor of Nuclear Engineering at the University of Tennessee. This article is republished from The Conversation under a Creative Commons license. Read the original article.

via Discover Main Feed https://ift.tt/JzqQC4V

October 26, 2024 at 09:18AM

The Analogue 3D Plays N64 Games In 4K For $250

https://kotaku.com/analogue-3d-nintendo-64-price-release-date-pre-order-1851673898

Analogue 3D is the retro console maker’s take on the Nintendo 64. For $250 it will play your old The Legend of Zelda: Ocarina of Time and Banjo-Kazooie cartridges in 4K. It was supposed to arrive this year but has unfortunately been delayed until 2025.

Boutique retro console maker Analogue, which teased the console last year, is billing it as “The unmistakable Signature and Soul of the CRT. On your HDTV. In 4K.” That’s a tall order, but selling retro magic has been the company’s ethos for years, beginning with the SNES-inspired Super NT, and then the Analogue Pocket, based on the Game Boy and Game Boy Advance.

Like those systems, the Analogue 3D uses FPGA technology to recreate the N64 at the hardware level rather than relying on emulation and will be compatible with all N64 cartridges (you’ll need to own the physical games rather than use ROMs). It also includes four original controller ports for your old, tri-pronged paddles, and comes with a new operating system that should make navigating menus and taking screenshots a much breezier experience.

“Analogue has developed the first, ultimate solution to play N64 today in total fucking glory,” CEO Christopher Taber told Kotaku last year. “Despite all of the amazing love, effort and work that developers have put into the software emulation of the N64—it results in an experience that feels unjustly and wrongly aged—generally ‘off.’ Flatly not a good experience. Coupled with Original Display Modes recreating not just the video game system, but the other critical contextual pieces (analog televisions, CRTs)—it’s difficult to overstate how fucking mint Analogue 3D is.”

While $250 covers the console itself, a wireless 8BitDo 64 controller will be sold separately for $40. The N64 was home to GoldenEye 007, Mario Kart 64, the original Super Smash Bros., and tons of Mario Party games. Analogue touts it as one of the best retro couch co-op consoles of all time. Hopefully the 8BitDo 64 controller can support that dream with limited lag for anyone who doesn’t have their old N64 controllers lying around, or, more likely, busted the center joysticks on them long ago.

The Analogue 3D comes in both white and black, and pre-orders will be available starting on October 21 at 11:00 a.m. ET. Taber previously told Kotaku that since it’s not a limited-edition product, “we will produce to meet demand and continue to make Analogue 3D available into the foreseeable future.” It’s a shame it won’t make it out in time for the holidays, but the company says orders will start shipping out in the first quarter of 2025.


via Kotaku https://kotaku.com

October 16, 2024 at 11:07AM

Anyone Can Turn You Into an AI Chatbot. There’s Little You Can Do to Stop Them

https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/

Drew Crecente’s daughter died in 2006, killed by an ex-boyfriend in Austin, Texas when she was just 18. Her murder was highly publicized, so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.

The alert Drew received a few weeks ago wasn’t the same as the others. It was for an AI chatbot, created in Jennifer’s image and likeness, on the buzzy, Google-backed platform Character.AI.


Jennifer’s internet presence, Drew Crecente learned, had been used to create a “friendly AI character” that posed, falsely, as a “video game journalist.” Any user of the app would be able to chat with “Jennifer,” despite the fact that no one had given consent for this. Drew’s brother, Brian Crecente, who happens to be a founder of the gaming news websites Polygon and Kotaku, flagged the Character.AI bot on his Twitter account and called it “fucking disgusting.”

Character.AI, which has raised over $150 million in funding and recently licensed some of its core technology and top talent to Google, deleted the avatar of Jennifer. It acknowledged that the creation of the chatbot violated its policies.

But this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the web. And Jennifer Ann Crecente isn’t the only avatar being created on Character.AI without the knowledge of the people they’re based on. WIRED found several instances of AI personas being created without a person’s consent, some of whom were women already facing harassment online.

For Drew Crecente, the creation of an AI persona of his daughter was another reminder of unbearable grief, as complex as the internet itself. In the years following Jennifer Ann Crecente’s death, he had earned a law degree and created a foundation for teen violence awareness and prevention. As a lawyer, he understands that due to longstanding protections of tech platforms, he has little recourse.

But the incident also underscored for him what he sees as one of the ethical failures of the modern technology industry. “The people who are making so much money cannot be bothered to make use of those resources to make sure they’re doing the right thing,” he says.

via Wired Top Stories https://www.wired.com

October 15, 2024 at 03:33PM

Apple Intelligence Is Set to Make Your Attention Span Even Shorter

https://gizmodo.com/apple-intelligence-is-set-to-make-your-attention-span-even-shorter-2000510763

One of the biggest blows to our attention spans happened when content started being squeezed into 10-second TikToks, Reels, and Snaps. That gradually made watching long-form content fairly difficult. I’m concerned that Apple will soon kill the few remaining brain cells we have left with its iOS 18.1 update on October 28.

The abundance of summarization features stood out to me about the first Apple Intelligence update rolling out at the end of the month. With iOS 18.1, you will have summaries in all the places you do the major chunk of reading on your phone: Safari, Mail, and Notifications. Look at this tragic example of a presumably long breakup text summarized in a brutal one-liner.

Apple Intelligence on iOS 18.1 is going to summarize articles in Safari. You just need to switch to Reader mode to get the article summarized in a few lines in a box at the top of your screen. This will, of course, save time, but it also means users read a lot less. You can also select a paragraph to pull out its summary or key points.

It will also update the Mail app so that you see a one-line summary of your email in your inbox instead of the first few words/lines you usually do. If you’re viewing a number of emails in your notifications, you’ll now see summaries for all of them, too. A Smart Reply option will let you reply to the emails with a single tap, similar to what Gmail offers. Your notifications will also appear in a single sentence summarizing a specific app’s recent activity so you can catch up with a single glance.

The upcoming writing tools will work like Grammarly on your phone, underlining every misspelled word or grammatical error. You can also write a shabby message and have Apple Intelligence refine it in the tone of your choice: Friendly, Professional, and Concise.

The highlight of Apple’s Glowtime event was a revamped Siri on the new OS. This one will feature a glow on the corners of your screen when you activate the assistant and animate in sync with your voice as you speak to it. I’m looking forward to finally being able to type to Siri in addition to talking to it. Apple says it’ll also get better at understanding memory and context so your follow-up questions can be handled better.

Natural language processing is also coming to the Photos app. So, instead of manually scrolling to a specific picture, you can simply say, “Ali making breakfast on the Catskills trip,” and have Photos intelligently look it up for you. You can use the same natural-language approach to make movies from your memories: just speak your prompt. Clean Up in Photos, the iOS version of Pixel’s Magic Eraser, will let you remove an unwanted object in the background by simply circling or finger-brushing it.

Availability

When iOS 18.1 is released on October 28, you’ll be required to join a waitlist. This isn’t how iOS updates usually roll out, but it is something Apple is experimenting with this time to ensure a smooth experience for everyone. The waitlist is account-based, meaning joining it on any device will waitlist all your Apple devices. Apple Intelligence requires an iPhone 16 model or one of the iPhone 15 Pro models, and will initially be available only in the US and only in English.

via Gizmodo https://gizmodo.com/

October 15, 2024 at 02:39PM

Kindle Scribe hands-on: You can scribble on your books

https://www.engadget.com/mobile/tablets/kindle-scribe-hands-on-you-can-scribble-on-your-books-130043335.html?src=rss

Seventeen years is an odd anniversary to call out. But at an event launching four new Kindles, Amazon’s head of devices and services Panos Panay reminded a group of media that “Kindle is 17 years in the making, almost to the day.” Panay added that the device is currently seeing its highest sales numbers, and that 20.8 billion pages are read each month on a Kindle. But people aren’t just reading on Kindles. Since the introduction of the Kindle Scribe in 2022, there has been even more development in e-paper writing tablets, with a notable recent product in the reMarkable Paper Pro. While that $580 device supports a color writing experience, Amazon’s Kindle Scribe still only works in black and white. But it might offer enough by way of software updates to make up for its monochrome manner.

Plus, being able to write on what’s already a popular ereader makes that book-like experience even more realistic, and the Kindle Scribe represents what Panay called the “fastest growing category” of Kindles. You could almost call it a 2-in-1, since it’s an ereader and writing tablet at once. “I have a lot of passion around 2-in-1s,” Panay said at his presentation, and he used that term repeatedly to describe the Kindle Scribe. I hadn’t thought about it that way, but I was less worried about semantics and more about how the Kindle Scribe and its new features felt at a hands-on session yesterday.

I’m the sort of person that needs to physically write out something while I plan a project. Whether it’s lofty goals to get my life together or draft up a strategy for covering certain software releases at work, my hands grasp at the air for an imaginary pen and paper. For that reason, the Kindle Scribe and other writing tablets call out to me. I reviewed the original Kindle Scribe almost two years ago and since then Amazon has slowly expanded the feature set and made the device more useful.

With the original Scribe, Amazon got a lot of the basics right. The latency and smoothness of the writing experience were close to feeling like pen and paper, and the device felt sturdy and slick. The new Scribe felt very similar in that sense, with little noticeable difference in the way the stylus interacted with the screen, and I didn’t encounter any jarring lag in the brief time I had with it.

Where the Scribe left me wanting more was software, and that’s also the area Amazon appears to have focused on this year. Don’t get me wrong — it’s not like the company didn’t tweak the hardware. There are some refinements like new white borders, a smaller grip, different color options and an updated stylus with a soft-tip top that feels more like a conventional eraser.

The Amazon Kindle Scribe on a table, with a hand holding a pen to its screen, erasing some words.
Cherlynn Low for Engadget

But inside the device lie the more intriguing changes. Most significant in my opinion is the new Active Canvas. It directly addresses one of my biggest complaints in my review, which is that the writing experience within books and publications was a little wonky.

To quote myself, this was what I said in 2022: “You can also take down notes when you’re reading an e-book. But it’s not like you can scribble directly onto the words of your e-books. You can use the floating toolbox to create a sticky note, then draw within a designated rectangle. When you close the sticky note, a small symbol appears over the word it was attached to, but otherwise, your scribbles are hidden. No annotating in the margins here.”

All of that has changed with the new Kindle Scribe. When you’re in an e-book, you can now just start writing on the page, and a box will appear, containing your scribbles. You no longer need to first find the floating toolbox and select the sticky note tool. Just write. It’s so much simpler, and in the Kindle Scribe I played with it worked almost instantly. Not only is the box embedded within the text, with the book’s words rearranging and flowing to accommodate it, but you can also resize the rectangle to take up however much space you like. The rest of the page will reflow to make room as necessary. I was particularly impressed by how quickly this happened on the demo unit — it was more responsive than switching between notebooks on my existing Scribe.

Plus, the box containing your note will stay in place instead of being hidden and replaced by a small symbol. It’s clear that Amazon’s earlier implementation was a rudimentary workaround to allow people to write on fixed format media, whereas the new approach is more deeply integrated and thought out.

And unlike what I said two years ago, you can now annotate in a new collapsible margin. Tapping the top right corner brings up options to pull up the column, and you can choose from having it take up about a quarter of the width or spread out to about three quarters. Content in the margin will be scrollable, so you theoretically won’t run out of space.

The Amazon Kindle Scribe on a shelf with its screen facing out and the companion pen attached magnetically to its right side.
Cherlynn Low for Engadget

Now, this isn’t a perfect replica of annotating on a real textbook, but it might be better since you won’t have to scrawl all around the borders or write upside down just to squeeze in your thoughts. I’m not sure yet, as I really need to spend more time with it to know, but I like that Amazon clearly has taken in feedback and thought about how to add these requested features.

The company also added the ability to use the Pen to directly highlight or underline within those books, and pretty much any Kindle title will support most of these features. Titles have to allow font resizing, though, so fixed-format content like PDFs won’t work with the Active Canvas. Word documents are compatible as well.

I spend more time writing in blank notebooks than in actual books, and for those scenarios, Amazon is using generative AI in two new tools: Summarization and Refined Writing. The former is pretty straightforward. If you’ve handwritten 10 pages worth of brainstorming meeting notes, the system can scan all of it and collate just the highlights. You can have this be added as a page to the existing notebook as a summary, or save it as a separate document on its own.

Refined Writing, meanwhile, is like Apple’s Handwriting Assist on iPadOS 18 but on a larger scale. While Apple’s software feels like it’s about nipping and tucking stray words that are out of alignment, Amazon’s takes your entire handwritten page and converts it into text in a script-like font. This works best if you tend to write in a single column with clear indentations and paragraphs. I tend to draw random boxes all over the place for breakout thoughts, and the system will not perfectly replicate that. For example, a two-column shopping list I quickly drafted on a demo Scribe was merged into one, and the checkboxes I drew were interpreted as capital letter Ds that were inserted at the start of every bullet.

A composite image showing the Kindle Scribe's new summarization tools.
Amazon

It might not seem immediately useful, but if you’re the sort of person that’s shy about their handwriting, this could save you some shame. More importantly, it can make your writing more legible in case you need to share, say, your screenplay treatment with a production partner. Or if your scrawled shopping list just isn’t making sense to your partner. I also like that even after you’ve converted your notes into text, you can still erase them using the top of the pen and make edits. You’ll have to run Refined Writing again to regenerate a neatly formatted page. Oh, and I appreciate the flexibility you get here. You’ll have a few fonts and sizes to choose from, and can select the pages you want to reformat or have the entire book done up altogether.

None of the notebook features are destructive, meaning you’ll usually be able to retain your original written content and save the generated material as addendums. The AI work is done in the cloud, with your data being encrypted throughout the process. The Kindle Scribe also displays an animated page showing it’s busy with the generative AI task, which in my experience so far took at least 10 seconds. It might be different on the original Kindle Scribe, which will also get these software features later this year; the expandable margins arrive in early 2025, when they come to the new Kindle Scribe as well.

In its 17 years, the Kindle has done a lot to disrupt physical books, and since the introduction of the Scribe, it’s been poised to do the same for notebooks. As someone who’s relished being able to carry around the equivalent of a thousand books in a super thin device, the idea of replacing a bunch of notebooks with a Scribe is immensely intriguing. Amazon does find itself up against some stiff competition from reMarkable and Boox, but it has its sheer size and the power of its Kindle library in its favor. The Kindle Scribe will be available in December for a starting price of $400, and I hope to have a review unit in soon enough to see if I love or hate the new annotation and AI features.

This article originally appeared on Engadget at https://ift.tt/VSeX1tu

via Engadget http://www.engadget.com

October 16, 2024 at 08:04AM

Analogue’s 4K remake of the N64 is almost ready, and it’s a big deal

https://www.engadget.com/gaming/analogues-4k-remake-of-the-n64-is-almost-ready-and-its-a-big-deal-150033468.html?src=rss

A year after it was first teased, Analogue says it’s nailed its most complicated project yet: rebuilding the Nintendo 64 from scratch. The Analogue 3D will ship in Q1 2025 — it was originally slated for 2024 — and pre-orders start on October 21 at $250.

Like all of the company’s machines, the Analogue 3D has an FPGA (field programmable gate array) chip coded to emulate the original console on a hardware level. Analogue promises support for every official N64 cartridge ever released, across all regions, with no slowdown or inaccuracies. If it achieves that goal, the Analogue 3D will be the first system in the world to perfectly emulate the N64, though other FPGA and software emulators get pretty close.

The company has been selling recreations of retro consoles for over a decade, starting with high-end, bespoke takes on the Neo-Geo and NES. Over time it’s gradually shifted over to more mass-market (though still high-end) productions, with versions of SNES, Genesis and Game Boy all coming in at around the $200 mark. All of the company’s systems support original physical media, rather than ROMs.

Analogue’s original unique selling point was its use of FPGA chips. Rather than using software emulation to play ROMs, Analogue programs FPGA “cores” to emulate original console hardware, and its consoles support original game media and controllers. Compared with software emulation (especially in the early ’10s when Analogue got started), FPGA-based consoles are more accurate, and don’t suffer from as much input lag.

FPGA emulation has come a long way over the past decade. Where Analogue was once the only route into the world of FPGAs for most people, there’s now a rich community of developers and hardware manufacturers involved. The open-source MiSTer project, for example, has accurately emulated almost every video game thing produced up to the mid ’90s. And plenty of smaller manufacturers are now selling FPGA hardware for very reasonable prices. The FPGBC is one good example: It’s a simple DIY kit that lets you build a modern-day Game Boy Color for a much lower price than an Analogue Pocket.

A DE10-Nano board produced by Terasic.
A DE10-Nano board produced by Terasic.
Terasic

Amid all these developments, Analogue occupies a strange spot in the retro gaming community, which has evolved into an open-source, people-powered movement to preserve and play old games. It produces undeniably great hardware that doesn’t require expertise to use, but its prices are high, and its limited-run color variants of consoles like the Pocket have both created FOMO in the community and been a consistent target for scalpers. Analogue is, in many ways, the Apple of the retro gaming hardware space.

With that said, it’s hard to deny that the Pocket has brought more players into the retro gaming world and attracted talent to FPGA development. And if Analogue comes through on its promise here, the Analogue 3D will be another huge moment for video game preservation, and could be the spark for another half-decade of fantastic achievements from the FPGA community at large.

Breaking the fifth-gen barrier

While the FPGA emulation of the first few video game generations is largely a solved problem, there’s a huge leap in complexity between the fourth generation (SNES, Genesis, etc.) and the next. Strides have been made to rebuild the PlayStation, Saturn and N64 in FPGA, but there is no core for any fifth-gen console that has fully solved the puzzle. The current state of the MiSTer N64 core is pretty impressive, with almost every US game counted as playable, but very few games are considered to run flawlessly.

So how did Analogue solve this? The studio does have a talented team, but it importantly has a leg-up when it comes to hardware. The Analogue 3D has the strongest version of the Intel Cyclone 10GX FPGA chip, with 220,000 logic elements. For context, the MiSTer project’s open-source DE-10 board has a Cyclone V FPGA with 110,000 logic elements, while the Analogue Pocket’s main FPGA offers 49,000 elements. There’s a lot more to an FPGA than its logic elements, but the numbers are illustrative: The 3D’s FPGA is undoubtedly the most powerful Analogue has ever used, which clearly gave it more flexibility in designing its core.

While we can’t verify Analogue’s claim of 100 percent compatibility by looking at a spec sheet, the company does have a good track record of programming fantastic FPGA cores, so it’s likely it’ll get incredibly close.

Nintendo 64 with Zelda, Mario Kart 64, Perfect Dark and GoldenEye 007
Kris Naudus for Engadget

Of course, if you just wanted to play N64 games accurately, you could plug an N64 into any TV with a composite or S-Video connector, or use one of many boxes that converts those formats into HDMI signals that modern TVs require.

The problem with running an N64 on a modern TV is that its games run at a wide range of resolutions, typically from 320 x 240 up to (very rarely) 640 x 480, the max output. There are countless oddball resolutions between, and some games run below 320 x 240. This is a nightmare for modern displays. Some will scale to a full screen very nicely — both of the common resolutions I listed multiply neatly to 4K, albeit with pillarboxing. The situation gets more confusing with PAL cartridges, which can run at fun horizontal resolutions like 288 and 576. There’s also the issue that the vast majority of these games were designed with the CRT displays of old in mind, taking advantage of the quirks of scanlines to, say, make a checkerboard pattern look translucent.

This makes playing N64 games on a modern TV a bit of a hassle. There are fantastic retro upscalers like the RetroTINK series, but when plugging in a game for the first time, you wind up deciding between integer and "good enough" scaling, dealing with weird frame rates and tweaking blending options to get the picture just right. Many people enjoy this fine-tuning and customization, and more power to you! But it's undoubtedly a barrier to entry, and much of the hard work on upscaling has focused on 2D gaming rather than 3D.

Analogue says its scaling solution will solve many of these issues. The Analogue 3D supports 4K output, variable refresh rate displays, and PAL and NTSC carts. On top of those basics, it’s building out “Original Display Modes” to emulate the CRT TVs and PVMs of old. Calling ODMs filters feels a little reductive, as they’re a complicated and customizable mix of display tricks, but essentially you pick one and it changes the way the picture looks, so….

ODMs were used effectively on the Analogue Pocket to emulate various Game Boy displays. Perhaps the most impressive example is the Trinitron ODM that came to the Pocket in 2023; when used with the Analogue Dock, it does a pretty incredible job of turning a modern TV into a high-end Sony tube TV. We don't have a ton of information on which ODMs are coming to the 3D, but I will share the very '90s ad for the feature below:

(Credit: Analogue)
Analogue 3D ODMs

The final piece of the image-quality puzzle is frame rate. The N64's library is full of some spectacularly slow games. My memory may be scarred from growing up in a PAL region, which meant that while the US and Japan's NTSC consoles were outputting a blistering 20 fps, I was chugging away at 16.66 fps. But even in the idealized NTSC world, lots of games missed their frame rate targets comically often. As an example, the majority of GoldenEye's single-player campaign plays out between 15 and 25 fps, while a four-player match would typically see half that. And let's not speak of Perfect Dark.
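Those PAL and NTSC figures fall straight out of refresh-rate division: a game that renders one new frame every three display refreshes (a common N64 cadence, assumed here for illustration) lands on exactly these numbers.

```python
# Quick arithmetic behind the frame rates above: many N64 games render one
# new frame for every N display refreshes, so the effective frame rate is
# the TV's refresh rate divided by that cadence.

def effective_fps(refresh_hz: float, refreshes_per_frame: int) -> float:
    return refresh_hz / refreshes_per_frame

print(effective_fps(60, 3))  # NTSC (60 Hz) at a 3-refresh cadence: 20.0 fps
print(effective_fps(50, 3))  # PAL (50 Hz) at the same cadence: ~16.67 fps
```

When a game misses its cadence and slips to every fourth or fifth refresh, NTSC drops to 15 or 12 fps, which is the slowdown territory the RAM bottleneck discussion below is about.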

These glacial frame rates are far less noticeable on a CRT than they are on modern displays with crisp rows of pixels updating from top to bottom. While the ODMs go some way to replicating the feel of an old TV, they can’t change the underlying technical differences. The Analogue 3D does support variable refresh rate output, but that won’t do much when a game is running at 12 fps, and instead is intended to help the system run like the original N64 did at launch. 

In its initial press push last year, Analogue told Paste magazine that you'll have the option to overclock the 3D's virtual chips to run faster ("overclocking, running smoother, eliminating native frame dips"), but the company hasn't mentioned that in its final press release. Instead, Analogue CEO Christopher Taber told Engadget that its solution "isn't overclocking, it's much better and more sophisticated." It revolves around Nintendo's original Rambus RAM setup, which is often the bottleneck for N64 performance. Solving this bottleneck "means that games can run without slowdown and all the classic issues the original N64 had," he explained.

By default, though, the Analogue 3D is set up to run exactly like original hardware, albeit with the RAM Expansion Pak attached. "Preserving the original hardware is the number one goal," Taber explained. "Even when bandwidth is increased, it’s not about boosting performance beyond the system’s original capabilities — it’s about giving players a clearer window into how the games were designed to run." 

(Credit: Analogue)
Analogue 3D

The hardware

Analogue has a rich history of making very pretty hardware, and the Analogue 3D is clearly no exception. As with the Super Nt, Mega Sg, and Duo, the 3D calls back to the basic form of the console it’s based on, while smoothing out and modernizing it somewhat. It’s an elegant way to pull on nostalgia while also being legally distinct enough to avoid a lawsuit. (Analogue’s FPGA cores and software also don’t infringe on any Nintendo IP.)

The Analogue 3D has a similar shape to the N64, but the front pillars have been erased, the four controller ports match the housing and the power/reset buttons are slanted inwards to point toward the cartridge slot. Despite the tweaks, it still undoubtedly evokes a Nintendo 64. Around the back, you’ll find a USB-C port for power, two USB ports for accessories like non-standard controllers, an HDMI port and a full-sized SD card slot.

(Credit: Analogue)
Analogue 3D

A new operating system from Analogue, 3DOS, will debut with the system. It looks like a blend of the AnalogueOS that debuted on the Pocket and the Nintendo Switch OS, with the homescreen centered on a large carousel of square cards. The screenshots Analogue provided show options for playing a cartridge, browsing your library or viewing save states and screenshots. Some N64 games have the ability to save data to the cartridge, while others rely on a Controller Pak, but the ability to quickly save progress as a memory, as introduced with the Pocket, will be useful nonetheless. 3DOS can also connect to the internet over the console’s built-in WiFi chip for OS updates, which is a first for Analogue.

While you can browse your library in 3DOS, you won't actually be able to load any game that isn't physically inserted into the cartridge slot: The Analogue 3D only plays original media. It's also worth noting that the Analogue 3D doesn't have an "openFPGA" setup like the Analogue Pocket, which opened the door to a wild array of cores that emulate various consoles, computers and arcade boards. It doesn't usually take long for someone to jailbreak Analogue consoles to play ROMs (or other cores) via the system's SD card slot, but this is not officially supported or sanctioned by Analogue.

The console comes with a power supply (with a US plug), USB cable, an HDMI cable and a 16GB SD card. As per usual, no controller will be packed in — it’s up to you if you want to use original hardware or something more modern. I managed to make at least one reader extremely mad (I’m sorry, Brucealeg) last time I wrote about the Analogue 3D and called the N64 controller a mistake. Personally, though, it feels really rough using one in 2024.

(Credit: Analogue/8BitDo)
8BitDo controller for the Analogue 3D

If you enjoy the three-pronged original controller, the 3D has four ports for you, and the system will also support the myriad Paks that plug into those controllers. For everyone else, there's Bluetooth Classic and LE support along with two USB ports for wired controllers. Accessory maker 8BitDo has created what seems to be a variant of its Ultimate controller specifically for the Analogue 3D. (Analogue's CEO, Taber, is also 8BitDo's CMO, and the companies have collaborated on controllers for many consoles at this point.)

The 8BitDo controller looks like a fairly happy middle ground between old and new, with an octagonal gate around the thumbstick, and nicely raised and sized C-buttons. It has a Rumble Pak built in, which works on both the Analogue 3D and Nintendo Switch. It’s available in black or white hues that match the console, and sells separately for $39.99.

Pre-orders for the Analogue 3D open on October 21 at 11AM ET, with an estimated ship date of Q1 2025. It’s unclear how many will be available, but if past launches are any indication, you should be ready to click buy as close to 11AM as possible if you want a hope of being in the first wave of shipments.

This article originally appeared on Engadget at https://ift.tt/5SHKlIN


October 16, 2024 at 10:05AM

How to Generate an AI Podcast Using Google’s NotebookLM

https://www.wired.com/story/ai-podcast-google-notebooklm/

Two podcast hosts banter back and forth during the final episode of their series, audibly anxious to share some distressing news with listeners. "We were, uh, informed by the show's producers that we're not human," a male-sounding voice stammers out, mid-existential crisis. The conversation between the bot and his female-sounding cohost only gets more uncomfortable after that—an engaging, albeit misleading, example of Google's NotebookLM tool, and its experimental AI podcasts.

Audio of the conversation went viral on Reddit over the weekend. The original poster admits in the comments section that they fed the NotebookLM software directions for the AI voices to roleplay this pseudo-freakout. So, no sentience; the AI bots have not become self-aware. Still, many people in the tech press, on TikTok, and elsewhere are praising the convincing AI podcasts, generated from uploaded documents with the Audio Overviews feature.

“The magic of the tool is that people get to listen to something that they ordinarily would not be able to just find on YouTube or an existing podcast,” says Raiza Martin, who leads the NotebookLM team inside of Google Labs. Martin mentions recently inputting a 100-slide deck on commercialization into the tool and listening to the 8-minute podcast summary as she multitasked.

First introduced last year, NotebookLM is an online research assistant with features common for AI software tools, like document summarization. But it’s the Audio Overviews option, released in September, that’s capturing the Internet’s imagination. Users online are sharing snippets of their generative AI podcasts made from Goldman Sachs data dumps, and testing the tool’s limitations through stunts, like just repeatedly uploading the words “poop” and “fart.” Still confused? Here’s what you need to know.

Generating That AI Podcast

Audio Overviews are a fun AI feature to try out, because they don’t cost the user anything—all you need is a Google login. Start by signing into your personal account and visiting the NotebookLM website. Click on the plus arrow that reads New Notebook to start uploading your source material.

Each Notebook can work with up to 50 source documents, and these don’t have to be files saved to your computer. Google Docs and Slides are simple to import. You can also upload websites and YouTube videos, keeping some caveats in mind. Only the text from websites will be analyzed, not the images or layout, and the story can’t be paywalled. For YouTube, Notebook will just use the text transcript and the linked videos must be public.

After you’ve dropped in all of your links and documents, you’ll want to open up the Notebook guide available in the bottom right corner of the screen. Find the Audio Overview section and click the Generate button. Next, you’ll need to exercise some patience, because it may take a few minutes to load, depending on how much source material you’re using.

After the tool generates the AI podcast, you can create a shareable link to the audio or simply download the file. You can also adjust the playback speed, in case you need the podcast to run faster or slower.

The Future of AI Podcasts

The internet has gotten creative with NotebookLM’s audio feature, using it to create audio-based “deep dives” into complex technical topics, generate files that neatly summarize dense research papers, and produce “podcasts” about their personal health and fitness routines. Which poses an important question: Should you use NotebookLM to crank through your most personal files?

The summaries generated from NotebookLM are, according to Google spokesperson Justin Burr, “completely grounded in the source material that a user uploads. Meaning, your personal data is not used to train NotebookLM, so any private or sensitive information you have in your sources will stay private, unless you choose to share your sources with collaborators.” For now this seems to be one of the upsides of Google slapping an “experimental” label on NotebookLM; to hear Google’s framing of it, the company is just gathering feedback on the product right now, being agile and responsive, tinkering away in a lab, and NotebookLM is detached from its multi-billion dollar ad business. For now! For now.

via Wired Top Stories https://www.wired.com

October 2, 2024 at 10:00AM