If you’re finally upgrading your PC, you’re likely excited to have speedier performance and increased storage. However, you can’t forget to delete all of your old device’s contents, especially if you’re giving it away or reselling it. There’s no need to manually wipe your laptop when you can let this Windows-friendly data shredder stick take care of it. Grab it and wipe full drives or just specific files while it’s less than $30.
What’s the difference between shredding and deleting your media? Shredding overwrites your old data, making it impossible to recover once done. You’ll have greater peace of mind knowing your personal information and files won’t fall into the wrong hands.
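The core idea (overwriting a file's bytes in place so the originals are unrecoverable, then deleting it) can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the stick's actual software; real shredders also deal with filesystem journaling, SSD wear leveling, and free-space wiping:

```python
import os

def shred_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it.

    Illustrative sketch only: on SSDs, wear leveling means overwritten
    blocks may survive elsewhere, which dedicated tools account for.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace every byte with noise
            f.flush()
            os.fsync(f.fileno())       # force the write to disk
    os.remove(path)
```

A plain delete, by contrast, only removes the filesystem's pointer to the data, leaving the bytes on disk until they happen to be overwritten.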
It’s super easy to use this gadget, too. Here’s how it works:
Plug your data shredder stick into your PC and run the app.
Want to wipe all deleted items from your entire drive? Select the drive, and the app shreds them for good.
You can shred on countless devices as many times as you’d like, whether you’re clearing out a device for resale or simply want to remove duplicate files from your PC.
Seamless and effective data wipes are possible. Grab this PC-compatible data shredder stick, now available at the unbeatable price of $29.99 while supplies last.
Data Shredder Stick Secure Data Wiping Tool for Windows
The private space company Rocket Lab is on track to launch the first of its new reusable Neutron rockets in the second half of 2025 and will eventually land them at sea, the company revealed.
Rocket Lab founder and CEO Peter Beck shared updates on Neutron during the company’s Feb. 26 earnings call, saying its Neutron rocket will address the growing demand for launch services from defense, security, and science communities.
"We’re working hard to bring Neutron online with one of the fastest development schedules in history for a new rocket, because we know medium-lift launch opportunities are limited and space access is being stifled," Beck said in a statement. "Neutron’s debut launch planned for later this year will help to ease that bottleneck."
Additionally, Rocket Lab has unveiled a plan to modify an offshore barge, which it has named "Return on Investment." The modified barge will act as an ocean landing platform for returning Neutron missions.
"Our new landing platform will open space access even further by enabling even more mission opportunities that require maximum Neutron performance," Beck said.
Rocket Lab’s design for its new "Flatellite" satellite bus in Earth orbit. (Image credit: Rocket Lab)
Rocket Lab also introduced a new satellite product called "Flatellite," a flat satellite that the company says can be mass produced and tailored for large satellite constellations.
The satellites’ flat shape allows them to be stacked for launch, which Rocket Lab says will maximize the number of satellites per mission while integrating seamlessly with its Neutron rocket.
"The industry is hungry for versatile satellites that are affordable and built fast in high volumes," Beck said in a statement. "This is why we created Flatellite."
Rocket Lab’s plan for its Flatellite satellite product will be to launch them in stacks for constellation batch launches. (Image credit: Rocket Lab)
The founder called the new satellites "a bold, strategic move toward completing the final step in Rocket Lab’s ultimate vision of being a truly end-to-end space company, operating its own constellation and delivering services from space."
Rocket Lab currently launches missions using its Electron, a two-stage launch vehicle for small satellites. The Institute for Q-shu Pioneers of Space (iQPS), a Japanese satellite company, will use Electron for eight missions over 2025 and 2026 under a contract Rocket Lab announced last week.
"Electron’s high launch frequency and reliability make it an ideal choice for our mission," iQPS CEO Dr. Shunsuke Onishi said in a statement. "This contract brings us one step closer to building our satellite constellation over the next two years, and we remain fully committed to making this vision a reality."
Rocket Lab says the next Electron mission for iQPS is scheduled as soon as this month.
Microplastics. They’re in the soil; they’re in the ocean; and they’re even in the air, poised to invade our respiratory systems and to harm our health. But how, exactly, do they make their way into the atmosphere?
Some studies have suggested that these tiny pieces of plastic — at most around 5 millimeters across — take to the air from the ocean. Ocean spray shoots them into the atmosphere, these studies say, positioning these minuscule pollutants to enter our bodies when we breathe. But a new paper published in npj Climate and Atmospheric Science suggests that the ocean may absorb more airborne microplastics than it introduces.
“The ocean functions more as a sink than a source,” the authors stated in the study. “This challenges the previous view of the ocean as the primary atmospheric microplastic source, urging a reassessment of pollution mitigation strategies.”
Most microplastics are made on land, where larger plastic debris degrades into tinier and tinier pieces. From there, these small particles of plastic are washed into waterways and transported to the ocean, where they are then shot into the air. Indeed, some studies have suggested that sea spray and waves send microplastics into the atmosphere by way of the air bubbles that they create.
In these studies, observations of airborne microplastics around the world seemed to indicate that the ocean introduced millions (or even billions) of kilograms of tiny pollutants into the air each year. While subsequent science lowered that estimate to thousands of kilograms, the authors of the npj Climate and Atmospheric Science study took another look at whether the sea is actually as significant a source of airborne pollution as those previous papers suggest.
Using a chemical transport model (a computer simulation that mimics the movement of atmospheric components in and from the air), the authors of the new study found that the ocean is not a substantial source of atmospheric microplastics but is, instead, a substantial sink.
“Although the ocean contributes only about 0.008 percent as a source of atmospheric microplastics, it plays a crucial role as a sink,” the study authors wrote. In fact, they found that the ocean actually absorbs about 15 percent of atmospheric microplastics.
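To see why those two percentages make the ocean a net sink, a quick back-of-the-envelope balance helps. The annual total below is a made-up placeholder, not a figure from the paper:

```python
# Hypothetical annual mass of microplastics moving through the atmosphere.
total_atmospheric_kg = 1_000_000  # placeholder value, not from the study

ocean_source_kg = total_atmospheric_kg * 0.00008  # ocean emits ~0.008%
ocean_sink_kg = total_atmospheric_kg * 0.15       # ocean absorbs ~15%

net_into_ocean_kg = ocean_sink_kg - ocean_source_kg
print(f"Net flux into the ocean: {net_into_ocean_kg:,.0f} kg/yr")
```

Whatever the real total turns out to be, the sink term dwarfs the source term by roughly three orders of magnitude, which is the study's headline point.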
The study authors also found that the size of microplastic particles shapes their movement through the atmosphere. The larger the particle, the more quickly it settles; some of the smallest microplastics stay airborne for as long as a year, circulating from one part of the globe to another.
The introduction of tiny plastic particles into the atmosphere is an important process to untangle since their presence poses a significant threat to our health — particularly our respiratory health. According to a 2024 review of some 3,000 studies in Environmental Science & Technology, these threats include infertility, pulmonary inflammation, and poor pulmonary function, the second of which is tied to an increased risk of lung cancer.
Ultimately, the npj Climate and Atmospheric Science study shifts the focus from oceanic sources of airborne microplastics to terrestrial ones, suggesting that it is the creation of microplastics on land that future mitigation strategies should address first and foremost.
“Effective mitigation of microplastic-related risks for human health and ecosystems hinges on a comprehensive understanding of atmospheric microplastic dynamics,” the authors concluded, perhaps pointing the way to a future with fewer microplastics floating around.
Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
Sam Walters is a journalist covering archaeology, paleontology, ecology, and evolution for Discover, along with an assortment of other topics. Before joining the Discover team as an assistant editor in 2022, Sam studied journalism at Northwestern University in Evanston, Illinois.
If you’ve purchased a laptop or tablet with an AMD Ryzen chip inside, there’s a performance tweak you absolutely need to know about. Apply it to AMD’s AI/gaming powerhouse Ryzen AI Max chip, and you can score enormous performance gains in just seconds!
Savvy gamers know instinctively that you can boost your game’s frame rate by lowering the resolution or the visual quality, or by making an adjustment to the Windows power-performance slider. But the Ryzen AI Max is a new kind of device: a killer mobile processor that can run modern games at elevated frame rates, and serve as an AI powerhouse.
What’s the secret? A simple adjustment of the Ryzen AI Max’s unified frame buffer, or available graphics memory. While it’s a simple fix, in my tests, it made an enormous difference: up to a 60 percent performance boost in some cases.
When I was comparing Intel’s “Arrow Lake” mobile processor to AMD’s Ryzen AI 300 processor, I was unable to perform certain AI tests because of those tests’ memory and VRAM requirements. After I published the story, AMD reached out to suggest dialing up the VRAM via the laptop BIOS to enable the tests to run. (As it turned out, this didn’t make a difference in that specific test.)
When Asus ROG provided me an Asus ROG Flow Z13 gaming tablet for testing AMD’s Ryzen AI Max+ chip, the topic surfaced again. For whatever reason, Asus had configured its tablet with just 4GB of available video memory. The company recommended we test with 8GB instead. It seemed to be an opportune time to see what effects adjusting the Ryzen AI Max’s graphics memory would have.
Why does video memory matter?
VRAM stands for video RAM. If you use a discrete GPU, the amount of VRAM is predetermined by your graphics-card manufacturer, and the VRAM chips are soldered right onto the board. Some people use “VRAM” as shorthand when talking about integrated graphics, as well. That’s not entirely accurate; integrated graphics share memory between the PC’s main memory and the video logic, and the “VRAM” in this case is known as the “UMA frame buffer.”
VRAM (and the UMA frame buffer) stores the textures used by a PC game inside your graphics card, allowing them to be quickly accessed and rendered upon the screen. If the texture size exceeds the amount of VRAM, your PC must pull them in from somewhere else (RAM or the SSD), slowing down gameplay.
In the case of AI, the VRAM acts a lot like standard PC RAM, storing the weights and models of an AI algorithm, and allowing it to run. In many cases, insufficient VRAM means that a particular LLM might not run. In both cases, though, the amount of memory available to the GPU matters.
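As a rough rule of thumb (my own illustration, not a figure from AMD or any benchmark vendor), you can estimate how much memory an LLM's weights need from its parameter count and numeric precision:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float = 2.0,
                    overhead: float = 1.2) -> float:
    """Rough memory footprint for running an LLM.

    bytes_per_param: 2.0 for FP16 weights, ~0.5 for 4-bit quantized.
    overhead: assumed fudge factor for KV cache and activations.
    """
    return params_billion * bytes_per_param * overhead

# A 7-billion-parameter model at two common precisions:
fp16_gb = model_memory_gb(7)         # FP16
int4_gb = model_memory_gb(7, 0.5)    # 4-bit quantized
print(round(fp16_gb, 1), round(int4_gb, 1))
```

By this estimate, a 7B model in FP16 wants on the order of 17GB, exactly the territory where a 4GB or 8GB frame buffer starts to pinch.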
If you own a discrete graphics card from any manufacturer, the amount of VRAM can’t be adjusted. On Intel Core systems, the available UMA frame buffer is also typically fixed at half of the available system memory. In other words, if you have an Intel Arrow Lake laptop with 16GB of total memory, up to 8GB is accessible by the integrated GPU. And yes, adding more system memory increases the size of the UMA frame buffer. However, you still can’t adjust the UMA frame buffer’s size to your own preferences.
The “front page” of the BIOS utility on our test tablet with the AMD Ryzen AI Max+ chip installed.
Mark Hachman / Foundry
AMD, on the other hand, typically allows you to adjust the size of the UMA frame buffer on its Ryzen processors (and not just the AI Max!), usually through firmware settings. In my case, I accessed the UEFI/BIOS just by rebooting the tablet and tapping the F2 key as it restarted.
On the ROG tablet, I found the adjustment in the advanced settings. Several options were available, from an “Auto” setting to 24GB, with several increments in between. As I noted above, Asus had accidentally sent out its tablets with the 4GB setting enabled. All I did was select a new size for the frame buffer, save, and the boot cycle continued.
Adjustments to the UMA frame buffer were on the “advanced” page.
Mark Hachman / Foundry
Warning: If you own a handheld PC like a Steam Deck, adjusting the UMA frame buffer to improve game performance isn’t that unusual. Making adjustments to the BIOS/UEFI does carry some risk, however. In this case, allocating too much memory to the GPU might not allow an application to run at all, or cause instability. You’re probably safe leaving 8GB reserved for main memory.
If you don’t want to fiddle around in your laptop’s BIOS, you might find that your laptop or tablet ships with the AMD Adrenalin utility. In this case, you might be able to adjust the graphics memory from within the application itself. I found it within the Performance > Tuning tab. If you’re confused about how much memory you’re allocating to graphics and how much is left for your PC, the utility helps make that clear.
Mark Hachman / Foundry
Tested: Does adjusting the UMA frame buffer make a difference?
I’m currently in the process of reviewing the Asus ROG Flow Z13 and its AMD Ryzen AI Max+ chip, but I paused to see what improvements, if any, could be made by adjusting the size of the UMA frame buffer. In general, adjusting it made the biggest difference in games and especially in AI.
I didn’t see enormous gains in synthetic graphics benchmarks: tweaking the UMA frame buffer boosted 3DMark Steel Nomad scores from 1,643 to 1,825, or about 11 percent. Ditto for PCMark 10, which measures productivity: the needle barely moved. I then turned to Cyberpunk 2077 to test how adjusting the UMA frame buffer affected games.
I ran four benchmarks, each using a UMA frame buffer of 4GB, 8GB, or 16GB. The idea was to see what the chip could do on its own, just rasterizing the game, as well as turning on all of the GPU’s AI enhancements to maximize frame rate:
1080p resolution, Ultra settings, with all enhancements turned off
1080p resolution, Ultra settings, with all enhancements (AMD FSR 3 scaling and frame generation) turned on
1080p resolution, Ray Tracing Overdrive settings, with all enhancements turned off
1080p resolution, Ray Tracing Overdrive settings, with all enhancements turned on
The sharpest improvement from increasing the frame buffer size came in the 1080p Ultra settings.
Mark Hachman / Foundry
You can immediately see that the largest improvements in overall frame rate simply come from turning on AMD’s FidelityFX Super Resolution 3 (FSR 3). But adjusting the frame buffer gives you about a 10 percent boost in the Ray Tracing Overdrive mode. More importantly, it bumped up the frame rate on the 1080p Ultra settings, with FSR 3 on, by 17 percent. That’s all from a free, easy adjustment.
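For transparency, the percentage gains quoted throughout are plain before/after ratios; a one-liner reproduces them from the raw scores (the 3DMark Steel Nomad numbers come from the synthetic-benchmark run described earlier):

```python
def pct_gain(before: float, after: float) -> float:
    """Percentage improvement from a before/after benchmark pair."""
    return (after - before) / before * 100

# 3DMark Steel Nomad score at the small vs. enlarged frame buffer
print(round(pct_gain(1643, 1825), 1))  # roughly 11 percent
```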
It’s worth pointing out that adjusting the UMA frame buffer doesn’t always scale. It seems to under the Ray Tracing Overdrive setting, but the same gains don’t play out elsewhere. I’ve found Cyberpunk‘s tests to be quite reproducible, so I’m inclined to believe that it’s not just statistical variance.
I saw the same thing when testing AI, too. I used UL Procyon’s AI Text Generation benchmark, which loads in four separate LLMs or AI chatbots, and asks a series of questions. UL generates its own scores, based on speed and token output. Here, there was an amazing jump in overall performance in places: 64 percent in Llama 2 just by dialing up the frame buffer! But it was also interesting that the performance increase was “capped” at 16GB. Selecting a 24GB frame buffer actually lowered the performance slightly.
Adjusting the frame buffer has some significant benefits in AI…but they don’t scale forever.
Mark Hachman / Foundry
(It might be worth noting that Procyon didn’t think it could run on the Ryzen AI Max, because the system didn’t report enough VRAM. It ran just fine, of course.)
I also checked my work using MLCommons’ own client benchmark, which uses a 7 billion-parameter LLM to generate “scores” of tokens per second and an initial “time to first token.” I didn’t see as much improvement — just about 10 percent from the 4GB to 8GB buffer, and then virtually nothing when I tested it with a 16GB buffer instead.
Integrated graphics matters again
Until now, adjusting the UMA frame buffer was largely irrelevant: Integrated GPUs were simply not powerful enough for that tweak to matter. Things are different now.
Again, if you’re a handheld PC user, you may have seen others adjusting their UMA frame buffer to eke out a bit more performance. But as my colleague Adam Patrick Murray commented, the Ryzen AI Max+ processor inside the Asus ROG Flow Z13 tablet is an odd hybrid of a handheld and a tablet. It’s more than that, of course, given that companies like Framework are putting it in small form-factor PCs.
But lessons learned from the handheld PC space do apply — and well! — to this new class of AI powerhouse chips. Adjusting the UMA frame buffer on an AMD Ryzen AI Max+ chip can be a terrific free tweak that can boost your performance by dramatic amounts.
Discord has become a central artery for gaming. It’s the default place for companies to build communities around their games and for the people who play them to hang out with one another, coordinate gaming sessions, or just shoot the shit about whatever’s going on in life. It’s one of the least shitty social media platforms out there right now, but it might not stay that way for long.
The New York Times reports that Discord’s founders are meeting with investment bankers in preparation for a possible IPO as early as this year. While plans can always change, what that means right now is that the meeting place for 200 million monthly users, over 90 percent of whom use it for gaming, is exploring going from a private company to a public one ruled by the founding principle of the stock market: number must go up.
“We understand there is a lot of interest around Discord’s future plans, but we do not comment on rumors or speculation,” a spokesperson for the company told The New York Times. “Our focus remains on delivering the best possible experience for our users and building a strong, sustainable business.”
Discord was released in 2015 by founders Jason Citron and Stanislav Vishnevskiy. After shipping Fates Forever, an iPad MOBA that never took off, in 2014, they pivoted to building a communications platform for online gamers. Online games were exploding at the time, and the tools for communicating in them were pretty bad. Discord filled the void when it came to coordinating raids in Final Fantasy XIV or ganks in Dota 2.
It eventually became a one-stop shop for gaming conversations online, offering an alternative to boring, stilted workplace productivity tools like Skype and Slack, archaic gaming forums, and fire-hose social media platforms like Reddit and Twitter. Discord is now fully integrated on Xbox Series X and PlayStation 5, and serves as a better solution than Nintendo Switch’s terrible voice chat app and the frustrating hiccups of cross-platform party chat. Powering this takeover has been the fact that 1) Discord works and 2) the majority of its features are entirely free.
Going public could change all of that. We’ve seen the playbook before: a tech company woos hundreds of millions of users with a great product that’s practically free, and then, once they’re locked into the platform through history, network effects, or just old habit, squeezes as much financial value out of the thing as possible by making it worse in almost every way. If this were A Christmas Carol and Discord were Ebenezer Scrooge, then Facebook would be the Jacob Marley shaking its chains in the chat app’s face.
This is what the author Cory Doctorow has dubbed enshittification. Online products and services get worse over time as the companies behind them are forced to extract more and more value no matter the long-term costs. Being private doesn’t make a company immune from this, but going public certainly has the potential to be like throwing TNT into a dumpster fire. This shift was already evident in Discord—see its embrace of online advertising just last year—but seems only likely to worsen with a potential IPO payout looming overhead. Look no further than Reddit’s road to going public, paved with generative-AI deals and API price hikes.
The prospect of Discord’s enshittification is even more worrying because it’s now the de facto town square for many of the biggest games around, from Marvel Rivals to Helldivers 2. Rockstar Games just launched its own Discord channel in preparation for Grand Theft Auto VI, likely to be the biggest game launch ever when it arrives later this year. The price of exiting the platform if features get worse or locked behind a paywall is only going up. And any communities that do leave will have to leave their pasts behind, sealed off like “The Cask of Amontillado” from the rest of the internet. Discord’s separation from that crumbling infrastructure has long been one of its greatest virtues, making it feel like a relative safe haven from SEO spam and algorithmic churn. An IPO could one day make it feel more like a tomb.
High on a pandemic-fueled influx of new users, Discord once flirted with growing beyond its gaming roots. There was talk of it supplanting Slack and of Microsoft buying it for a potential $6.5 billion. Then last year it promised it was getting back to basics and doubling down on its original core mission. “We believe Discord can play a uniquely important role in the future of gaming,” Citron wrote at the time. “We’re going to focus on making it easier and more fun for people to talk and hang out before, during, and after playing—and we’ll help developers of those games bring their creativity to life.”
Tens of thousands of cuneiform tablets are sitting around, just waiting to be translated. It’s not an easy job; the ancient language is based on wedge-shaped pictograms and includes more than 1,000 unique characters that vary by era, geography, and individual writer.
But decoding the pictograms could be a culturally and historically significant task. Cuneiform arose about 5,000 years ago in Mesopotamia, in what is now Iraq. It is one of four known pristine languages — writing systems with no known influences from any other. Some translated cuneiform tablets have revealed contents as banal as a record of inventory for shipping. Others have been more profound — like the “Epic of Gilgamesh," the first known written work of literature.
Those translations, done by relatively few individuals who know the language, required a lot of labor — and perhaps some guesswork. Decoding such complexity would be the perfect job for artificial intelligence, thought some Cornell University researchers, who, with colleagues at Tel Aviv University, created a system to do just that. They report on it in a paper to be presented at a conference in April 2025.
AI Deciphers Ancient Tablets
The research team developed a system that overcomes the many obstacles that variations present to translation.
“When you go back to the ancient world, there’s a huge variability in the character forms,” Hadar Averbuch-Elor, a Cornell computer science professor who led the research, said in a press release. “Even with the same character, the appearance changes across time, and so it’s a very challenging problem to be able to automatically decipher what the character actually means.”
The computer system reads photographs of clay cuneiform tablets, then computationally overlays each image atop ones with similar features whose meaning is known. Because the system automatically aligns the two images until they digitally snap into place, the researchers named it ProtoSnap.
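To give a toy sense of what aligning an image to a known prototype means, here is plain normalized cross-correlation template matching in NumPy. This is an illustrative stand-in only; ProtoSnap itself fits deformable character prototypes with a learned model rather than sliding a rigid template:

```python
import numpy as np

def best_alignment(image: np.ndarray, template: np.ndarray):
    """Slide a character template over a tablet photo and return the
    (row, col) offset with the highest normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((p * t).mean())  # approaches 1.0 for a match
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Normalizing each patch makes the comparison insensitive to brightness and contrast, a crude analogue of coping with the variability in how characters were impressed into clay.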
In the paper, the researchers demonstrated that the snapped characters can be used to train the system to recognize similarities between other characters later in the process, a step they call downstream. With that training, ProtoSnap performed much better at recognizing cuneiform characters — even rare or highly variable ones — than previous AI efforts.
This advance could help automate the tablet-reading process, saving an enormous amount of time and helping scholars better compare writings from different times, cities, and authors. Most importantly, it would dramatically hasten translation, ultimately giving the world access to an abundance of ancient writing.
“The base of our research is the aim to increase the ancient sources available to us by tenfold,” Yoram Cohen, a co-author and archaeology professor at TAU, said in the press release. “This will allow us, for the first time, the manipulation of big data, leading to new measurable insights about ancient societies – their religion, economy, social and legal life.”
Although many translated tablets will likely just show, say, a receipt for a livestock purchase, others could contain fascinating historical accounts — or even another epic poem.
Before joining Discover Magazine, Paul Smaglik spent over 20 years as a science journalist, specializing in U.S. life science policy and global scientific career issues. He began his career in newspapers, but switched to scientific magazines. His work has appeared in publications including Science News, Science, Nature, and Scientific American.
Toyota has long been known for its innovative approach to automotive technology and its commitment to providing drivers with engaging and reliable vehicles. Yet, when it comes to electric vehicles, the automaker has been more cautious than some of its competitors, focusing heavily on hybrids and hydrogen fuel cells. Although it is famous for its line of trucks and SUVs, the brand has also worked hard to bring more engaging gas-powered vehicles into its lineup, including the GR86, GR Supra, and GR Corolla, all of which have manual transmissions.
Lexus UX 300e
Toyota
In 2023, Toyota developed a simulated manual transmission for electric vehicles that provides a driving experience similar to that of a gas-powered automobile. They even put the revolutionary system in a Lexus UX 300e, and it did exactly what it was intended to do. This kind of system would be a first for production EVs, offering a rare blend of electric efficiency and old-school driving involvement. But when will Toyota actually build an EV with this unique and potentially game-changing feature?
A manual “shifting” transmission for a single-speed electric setup?
Toyota’s simulated manual transmission has a clutch pedal, a six-speed manual shifter, synthetic engine sounds, and even the ability to simulate stalling. If you want convenience and comfort, you can drive it in regular, boring EV mode instead. The simulated manual transmission Toyota developed for EVs isn’t just a novelty; it’s an effort to preserve the engaging driving experience that many enthusiasts associate with stick-shift cars.
Toyota EV shifter
Toyota
Traditional EVs are typically single-speed, using the instant torque of electric motors to deliver seamless acceleration without the need for gear changes. While this makes them efficient and smooth, it also removes the tactile, interactive element of shifting gears — something that many driving purists miss.
Toyota’s simulated manual works via software, which allows the car to shake if the driver fails to depress the accelerator enough, shifts into the wrong gear, or mishandles the clutch pedal. The six-speed gear shift sits in an old-school H-gate with microswitches at each gear position but no physical connection to any shift rods, synchros, or other hardware. The clutch pedal uses a return spring for feel. Neither the gearshift lever nor the clutch is mechanically connected to anything. It’s a brilliant, innovative system designed to make the driving experience as authentic as possible.
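Toyota hasn't published how its control software works, but the behavior described above (a virtual gear ratio, a stall condition, torque shaped by pedal input) can be sketched hypothetically. Every ratio, threshold, and scale factor below is invented for illustration:

```python
# Invented gear ratios for a hypothetical simulated six-speed manual.
GEAR_RATIOS = {1: 3.6, 2: 2.1, 3: 1.4, 4: 1.0, 5: 0.8, 6: 0.65}

def motor_command(gear: int, clutch_engaged: bool,
                  accel_pct: float, wheel_rpm: float):
    """Return (torque_request, stalled) for a simulated-manual EV."""
    if gear == 0 or not clutch_engaged:
        return 0.0, False  # neutral or clutch pedal pressed: coast
    virtual_engine_rpm = wheel_rpm * GEAR_RATIOS[gear]
    if virtual_engine_rpm < 800 and accel_pct < 10:
        return 0.0, True   # lugging the virtual engine: simulate a stall
    torque = accel_pct / 100 * GEAR_RATIOS[gear] * 50.0  # arbitrary scale
    return torque, False
```

Because the motor only ever sees a torque request, the whole "gearbox" lives in a lookup table and a few conditionals, which is what makes retrofitting it onto a single-speed EV plausible.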
What is Toyota’s plan with the revolutionary transmission?
Toyota’s electric vehicle strategy has been deliberate and measured, as evidenced by its single EV offering, the bZ4X. The company has invested heavily in hybrid technology and has been a strong proponent of hydrogen fuel-cell vehicles like the Mirai. That said, Toyota has been criticized for lagging behind in the all-electric space, especially as automakers like Tesla, Hyundai, and Ford push forward with ambitious EV lineups.
Toyota FT-Se
Toyota
In December 2021, Toyota announced plans to release 30 new EV models by 2030, with the aim of selling 3.5 million electric vehicles annually by the end of the decade. It has since scaled back those plans, understandably, given slowing demand for electric vehicles, and now expects to produce around 1 million EVs by 2026, down from the previous 1.5 million target. As for when the simulated manual transmission will debut, Toyota has been tight-lipped.
There was the possibility of integrating this new transmission into future electric sports cars, like a potential EV successor to the Toyota GR86 or Supra, but now it appears that both models’ successors will probably have gas engines with mild hybrid assist.
Reports from Toyota insiders suggest that the company could launch a prototype featuring the simulated manual transmission by 2026, and the most likely candidate is the sporty, all-wheel-drive FT-Se EV concept introduced in 2023. This timeline aligns with Toyota’s broader EV goals and its push to bring more diverse electric options to the market. The use of a driver-centric EV aligns nicely with the company’s increasing focus on sporty vehicles.
As Toyota continues to expand its EV lineup, the prospect of a driver-focused electric sports car with a simulated stick shift is exciting. While we may have to wait a few more years to see it in action, the mere possibility suggests that Toyota is serious about keeping driving fun, even in an electric future.
A simulated manual could differentiate Toyota’s EVs in an increasingly crowded market. As electric cars become more common, features that enhance driving experience — rather than just efficiency and range — will likely become key selling points.
Toyota will need to determine whether a simulated manual transmission in an EV is a niche offering or something that could attract a wider audience by providing a more connected driving experience. Then there’s the question of how well the final product will be executed. To draw enthusiasts, it will need convincing mechanical feel and a faithful recreation of a real manual gearbox. If the system feels artificial or gimmicky, it could struggle to win people over.