Did you know there are strict rules for how LEGO bricks can be used? And no, we’re not talking about laws, but official design principles that dictate how sets are built and why some techniques are considered illegal!
LEGO designers follow precise guidelines to ensure every piece fits perfectly, remains sturdy, and can be endlessly reused. These rules prevent bricks from warping, getting permanently stuck, or becoming too fragile over time. Ever wondered why LEGO never tells you to wedge a tile between studs or jam a peg into the wrong hole? It’s all about keeping your bricks in top shape!
This video dives into the fascinating world of LEGO engineering, from why every set starts with a price point to the literal oven test they go through before release. Check it out!
An asteroid discovered late last year is continuing to stir public interest as its odds of striking planet Earth less than eight years from now continue to increase.
Two weeks ago, when Ars first wrote about the asteroid, designated 2024 YR4, NASA’s Center for Near Earth Object Studies estimated a 1.9 percent chance of an impact with Earth in 2032. NASA’s most recent estimate has the likelihood of a strike increasing to 3.2 percent. Now that’s not particularly high, but it’s also not zero.
Naturally, the prospect of a large ball of rock tens of meters across striking the planet is a little worrisome. This asteroid is large enough to cause localized devastation near its impact site, likely on the order of the Tunguska event of 1908, which leveled some 500 square miles (1,295 square kilometers) of forest in remote Siberia.
To understand why the odds from NASA are changing and whether we should be concerned about 2024 YR4, Ars connected with Robin George Andrews, author of the recently published book How to Kill an Asteroid. Good timing with the publication date, eh?
Ars: Why are the impact odds increasing?
Robin George Andrews: The asteroid’s orbit is not known to a great deal of precision right now, as we only have a limited number of telescopic observations of it. However, even as the rock zips farther away from Earth, certain telescopes are still managing to spy it and extend our knowledge of the asteroid’s orbital arc around the sun. The odds have fluctuated in both directions over the last few weeks, but overall, they have risen; that’s because the amount of uncertainty astronomers have as to its true orbit has shrunk, but Earth has yet to completely fall out of that zone of uncertainty. As a proportion of the remaining uncertainty, Earth is taking up more space, so for now, its odds are rising.
Think of it like a beam of light coming out of the front of that asteroid. That beam of light shrinks as we get to know its orbit better, but if Earth is yet to fall out of that beam, it takes up proportionally more space. So, for a while, the asteroid’s impact odds rise. It’s very likely that, with sufficient observations, Earth will fall out of that shrinking beam of light eventually, and the impact odds will suddenly fall to zero. The alternative, of course, is that they’ll rise close to 100 percent.
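To make that "shrinking beam" intuition concrete, here is a minimal Monte Carlo sketch. It is a toy model rather than the method NASA's Center for Near Earth Object Studies actually uses, and the Earth "target" width, true miss distance, and uncertainty values are arbitrary numbers chosen only for illustration.

```python
# Toy illustration (not NASA's actual method): estimated impact odds can rise
# as the orbital uncertainty shrinks, so long as Earth stays inside the zone.
import numpy as np

rng = np.random.default_rng(0)
earth_halfwidth = 1.0        # arbitrary units: the "target" Earth occupies
true_miss_distance = 4.0     # hypothetical true miss distance (unknown to observers)

# Simulate successively better observations: the 1-sigma uncertainty shrinks.
for sigma in [60.0, 30.0, 15.0, 8.0, 5.0, 3.0, 1.5, 0.8]:
    # Sample possible trajectories consistent with the current uncertainty.
    samples = rng.normal(loc=true_miss_distance, scale=sigma, size=200_000)
    p_impact = np.mean(np.abs(samples) < earth_halfwidth)
    print(f"sigma={sigma:5.1f}  P(impact) ~ {p_impact:.3f}")

# The printed probabilities first climb while Earth remains inside the
# shrinking uncertainty zone, then fall toward zero once the zone no longer
# covers Earth.
```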
What are we learning about the asteroid’s destructive potential?
The damage it could cause would be localized to a roughly city-sized area, so if it hits the middle of the ocean or a vast desert, nothing would happen. But it could trash a city, or completely destroy much of one, with a direct hit.
The key factor here (if you had to pick one) is the asteroid’s mass. Each time the asteroid gets twice as long (presuming it’s roughly spherical), it brings with it 8 times more kinetic energy. So if the asteroid is on the smaller end of the estimated size range—40 meters—then it will be as if a small nuclear bomb exploded in the sky. At that size, unless it’s very iron-rich, it wouldn’t survive its atmospheric plunge, so it would explode in mid-air. There would be modest-to-severe structural damage right below the blast, and minor to moderate structural damage over tens of miles. A 90-meter asteroid would, whether it makes it to the ground or not, be more than 10 times more energetic; a large nuclear weapon blast, then. A large city would be severely damaged, and the area below the blast would be annihilated.
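For readers who want to check the scaling Andrews describes, the arithmetic is straightforward: kinetic energy tracks mass, and for a roughly spherical rock of fixed density, mass grows with the cube of the diameter. A quick sketch, assuming equal density and impact speed for both sizes:

```python
# Back-of-envelope check of the size-to-energy scaling described above.
# Doubling the diameter gives 2**3 = 8x the energy, and a 90 m rock carries
# roughly (90/40)**3 ~ 11x the energy of a 40 m one.
def relative_energy(d_new_m: float, d_ref_m: float) -> float:
    """Energy of a d_new_m asteroid relative to a d_ref_m one (same density, speed)."""
    return (d_new_m / d_ref_m) ** 3

print(relative_energy(80, 40))   # 8.0   -> "twice as long, 8 times the energy"
print(relative_energy(90, 40))   # ~11.4 -> "more than 10 times more energetic"
```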
Microsoft has announced that it has created the first ‘topological qubits’ — a way of storing quantum information that the firm hopes will underpin a new generation of quantum computers. Machines based on topology are expected to be easier to build at scale than competing technologies, because they should better protect the information from noise. But some researchers are sceptical of the company’s claims.
The announcement came in a 19 February press release containing few technical details — but Microsoft says it has disclosed some of its data to selected specialists in a meeting at its research centre in Santa Barbara, California. “Would I bet my life that they’re seeing what they think they’re seeing? No, but it looks pretty good,” says Steven Simon, a theoretical physicist at the University of Oxford, UK, who was briefed on the results.
At the same time, the company published intermediate results — but not the proof of the existence of topological qubits — on 19 February in Nature.
Superconducting wire
Topological states are collective states of the electrons in a material that are resistant to noise, much like how two links in a chain can be shifted or rotated around each other while remaining connected.
The Nature paper describes experiments on a superconducting ‘nanowire’ device made of indium arsenide. The ultimate goal is to host two topological states called Majorana quasiparticles, one at each end of the device. Because electrons in a superconductor are paired, an extra, unpaired electron will be introduced, forming an excited state. This electron exists in a ‘delocalized’ state, which is shared between the two Majorana quasiparticles.
The paper reports measurements suggesting that the nanowire does indeed harbour an extra electron. These tests “do not, by themselves” guarantee that the nanowire hosts two Majorana quasiparticles, the authors warn.
According to the press release, the team has carried out follow-up experiments in which they paired two nanowires and put them in a superposition of two states — one with the extra electron in the first nanowire, and the other with the electron in the second nanowire. “We’ve built a qubit and shown that you can not only measure parity in two parallel wires, but a measurement that bridges the two wires,” says Microsoft researcher Chetan Nayak.
“There’s no slam dunk to know immediately from the experiment” that the qubits are made of topological states, says Simon. (A claim of having created Majorana states, made by a Microsoft-funded team based in Delft, the Netherlands, was retracted in 2021.) The ultimate proof will come if the devices perform as expected once they are scaled up, he adds.
Early announcement
Some researchers are critical of the company’s choice to publicly announce the creation of a qubit without releasing detailed evidence. “If you have some new results not connected to this paper, why don’t you wait until you have enough material for a separate publication?” says Daniel Loss, a physicist at the University of Basel, Switzerland. “Without seeing the extra data from the qubit operation, there is not much one can comment,” says Georgios Katsaros, a physicist at the Institute of Science and Technology Austria in Klosterneuburg.
“We are committed to open publication of our research results in a timely manner while also protecting the company’s IP [intellectual property],” says Nayak.
Microsoft has also shared a roadmap for scaling up its topological machines and demonstrating that they can perform quantum calculations. Vincent Mourik, a physicist at the Helmholtz Research Centre in Jülich, Germany, whose concerns helped to lead to the earlier retraction, is sceptical of the whole concept. “At a fundamental level, the approach of building a quantum computer based on topological Majorana qubits as it is pursued by Microsoft is not going to work.”
“As we perform more types of measurements, it will become harder to explain our results with non-topological models,” says Nayak. “There may not be one single moment when everyone will be convinced. But non-topological explanations will require more and more fine-tuning.”
This article is reproduced with permission and was first published on February 19, 2025.
In the future, Microsoft suggests, you may be playing AI: not fighting AI enemies on a battlefield, but playing games that use AI to simulate the entire game itself.
As a first step, Microsoft has developed an AI model, called WHAM, that “beta tests” games early in the development cycle using AI instead of human players.
Gamers know that realistic AI can turn a good game into something great, like how the older F.E.A.R. games would realistically model how soldiers might react to a hostile, armed player. Microsoft’s World and Human Action Model (WHAM) takes the opposite approach — it tries to figure out how human players will react in a given situation, right down to a specific frame or setup within the existing game world. Microsoft refers to this WHAM model as “Muse.”
The point of Muse’s WHAM, Microsoft said, wasn’t necessarily to improve the way NPCs or in-game monsters react to players. Instead, WHAM was developed to make a game “feel right” — not too hard, not too easy, with interactions that felt realistic. That’s something that normally takes hours upon hours of beta testing and evaluating how gamers interact with the environment. WHAM was designed to help automate that, the company said.
Simulating video games with Muse’s WHAM
Microsoft said Wednesday that it has released the WHAM model on huggingface.com, alongside a “WHAM Demonstrator” that essentially places the AI player in a specific spawn location, then tests and evaluates what would happen if the AI made different decisions. Microsoft also published a paper describing WHAM in the scientific journal Nature; the paper was made available to PCWorld before publication.
To develop the model, Microsoft used about 500,000 anonymized gaming sessions (across all seven of the game’s maps) from Ninja Theory’s Bleeding Edge, a 4v4 multiplayer combat game that Ninja Theory released in 2020 but halted development on less than a year later. Each frame of a session was reduced to 300×180 resolution, then encoded into 540 AI tokens. Likewise, each Xbox controller motion, including button presses, was reduced to one of 16 discrete inputs based on stick direction and buttons.
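The article doesn’t spell out the encoder or action mapping themselves, but a hypothetical sketch helps make those numbers concrete: each 300×180 frame becomes 540 discrete tokens, and each controller state becomes one of 16 discrete action IDs. The encode_frame and encode_action functions below are stand-ins invented for illustration, not the real WHAM pipeline.

```python
# Hypothetical sketch of the kind of preprocessing described above; the actual
# WHAM encoder and action mapping are Microsoft's and are not detailed here.
import numpy as np

FRAME_W, FRAME_H = 300, 180   # downscaled frame resolution cited in the article
TOKENS_PER_FRAME = 540        # discrete tokens per encoded frame
NUM_ACTIONS = 16              # discrete controller inputs (sticks + buttons)

def encode_frame(frame_rgb: np.ndarray, codebook_size: int = 4096) -> np.ndarray:
    """Stand-in for a learned image tokenizer: maps one 300x180 RGB frame to
    540 integer tokens. A real system would use a trained encoder; here we
    just reduce pixel patches to integers to make the shapes concrete."""
    assert frame_rgb.shape == (FRAME_H, FRAME_W, 3)
    patches = frame_rgb.reshape(TOKENS_PER_FRAME, -1)     # 540 patches
    return patches.sum(axis=1).astype(np.int64) % codebook_size

def encode_action(stick_xy: tuple, buttons: int) -> int:
    """Stand-in mapping of a controller state to one of 16 discrete action IDs
    (8 stick directions x button pressed / not pressed); the real mapping is
    not described in the article."""
    x, y = stick_xy
    direction = int(np.degrees(np.arctan2(y, x)) // 45) % 8   # 8 stick directions
    return direction * 2 + (1 if buttons else 0)

frame_tokens = encode_frame(np.zeros((FRAME_H, FRAME_W, 3), dtype=np.uint8))
action_token = encode_action((0.7, 0.1), buttons=1)
print(frame_tokens.shape, action_token)   # (540,) and an int in [0, 15]
```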
Microsoft said that the GIF below was generated by the Muse WHAM.
Microsoft encoded all of this gameplay into a 1.6-billion-parameter model, condensing essentially seven entire years of gameplay into a single transformer. The company also developed smaller models trained on a single map, Skygarden, using 128×128 images instead, with parameter counts ranging from 15 million to 894 million. (In AI, a larger number of parameters usually generates more realistic outcomes, at the cost of additional computing resources.)
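As a rough sanity check (our arithmetic, not Microsoft’s), seven years of gameplay spread across roughly 500,000 sessions works out to matches only a few minutes long, which is plausible for a short-match multiplayer title like Bleeding Edge:

```python
# Rough back-of-envelope check of "seven years of gameplay" from ~500,000
# sessions; assumes the sessions average out and uses a 365.25-day year.
sessions = 500_000
years_of_play = 7
hours_of_play = years_of_play * 365.25 * 24              # ~61,362 hours
minutes_per_session = hours_of_play * 60 / sessions
print(f"~{minutes_per_session:.1f} minutes per session on average")  # ~7.4 minutes
```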
Microsoft then built a concept prototype, known as the “WHAM Demonstrator,” a sort of AI chatbot-style interface built on the WHAM model. The user was able to “place” the AI player on a map, in relation to various objects around it. When enabled, the WHAM Demonstrator then sketched out how a “human” player would likely respond. The developer could run and re-run the Demonstrator to see various outcomes, then select one to continue and watch how the AI “human” would respond next.
Microsoft’s Muse WHAM demonstration shows how the model can begin at the same frame and then end up in different places depending upon what decisions the AI makes.
From its training, the Demonstrator understood the gameplay rules and physics, though it took more training iterations to understand that some players could achieve flight, depending upon game conditions.
The idea is that the WHAM Demonstrator could be used to run different scenarios from the same starting point. In the Nature paper, Microsoft showed how WHAM, beginning with the same eight frames, could produce 16 widely divergent endpoints, based on the AI decisions that WHAM made. Even more interestingly, WHAM was developed so that users could add additional enemies or objects, and the AI would react accordingly.
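The snippet below is only an illustrative stand-in for that branch-and-continue workflow: start from the same eight frames, generate several divergent continuations, pick one, optionally edit the scene, and keep rolling. The DummyWorldModel class is hypothetical and does not reflect the actual WHAM or WHAM Demonstrator API.

```python
# Illustrative sketch of the branch-and-continue workflow described above.
# "DummyWorldModel" is a stand-in, not the real WHAM API; it only shows the
# control flow: same prompt frames in, several divergent continuations out.
import random

class DummyWorldModel:
    """Stand-in for a world model that predicts subsequent frames and actions."""
    def rollout(self, prompt_frames: list, steps: int, seed: int) -> list:
        rng = random.Random(seed)
        trajectory = list(prompt_frames)
        for _ in range(steps):
            # A real model would generate image tokens; we append a text label.
            trajectory.append(f"frame(move={rng.choice(['left', 'right', 'jump', 'attack'])})")
        return trajectory

model = DummyWorldModel()
prompt = [f"real_frame_{i}" for i in range(8)]       # the same 8 starting frames

# Generate 16 divergent continuations from the identical prompt.
branches = [model.rollout(prompt, steps=30, seed=s) for s in range(16)]

# A designer inspects the branches, picks one, optionally edits the scene
# (e.g. inserting an extra enemy), and continues rolling out from there.
chosen = branches[3]
chosen.append("inserted_enemy")
continued = model.rollout(chosen, steps=30, seed=99)
print(len(branches), len(continued))
```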
Microsoft says that its Muse WHAM model is sophisticated enough to react appropriately to changes made, such as injecting another enemy or object.
Forget fake frames: Is the future of gaming entirely AI?
Draw a line through WHAM/Muse into the future, and you arrive at a “game” that is generated more and more in real time using AI. According to Microsoft’s vice president of gaming AI, Fatima Kardar, that’s where Microsoft hopes to go — apparently following Google, which has already demonstrated consistent game worlds generated from a prompt.
“Today, countless classic games tied to aging hardware are no longer playable by most people,” Kardar said in a statement. “Thanks to this breakthrough, we are exploring the potential for Muse to take older back catalog games from our studios and optimize them for any device. We believe this could radically change how we preserve and experience classic games in the future and make them accessible to more players. To imagine that beloved games lost to time and hardware advancement could one day be played on any screen with Xbox is an exciting possibility for us.”
Microsoft is also exploring the idea of “modding” games using AI, and making those early experiences available to players via Copilot Labs.
Microsoft said, however, that it does not necessarily plan on using AI as part of game development. That will be up to the company’s creative leaders, Kardar said, and any AI work will be shared “earlier on” with players and creators.
The lifespan of data on a USB flash drive depends on many factors: Under ideal conditions, data should remain preserved on a high-quality USB stick for at least 10 years or even longer. But what exactly does that mean, and under what conditions does this hold true?
USB sticks or flash drives store data using NAND flash memory, in the form of binary values (zeros and ones) in memory cells. Interestingly, it is electrons trapped in a kind of “floating gate” that represent these values. But these electrons can “leak” over time. This causes the data to degrade because it becomes harder to read whether the charge state represents a one or a zero.
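As a rough illustration of why leaking charge eventually flips bits, consider a toy decay model; the leak rate and read threshold below are arbitrary numbers chosen for illustration, not measured NAND characteristics.

```python
# Toy model (arbitrary numbers) of why leaking charge makes bits harder to read:
# each cell's stored charge decays toward the read threshold, and once it
# crosses that threshold the controller reads the wrong bit.
import math

def remaining_charge(initial: float, years: float, leak_rate_per_year: float) -> float:
    """Simple exponential decay of floating-gate charge; real retention also
    depends on temperature, wear, and cell design."""
    return initial * math.exp(-leak_rate_per_year * years)

READ_THRESHOLD = 0.5    # normalized charge level separating "1" from "0"
for years in (1, 5, 10, 20):
    q = remaining_charge(initial=1.0, years=years, leak_rate_per_year=0.05)
    bit = 1 if q > READ_THRESHOLD else 0
    print(f"after {years:2d} years: charge {q:.2f} -> read as {bit}")
```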
USB sticks are ideal for storing data quickly and easily. For long-term archiving, they bring with them too many confounding variables. Tapes or optical discs are better alternatives.
There are several factors that can influence the lifespan of data on a USB drive: The quality of the NAND flash memory plays a role, as does the general workmanship of the stick. Cheaper models usually also have a shorter lifespan. Another factor is the number of write cycles, which describes how often data can be written and deleted.
With an increasing number of write cycles, the probability of data deterioration increases. Extreme temperatures as well as unfavorable storage conditions such as high humidity or dust can also shorten the lifespan of your data on the storage medium. If the stick is exposed to high temperatures for a long time, the electrons can “leak” faster, which can corrupt the data and lead to its loss.
The “floating gate” has been used as a technique for flash memory for quite a long time. However, due to various conditions, the electrons can “leak” over time, which can lead to data loss.
All in all, this does not make a USB stick the ideal storage medium for long-term storage of important data — certainly not as the only method. You cannot avoid regular backups on other storage media, such as an external drive. If you really want to back up data over a truly long period of time, you should even consider using archival tapes or optical media.
And remember: It’s never a good idea to store important data in just one place and on just one medium. Flash drives are best for nimble file transfers or for creating bootable media.
Researchers from Linköping University in Sweden believe they may have found a solution that could let solar manufacturers have their cake and eat it too. Using a newly designed recycling technique, the researchers were able to fully break down a perovskite solar cell at the end of its life cycle using only a water-based solvent. When they used that recycled material to create an entirely new solar cell, they found it maintained the same overall efficiency as the first non-recycled iteration. In theory, this process could be scaled up to help create fully recyclable, energy-efficient solar cells that don’t require environmentally harmful chemicals to break down. Reusing the same solar cells could also help bring down solar energy prices further over the long term.
“We can recycle everything—covering glasses, electrodes, perovskite layers, and also the charge transport layer,” Linköping University postdoctoral researcher and paper co-author Xun Xiao said in a statement. The researchers published their findings this week in the journal Nature.
Researchers replaced a toxic chemical process with water
Perovskite solar panels are derived from a family of crystalline materials that are valued for their high energy conversion and low production costs. (These types of cells are able to convert about 25% of solar energy into electricity, compared to 15-20% for most traditional silicon-based cells.) The standard approach for dismantling perovskite solar panels for recycling requires soaking them in dimethylformamide, a chemical most commonly found in paint solvents. This approach, the researchers note, isn’t ideal because it leads to potentially hazardous chemicals leaching into the environment.
“We need to take recycling into consideration when developing emerging solar cell technologies,” Linköping University professor and paper coauthor Feng Gao said in a statement. “If we don’t know how to recycle them, maybe we shouldn’t put them on the market at all.”
The researchers took a different approach and opted instead to create a nontoxic, water-based solvent that included sodium acetate, sodium iodide, and hypophosphorous acid additives. Sodium acetate was introduced to help break down the solar cell’s individual materials. Sodium iodide, by contrast, was added to help reform the separated perovskite crystals so that they could be used again later to create a new solar cell. The hypophosphorous acid was included to help keep the solution stable over time. Researchers heated the water to 80 degrees Celsius for 20 minutes before submerging the cell to further aid in the dismantling process. The newly recycled perovskite crystals and remaining liquid were then separated by running them through a centrifuge spinning at 5,000 rpm for three minutes.
With that process complete, the researchers were then able to use that recycled material to create a new solar cell. Crucially, the new cell was just as energy-efficient as the one prior to recycling. The researchers were able to repeat this process several more times without the newer cells losing their energy output. Those findings suggest the researchers’ “eco-friendly” water solution approach could extend the life of next-generation perovskite-based solar panels by several multiples. The researchers estimate their approach reduced overall resource depletion by 96.6% compared to fresh solar panels tossed in a landfill after one life cycle.
While it’s still not completely clear how this water-based recycling approach will fare when ramped up to a large industrial scale, the water method offers a possible avenue to make future renewable energy infrastructure more sustainable. The findings come at a crucial moment. Soaring international electricity demand, jolted forward by massive, power-hungry AI data centers, means the world will need to find a way to quickly generate new energy. Though much of that demand will likely be met by fossil fuels, highly recyclable solar cells could help drive down solar prices, which in turn may make solar more financially attractive.
The first humans to travel to Mars might someday ride a rocket propelled by a nuclear reactor to their destination. But nuclear thermal propulsion (NTP) technologies still have quite a way to go before we can blast astronauts through space on a nuclear rocket.
However, earlier this month, General Atomics Electromagnetic Systems (GA-EMS), in collaboration with NASA, achieved an important milestone on the road to using NTP rockets. At NASA’s Marshall Space Flight Center in Alabama, General Atomics tested a new NTP reactor fuel to find out if the fuel could function in the extreme conditions of space.
According to company leadership, the tests showed that the fuel can withstand the harsh conditions of spaceflight. "We’re very encouraged by the positive test results proving the fuel can survive these operational conditions, moving us closer to realizing the potential of safe, reliable nuclear thermal propulsion for cislunar and deep space missions," General Atomics president Scott Forney said in a statement.
To test the fuel, General Atomics took the samples and subjected them to six thermal cycles that used hot hydrogen to rapidly raise the temperature to 2,600 kelvins (about 4,220 degrees Fahrenheit). Any nuclear thermal propulsion fuel aboard a spacecraft would have to be able to survive extreme temperatures and exposure to hot hydrogen gas.
To test how well the fuel could withstand these conditions, General Atomics conducted additional tests with varying protective features to gather further data on how different material enhancements improved the fuel’s performance under conditions similar to those in a nuclear reactor. According to the company, these types of tests were a first.
"To the best of our knowledge, we are the first company to use the compact fuel element environmental test (CFEET) facility at NASA MSFC to successfully test and demonstrate the survivability of fuel after thermal cycling in hydrogen representative temperatures and ramp rates," Christina Back, vice president of General Atomics Nuclear Technologies and Materials, said in the same statement.
NASA and General Atomics tested the fuel by exposing it to temperatures of up to 3,000 kelvins (4,940 degrees Fahrenheit, or 2,727 degrees Celsius), finding that it performed well even at temperatures that high. According to Back, this means an NTP system using the fuel could operate two to three times more efficiently than current rocket engines.
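For reference, the temperature figures quoted above check out; a quick conversion script makes the arithmetic explicit.

```python
# Quick check of the temperature figures quoted above (our arithmetic).
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    return (k - 273.15) * 9 / 5 + 32

for k in (2600, 3000):
    print(f"{k} K = {kelvin_to_celsius(k):.0f} C = {kelvin_to_fahrenheit(k):.0f} F")
# 2600 K ~ 2327 C ~ 4220 F; 3000 K ~ 2727 C ~ 4940 F
```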
One of the main reasons why NASA wants to build NTP rockets is that they could be much faster than the rockets we use today, which are propelled by traditional chemical fuel.
A faster transit time could reduce risks for astronauts, as longer trips require more supplies and more robust systems to support the astronauts while they travel to their destination. There is also the issue of radiation; the longer astronauts are in space, the more cosmic radiation they are subjected to. Shorter flight times could reduce these risks, making the possibility of deep space human spaceflight closer to reality.
In 2023, NASA and the Defense Advanced Research Projects Agency (DARPA) announced they are working together on a nuclear thermal rocket engine so that NASA can one day send a crewed spacecraft to Mars. The agencies hope to launch a demonstration as early as 2027.