OpenAI is reportedly preparing to launch Operator sometime this week. Operator is the name of its computer-use agent, which can complete tasks in a user's web browser on their behalf. Other companies, including Google and Anthropic, have been developing similar "agents" in the hope that they will be the next major leap toward AI fulfilling its promise of performing tasks currently done by humans.
According to The Information, which first reported on the impending launch, Operator will provide users with suggested prompts in categories like travel, dining, and events. Users could, for instance, ask Operator to find a good flight from New York to Maui that would not have them landing too late in the evening. Operator will not complete a transaction on its own; the user will remain in the loop and complete the checkout process.
It is easy to imagine ways Operator could be useful. Aging individuals who are not computer savvy could ask Operator to help them send an email and watch it navigate to Gmail and open a compose window for them. Tech-savvy people do not need this kind of help, but older generations often struggle to navigate the web, and completing even simple tasks can be a challenge. Bots could help in other areas as well, such as quality-assurance testing, where companies need to verify that new websites or services work properly.
So-called “computer use agents” do come with potential risks. We have already seen a startup introduce a web-navigating bot to automate the process of posting marketing spam to Reddit. Bots that take control of the end-user client are able to bypass API limitations meant to block automation. AI startups will need to take some measures to combat abuse, or else websites will become even more flooded with spam than they are today.
Agents like Operator essentially work by taking screenshots of a user's browser and sending the images back to OpenAI for analysis. Once its models determine the next step needed to complete a task, a command is sent back to the browser to move and click the mouse on the appropriate target, or to type into an input box. The approach takes advantage of the multimodal technology OpenAI and others have been developing, which can interpret multiple forms of input, in this case text and imagery.
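The observe-decide-act loop described above can be sketched in a few lines. Everything here is illustrative: the class names, action format, and toy stand-ins are our invention, not OpenAI's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""
    result: str = ""

def run_agent(task, browser, model, max_steps=20):
    """One observe-decide-act cycle per step, until the model reports 'done'."""
    for _ in range(max_steps):
        screenshot = browser.capture_screenshot()      # image sent back for analysis
        action = model.next_action(task, screenshot)   # multimodal model picks the next step
        if action.kind == "done":
            return action.result
        if action.kind == "click":
            browser.click(action.x, action.y)
        elif action.kind == "type":
            browser.type_text(action.text)
    raise TimeoutError("step budget exhausted without finishing the task")

# Toy stand-ins so the loop can run end to end: the "model" simply scripts
# a click, some typing, then declares the task done.
class ToyBrowser:
    def __init__(self):
        self.log = []
    def capture_screenshot(self):
        return b"fake-pixels"
    def click(self, x, y):
        self.log.append(("click", x, y))
    def type_text(self, text):
        self.log.append(("type", text))

class ToyModel:
    def __init__(self):
        self.script = [Action("click", x=10, y=20),
                       Action("type", text="New York to Maui"),
                       Action("done", result="flight options listed")]
    def next_action(self, task, screenshot):
        return self.script.pop(0)

browser = ToyBrowser()
outcome = run_agent("find a flight", browser, ToyModel())
```

The real systems differ in every detail, but the shape of the loop (screenshot in, mouse-and-keyboard action out) is the core idea.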
The entire promise of the recent crop of AI startups is that they will be able to create an artificial general intelligence (AGI) that can replace humans on most of the tasks they perform today and make everyone's lives more efficient. As exponential gains in the performance of language models have slowed, these companies have been looking for new unlocks, and computer-use agents are one of them. An artificial intelligence cannot truly replace humans until it can physically complete tasks for them; writing is just one part of a task. Bots also need to be able to navigate spreadsheets, watch videos, and more.
After Anthropic released an initial preview of its computer-use bot, early testers complained it was half-baked at best, getting stuck in loops when it did not know what to do, or forgetting the task and starting to do something else entirely, like looking at pictures of nature on Google Images. It was also slow and expensive to operate.
Keeping humans in the loop will be essential for a bot granted such high-level control and access to critical data. Computer-use agents may turn out to be akin to self-driving cars: Google was able to make a car drive down a straightaway on its own easily enough, but the edge cases have taken years to solve.
There is debate over how to measure AGI and when it will be "achieved," but OpenAI has told its biggest backer, Microsoft, that it believes AGI will be reached once it has created an AI that can generate at least $100 billion in profit. That is a lofty goal considering OpenAI predicts it will generate $12 billion in revenue in 2025 while still losing billions.
At the same time, neither Microsoft nor Google has seen enterprise customers adopt AI tools as quickly as hoped. Instead of charging an extra $20 to $30 per employee for AI add-ons, both companies are now folding AI into their standard bundles and raising prices by a few dollars.
California-based Vast Space has big ambitions. The company is aiming to launch a commercial space station, the Haven-2, into low Earth orbit by 2028, which would allow astronauts to stay in space after the decommissioning of the International Space Station (ISS) in 2030. In doing so, it is attempting to muscle in on NASA’s plans to develop commercial low-orbit space stations with partner organizations—but most ambitious of all are Vast Space’s goals for what it will eventually put into space: a station that has its own artificial gravity.
“We know that in weightlessness we can live a year or so, and in conditions that are not easy. Perhaps, however, lunar or Martian gravity is enough to live comfortably for a lifetime. The only way to find out is to build stations with artificial gravity, which is our long-term goal,” says Max Haot, Vast’s CEO.
Vast Space was founded in 2021 by 49-year-old programmer and businessman Jed McCaleb, the creator of the peer-to-peer networks eDonkey and Overnet, as well as the early and now defunct crypto exchange Mt. Gox. Vast Space announced in mid-December a partnership with SpaceX to launch two missions to the ISS, which will be milestones in the company’s plan to launch its first space station, Haven-1, later in 2025. The missions, still without official launch dates, will fall within NASA’s private astronaut missions program, through which the space agency wants to promote the development of a space economy in low Earth orbit.
For Vast, this is part of a long-term business strategy. “Building an outpost that artificially mimics gravity will take 10 to 20 years, as well as an amount of money that we don’t have now,” Haot admits. “However, to win the most important contract in the space station market, which is the replacement of ISS, with our founder’s resources, we will launch four people on a [SpaceX] Dragon in 2025. They will stay aboard Haven-1 for two weeks, then return safely, demonstrating to NASA our capability before any competitor.”
Space for One More?
What Vast Space is trying to do, by showing its capabilities, is get involved in NASA’s Commercial Destinations in Low Earth Orbit (CLD) program, a project the space agency inaugurated in 2021 with a $415 million grant to support the development of private low-Earth orbit stations.
The money was initially allocated to three different projects: one from aerospace and defense company Northrop Grumman, which has since exited the program; a joint venture called Starlab; and Orbital Reef, from Jeff Bezos' Blue Origin. Vast has no contract with the US space agency, but it aims to outstrip its competitors by showing NASA that it can put a space station into orbit ahead of the others. The agency will choose which project's station to back in the second half of 2026.
By doing this, Vast is borrowing from SpaceX’s playbook. Not only has Vast Space drawn some of its employees and the design of equipment and vehicles from Elon Musk’s company, it’s also trying to replicate its approach to market: to be ready before anyone else, by having technologies and processes already qualified and validated in orbit. “We are lagging behind,” Haot says. “What can we do to win? Our answer, in the second half of 2025, will be the launch of Haven-1.”
Haven-1 will have a habitable volume of 45 cubic meters, a docking port, a corridor with consumable resources for the crew’s personal living quarters, a laboratory, and a deployable communal table set up next to a domed window about a meter high. On board, roughly 425 kilometers above Earth’s surface, the station will use Starlink laser links to communicate with satellites in low Earth orbit, tech that was first tested during the Polaris Dawn mission in the autumn of 2024.
Imagine: A switch is flicked and, in a heartbeat, every process spewing deadly pollution into the heavens is replaced with something clean and sustainable. Sadly, even then, the Earth would still tip towards being uninhabitable thanks to all of the carbon we’ve already dumped up there. If we as a species are to survive then all of that junk needs to be pulled back to Earth, and fast. Proponents of Direct Air Capture believe it’s a vital weapon to accomplish that task; its critics say it’s so inefficient that we’d be better off trying anything else first.
Direct Air Capture
Put simply, Direct Air Capture (DAC) is the practice of removing CO2 from the atmosphere by pulling air through a mechanical or chemical filter. Air is typically drawn through a DAC system via one or more fans, while filtering is done with a solid (known as a sorbent) or with a liquid (known as a solvent). Once captured, heat or electricity is applied to the filter material to remove the CO2, both to re-use the filter and get the CO2 ready to move on. It’s this last stage that’s often the most energy-intensive, and therefore costly, part of the process. Given the amount of air that will need to be cleaned (all of it) for this to work, DAC needs to be as energy efficient as possible.
The most cost-effective way to capture carbon is to cap the smokestacks of a carbon-intensive process, like a factory or a fossil fuel power plant, preventing more CO2 from being released. But that does nothing to reduce the excess CO2 already in the atmosphere, which is why some scientists and entrepreneurs are inclined to gamble on open-air DAC plants to scrub the heavens clean.
The NOAA explains that in 1960, humanity was pumping 11 billion tons of carbon dioxide into the air each year. Half a century later, that figure stands closer to 40 billion, which is why emissions-reduction work is so vital. But even if we managed to reduce all of our new emissions to zero, we would still have to address the 950 gigatons or so of CO2 already lurking in the atmosphere. At the time of writing, atmospheric CO2 as recorded by NOAA's Global Monitoring Lab at Mauna Loa stands at 422.38 ppm; the scientific consensus is that any figure over 350 ppm spells catastrophe for humanity and the state of the planet more generally.
This June, the University of Oxford published research saying that if we want to limit warming to 1.5 degrees (which would still be damaging), humanity will need to extract between seven and nine billion tons of carbon dioxide from the air each year by 2050. The COP28 declaration supports signatory nations throwing their weight behind carbon capture technologies. The Intergovernmental Panel on Climate Change (IPCC) says there is no viable pathway to averting climate change unless large volumes of CO2 are pulled from the air. This has been the status quo for a while: In 2017, a coalition of prominent scientists led by Professor Jim Hansen said it was imperative that humanity began mass-removing atmospheric CO2.
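A quick back-of-the-envelope check using the figures above: drawing down the roughly 950 gigatons of excess CO2 at the Oxford study's target rate of seven to nine gigatons per year would itself take more than a century. The numbers are the article's; the arithmetic is ours.

```python
# Rough timeline: excess atmospheric CO2 divided by a constant annual removal rate.
EXCESS_CO2_GT = 950  # gigatons already in the atmosphere (per the NOAA figures above)

def years_to_remove(rate_gt_per_year, excess_gt=EXCESS_CO2_GT):
    """Years needed to draw down the excess at a constant removal rate."""
    return excess_gt / rate_gt_per_year

fast = years_to_remove(9)   # ~105 years at the optimistic end of the target
slow = years_to_remove(7)   # ~136 years at the conservative end
```

This also assumes, generously, that new emissions have already hit zero; any ongoing emissions stretch the timeline further.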
What to do with all the CO2
Once DAC has sucked the unwanted carbon out of the air, it needs to be put somewhere. One option, the British Geological Survey explains, is to convert the CO2 to its supercritical form, which behaves like a runny liquid. This can then be injected into porous rock formations for underground storage, with old oil fields and coal seams appearing to be ideal sites. The oil and gas industry already uses this approach to boost production in existing fields, as the injected CO2 fills pore space and pushes more oil toward the extraction site. But the International Energy Agency's (IEA) briefing paper on Direct Air Capture suggests more than half of all recovered atmospheric CO2 will need to be sequestered.
Obviously, getting more fossil fuels out of the ground to burn does not do very much for the climate, and ideally the governments of the world would just invest in effective carbon capture to prevent us from boiling to death. Fortunately for humanity’s fixation on market solutions, recycling some of the non-sequestered CO2 could become an industry unto itself.
CO2 can also be turned into synthetic fuels that work in traditional combustion engines. Air travel is the most obvious use case, given that the size and weight of batteries make an electric jumbo jet all but impossible. Recovered CO2 can also serve as the base for common non-fuel products, including construction materials and chemical and agricultural products, not to mention putting the fizz in our drinks.
Holocene is one of many companies looking to turn CO2 extraction into a viable, long-term business by selling carbon-removal credits to big businesses. Its approach is to pull air through water embedded with an amino acid that binds to CO2. The water-CO2 mix is then combined with guanidine, which turns the CO2 into a solid that can be easily filtered out, allowing the amino acid water to be reused. The solid is then heated at a low temperature, separating the guanidine from gaseous CO2, which is ready for use or sequestration. Holocene believes its reusable solvent (and reusable chemical treatment), combined with low-temperature heat, makes its approach far more cost-effective than its rivals'.
Mission Zero is also looking to develop a low-cost way of pulling large quantities of CO2 from the atmosphere. It draws air into its hardware and applies a water-based solvent, but rather than treating the mix chemically, it uses electrodialysis and an ion-exchange process to purify the liquid and extract the CO2. From there, the liquid can be reused, and the CO2, again, can either be buried underground or turned into viable products. The company says its electrochemical process is likewise far more cost- and energy-efficient than those of many other companies operating in this space.
Given the commercial sensitivities involved, it's not easy to get a real handle on how much it costs to extract CO2 from open air using DAC. Depending on where you look, the figure can be as much as $600 per ton, though a more common estimate falls between $300 and $400. For years, the received wisdom has been that DAC needs to reach $100 per ton to become economically viable.
Earlier this year, Extantia Capital, a German climate-focused VC firm, went digging into the source of that $100 shibboleth and traced it back to a 2018 paper from early DAC firm Carbon Engineering, which projected that its long-term cost would fall to as little as $94 per ton. Suddenly, the phrase "less than $100 per ton" became the benchmark to which all other DAC companies were held. But, as Extantia's Torben Schreiter wrote, that figure was pegged to 2016 dollars, so it has never been adjusted for inflation. In 2023, the World Economic Forum said the cost of Direct Air Capture had to fall "below $200 per ton" before it would be widely adopted.
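For illustration, compounding that 2016-era $94 figure forward at an assumed 3 percent average annual inflation rate (the rate is our assumption, not Schreiter's or an official CPI figure) shows how far the benchmark has drifted:

```python
# Hypothetical inflation adjustment; the 3% rate is an illustrative assumption.
def inflation_adjusted(price_usd, years, annual_rate=0.03):
    """Compound a price forward by a constant annual inflation rate."""
    return price_usd * (1 + annual_rate) ** years

# $94 in 2016 dollars, carried forward eight years to 2024
adjusted = inflation_adjusted(94, 2024 - 2016)  # roughly $119
```

In other words, a company hitting "under $100 per ton" in today's dollars would be beating Carbon Engineering's original projection, not merely matching it.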
Whether your aims are environmental or industrial, the volume of CO2 that needs to be extracted from the atmosphere is enormous, and for that to be viable, the cost of extraction needs to fall dramatically. A more mature benchmark would be for pricing to fall in line with, or below, the perpetually in-flux commodity price of carbon dioxide.
“All these DAC approaches use a bunch of energy,” said Holocene’s CEO Keeton Ross. Ross says it’s the cost of this energy that is keeping the price of Direct Air Capture higher than it needs to be. He believes heat-based systems (like Holocene’s) will likely win out in the end because heat can come from any number of affordable sources. These claims of being able to cut the costs of DAC were compelling enough that in September Google invested in Holocene and pledged to buy carbon credits from it in future.
Dr. Nicholas Chadwick, CEO of Mission Zero, told Engadget his company is targeting around $350 per ton by 2026, but that figure is "dependent on a specific price of electricity." That price, he believes, is "substantially better than what's available in the commodity market," making it a no-brainer for industries that are reliant on CO2 to start buying from Mission Zero.
Roadblocks
The obvious objection to Direct Air Capture is that while there's a lot of carbon dioxide in the atmosphere, it's still a relatively small proportion of the whole. I've heard the process described as panning for gold in the ocean, and critics argue the energy costs alone will make it unfeasible at the necessary scale. In 2022, the Institute for Energy Economics and Financial Analysis bluntly claimed the process "simply won't work." Part of the objection was that it can be (and is) used for enhanced oil recovery, but also that when DAC facilities are up and running, they're often far less effective at capturing CO2 than initially promised.
In 2023, a piece published by the Bulletin of the Atomic Scientists expressed outrage that the US Department of Energy had invested $600 million in one such project. Its authors argued that the energy cost of filtering that much air to extract just 0.04 percent of it far exceeds other, cheaper ways of reducing emissions, and that no dramatic improvement in the underlying physics and chemistry will make Direct Air Capture substantially more efficient. They said, bluntly, "It's just dumb to build today something that we won't need for 50 years, if ever."
Chadwick said a lot of the criticisms around DAC center on its technical feasibility, which he says is the wrong point. “There are tons of industrial processes where the thermodynamics are terrible, look at ammonia,” he said, “it took years and years to get the yields to where they are right now.” What drove those otherwise inefficient processes was the “economic imperative for it in the marketplace,” he said. “When someone proves they can do [Direct Air Capture] for $200 a ton, all of these arguments go away.”
Both Chadwick and Ross spoke about the importance of scale in accelerating the still-nascent industry. In 2023, Carbon Engineering, 1PointFive, and Occidental broke ground on the Stratos plant in Texas, which, when completed, is expected to pull 500,000 tons of CO2 out of the air per year. Both men are optimistic that the projects currently under construction will help engineers work out the remaining problems. But there is a long, long way to go before we reach the billions of tons experts believe we'll need to extract to have a hope of survival.
This article originally appeared on Engadget at https://ift.tt/BlTXryZ
In the future, oysters might help in the global fight against antibiotic-resistant bacteria. A protein found in the blood, or hemolymph, of the Sydney rock oyster (Saccostrea glomerata) appears to kill bacteria while also increasing the effectiveness of some antibiotics. The findings, detailed in a study published January 21 in the journal PLOS ONE, could inform new ways of fighting bacterial infections.
Over the past half-century, bacteria have developed resistance to many conventional antibiotics used to treat illnesses like pneumonia and strep throat due to their overuse and misuse. A 2024 study predicts that antimicrobial resistance will cause 40 million deaths by 2050. Discovering and developing new antibiotics is a crucial public health and safety measure.
“Most organisms have natural defence mechanisms to protect themselves against infection,” Kirsten Benkendorff, a study co-author and interdisciplinary marine scientist at Southern Cross University in Australia, said in a statement. “Oysters are constantly filtering bacteria from the water, so they are a good place to look for potential antibiotics.”
A 2024 study by the same team identified a protein in the hemolymph of the Sydney rock oyster that inhibits Streptococcus pneumoniae. This bacterium causes respiratory infections in humans, including tonsillitis and pneumonia, and is a leading cause of death in children under five and of hospitalization in older adults.
Bacteria like S. pneumoniae that cause infection often escape from antibiotics and the immune system by forming biofilms to protect themselves. These biofilms are communities of microorganisms that attach themselves to surfaces in a sticky, protective matrix. The study found that oyster blood can kill bacterial pathogens in the biofilms.
“The oyster hemolymph proteins were found to prevent biofilm formation and disrupt biofilms, so the bacteria remain available to antibiotic exposure at lower doses,” said Benkendorff. “The hemolymph contains a mixture of proteins with known antimicrobial properties. These may act to directly kill the bacteria, as well as preventing them from attaching to the cell surface.”
Crucially, the oyster hemolymph proteins were not toxic to human lung cells. This means it might be possible to optimize the proteins to create a safe and effective dose. While a new antibiotic developed from oyster blood is still quite some time away, the findings are a next step in developing new methods for treating serious infections.
“It provides great opportunities for collaboration between researchers, aquaculture and pharmaceutical industries,” said Benkendorff. “In the meantime, slurping oysters could help keep the respiratory bugs away. Oysters contain zinc which boosts the immune system and they have really good polyunsaturated fatty acids and vitamins that also help modulate immunity.”
German launch startup Rocket Factory Augsburg has taken a step towards a first launch by gaining a first-ever license to vertically launch an orbital rocket from mainland Europe.
Rocket Factory Augsburg (RFA) is working towards a first launch of its RFA ONE rocket from SaxaVord Spaceport on the Shetland Islands off the coast of Scotland. Being issued a spaceflight operator license by the UK Civil Aviation Authority (CAA) is a major administrative step towards reaching orbit. It is also a major first for the European mainland, as the continent’s launch sector enters a new era of commercialization.
"This is a groundbreaking moment for RFA and for Europe’s space industry," Jörn Spurmann, co-founder and Chief Commercial Officer of RFA, said in a statement. "Securing the first-ever launch license outside ESA’s established site in Kourou is not just a regulatory milestone — it’s a powerful endorsement of our technical excellence and a turning point for European space innovation. This license marks Europe’s bold step toward independent, competitive, and sustainable space access," Spurmann said.
"This license approval is a landmark moment, as it signals the start of vertical rocket launches from European soil," said Matt Archer, UK Space Agency Director of Launch, ISAM, and Space Sustainability. "The achievement, driven by effective collaboration between RFA, SaxaVord Spaceport, the regulator and government partners, highlights the growing strength of the UK's launch capabilities and our international relationships."
The license allows RFA to launch up to 10 times per calendar year, with no more than two launches in any month. The CAA's licensing process emphasizes public safety and environmental impact, requiring multiple assessments, including of the company's safety case. Any increase in launch cadence would require a new assessment.
RFA is now training its sights on the final technical preparations for the first test flight. The company performed a static-fire test of the RFA ONE rocket's first stage at SaxaVord Spaceport in August last year, but the test ended in a dramatic explosion, scuttling plans for a first flight in 2024. The company now appears on track for a launch in 2025.
The company is now focused on building the RFA ONE rocket’s first stage with nine Helix staged-combustion engines, and then conducting a full hot fire test on the launch pad at SaxaVord Spaceport, Scotland. All other systems, including the second stage, third stage (the Redshift orbital transfer vehicle) and the fairing are already flight qualified.
RFA did not provide a timeline for its planned first launch. Under the license, RFA must inform the CAA of its plans to launch 60 days ahead of the event.
The company is one of a number of launch startups in Europe working toward their first orbital flights. Others include Isar Aerospace and HyImpulse, also from Germany, PLD Space of Spain, and Orbex and Skyrora from the United Kingdom.
A CAA official told media during a briefing on the license and licensing process that the authority is currently assessing license applications from seven different launch companies, which it could not name.
SaxaVord is not the only European spaceport looking to host orbital launches: Norway's Andøya Spaceport and Sweden's Esrange Arctic spaceport are also attracting launch partners.
While those sites cater to vertical orbital launches, the UK has already hosted a "horizontal" launch: in January 2023, the now-defunct Virgin Orbit launched a LauncherOne rocket from its Cosmic Girl carrier plane, which took off from Spaceport Cornwall. That launch suffered an anomaly and ended in failure.
The issuing of the launch license points to growth in Europe's commercial space industry, and to rising innovation and competition in the sector.
Scientists are getting closer to something that wouldn’t look out of place in a science fiction film: bionic limbs that can sense and convey touch to their users.
In a new study published this week, researchers debuted a bionic hand system that can reportedly reproduce the most complex tactile sensations seen to date. Scientists at the Cortical Bionics Research Group developed the novel brain-computer interface (BCI) device, which was tested out by volunteers with spinal cord injuries.
Across a series of experiments, the researchers were able to translate and relay sensations tied to motion, curvature, and orientation that allowed the volunteers to perform complicated tasks with their bionic limb. The researchers say their device has now accomplished a new level of artificial touch.
There have been some important advances in prosthetic and bionic limb technology in recent years, but these limbs are currently still a long way away from fully approximating the complex nature of human touch. Some scientists have begun to use intracortical microstimulation (ICMS) of the brain’s somatosensory cortex to bridge this gap, since experiments have shown that such stimulation can produce vivid tactile sensations on people’s skin. According to study researcher Giacomo Valle, however, early attempts with ICMS have largely focused on reproducing sensation location and intensity. But there’s much more that goes into feeling something than just those two aspects.
“While contact location and force are critical feedback components, the sense of touch is far richer than this, also conveying information about the texture, material properties, local contours, and about the motion of objects across the skin. Without these rich sensations, artificial touch will remain highly impoverished,” Valle told Gizmodo. In their new study, published Thursday in Science, Valle and his team believe that they’ve gone a crucial step further with ICMS.
The researchers recruited two people with spinal cord injuries for their experiments. The volunteers were first given brain implants in the sensory and motor regions of the brain that govern the hands and arms. Via these implants, the researchers recorded and then deciphered the different patterns of electric activity produced by the volunteers’ brains as they thought about using their paralyzed limbs. The volunteers were then connected to a BCI device that acted as a bionic limb. With their thoughts alone, the volunteers could control the limb, which was outfitted with sensors that communicated with the brain implants. The researchers were then able to translate and send more complex sensations related to touch through the bionic limb into the volunteers’ brain implants.
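The closed loop the study describes (record motor intent, move the limb, feed touch back via stimulation) can be caricatured in a few lines. Every name and data shape here is our invention for illustration; the study's actual decoding and encoding pipeline is vastly more sophisticated.

```python
def bci_step(implant, decoder, limb, encoder):
    """One cycle: neural activity -> movement -> sensor readings -> stimulation."""
    spikes = implant.record()            # motor-cortex activity
    command = decoder.decode(spikes)     # intended movement
    readings = limb.execute(command)     # move the limb, read its touch sensors
    pattern = encoder.encode(readings)   # turn readings into ICMS pulse parameters
    implant.stimulate(pattern)           # evoke the tactile sensation
    return command, readings

# Toy stand-ins so one cycle can run end to end.
class ToyImplant:
    def __init__(self):
        self.stimulated = []
    def record(self):
        return [0.2, 0.8]                # fake firing rates
    def stimulate(self, pattern):
        self.stimulated.append(pattern)

class ToyDecoder:
    def decode(self, spikes):
        return "grasp" if spikes[1] > 0.5 else "rest"

class ToyLimb:
    def execute(self, command):
        # pretend a grasp brings the fingertip sensor into contact with an edge
        return {"contact": command == "grasp", "feature": "edge"}

class ToyEncoder:
    def encode(self, readings):
        return {"amplitude_uA": 60 if readings["contact"] else 0,
                "feature": readings["feature"] if readings["contact"] else None}

implant = ToyImplant()
command, readings = bci_step(implant, ToyDecoder(), ToyLimb(), ToyEncoder())
```

The advance the paper reports lives almost entirely in the encoder step: shaping stimulation so it conveys edges, curvature, and motion rather than just contact location and intensity.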
“In this work, for the first time, the research went beyond anything that has been done before in the field of brain-computer interfaces—we conveyed tactile sensations related to orientation, curvature, motion and 3D shapes for a participant using a brain-controlled bionic limb,” said Valle, a bionics researcher at Chalmers University of Technology. “We found a way to type these ‘tactile messages’ via microstimulation using the tiny electrodes in the brain, and we found a unique way to encode complex sensations. This allowed for more vivid sensory feedback and experience while using a bionic hand.”
The volunteers could not only feel more layered sensations like touching the edge of an object—these sensations felt as if they were coming from their own hands. The added input also appeared to make it easier for the volunteers to perform complex tasks with the bionic limb more accurately, such as moving an object from one place to another. And it’s this richness, Valle said, that “is crucial for achieving the level of dexterity, manipulation, and a highly dimensional tactile experience typical of the human hand.”
These are still early days, the researchers note. More complex sensors and robotic technology, such as prosthetic skin, will be needed to truly capture the sensations that researchers can now encode and convey to a user, Valle says, and more advanced brain implants will also be needed to increase the array of sensations that can be stimulated. But Valle and his team are hopeful that such advances can be made, and that a truly human-feeling bionic limb is well within the realm of possibility.
“Although many challenges remain, this latest study offers evidence that the path to restoring touch is becoming clearer. With each new set of findings, we come closer to a future in which a prosthetic body part is not just a functional tool, but a way to experience the world,” he said.
The immediate next phase of Valle and his team's research will be to test their BCI systems in more naturalistic settings, such as patients' homes. Their ultimate goal is to improve the independence and quality of life of people with disabilities.
Imagine armor as light as fabric yet stronger than steel, built from materials that link together like molecular chainmail. Scientists may have just taken the first step toward making it a reality.
A team of researchers led by Northwestern University scientists has developed what might be the first two-dimensional (2D) mechanically interlocked material, similar to links in chainmail. The material, detailed in a January 16 study published in the journal Science, is exceptionally flexible and strong, with promising applications in products such as lightweight body armor and ballistic fabrics.
The researchers built the material on a nanoscale level, meaning its individual components are measurable in nanometers. It’s technically a polymer: a substance made of large molecules, which are themselves made up of smaller chemical units called monomers. Examples of polymers include proteins, cellulose, and nucleic acids.
The 2D mechanically interlocked material is a polymer structure that uses mechanical bonds—bonds with physical interlocking, as opposed to, for example, covalent bonds, which usually make up polymers and involve the sharing of electrons. The material features 100 trillion mechanical bonds per 0.16 square inch (1 square centimeter), which is the highest density of mechanical bonds ever made, according to the researchers.
“We made a completely new polymer structure,” said study co-author William Dichtel of Northwestern University in a university statement. “It’s similar to chainmail in that it cannot easily rip because each of the mechanical bonds has a bit of freedom to slide around. If you pull it, it can dissipate the applied force in multiple directions. And if you want to rip it apart, you would have to break it in many, many different places. We are continuing to explore its properties and will probably be studying it for years.”
The biggest challenge in creating mechanically interlocked molecules lies in guiding polymers to form mechanical bonds. Madison Bardot of Northwestern University, who led the study, is credited with devising a new method to achieve this. The team arranged X-shaped monomers into a crystalline structure (a specific, ordered arrangement) and reacted the crystals with another molecule, creating mechanical bonds within the crystals. The end product is 2D layers of interlocked polymer sheets built from these bonds between X-shaped monomers, with the gaps filled by more X-shaped monomers.
“It was a high-risk, high-reward idea where we had to question our assumptions about what types of reactions are possible in molecular crystals,” said Dichtel. The resulting material is incredibly strong, yet still flexible and easy to manipulate, because the individual sheets of interlocked molecules come apart from each other when the polymer is dissolved in a solvent.
“After the polymer is formed, there’s not a whole lot holding the structure together,” he added. “So, when we put it in solvent, the crystal dissolves, but each 2D layer holds together. We can manipulate those individual sheets.”
While previous researchers had made mechanically bonded polymers only in small quantities that would be difficult to mass-produce, the team's new method is surprisingly scalable. They made over one pound (0.5 kilograms) of the material and suggest that even larger quantities are possible.
Even a small amount of the new polymer can improve other materials. The researchers made a composite of 97.5 percent Ultem fiber (an extremely tough material in the same family as Kevlar) and 2.5 percent of the 2D polymer, and found that the addition made the fiber significantly stronger.
“We have a lot more analysis to do, but we can tell that it improves the strength of these composite materials,” Dichtel continued. “Almost every property we have measured has been exceptional in some way.”
This incredibly strong and flexible material might just be the armor the future has been waiting for.