With summers now warmer than ever, enjoying the great outdoors inside a stiflingly hot tent is becoming less appealing than a vacation spent relaxing in an air-conditioned hotel room. That could soon change, however, as a University of Connecticut researcher has created a new fabric that could potentially cool the inside of a tent by up to 20 degrees Fahrenheit.
The fabrics currently used to make tents are engineered to block wind and water to keep their inhabitants dry and comfortable, but they tend to work both ways, preventing hot air inside the tent from escaping. That’s great when temperatures dip in the evening, but even with plenty of ventilation, on a hot summer’s day the inside of a tent can feel sweltering.
You can always pack a portable air conditioner to drop the temperature inside your tent, but those require one ingredient that’s often in short supply at rural campsites: electricity. A solar panel simply isn’t going to generate enough power to keep a portable AC unit, or even a simple fan, running indefinitely, and you don’t want to have to carry a backpack full of batteries.
Inspired by how “plants wick water from the ground and then sweat to cool themselves,” Al Kasani, a researcher at the University of Connecticut’s Center For Clean Energy Engineering, created a self-cooling tent fabric. It remains thin and light so a tent can still be easily packed down, but the fabric has been enhanced with titanium nanoparticles that pull water from reservoirs located at the base of a tent and spread it across the fabric’s surface, where it evaporates and creates a cooling effect that drops the temperature inside the tent by up to 20 degrees F.
Kasani estimates that just a gallon of water can keep a tent cool for up to 24 hours, and the effect will work with either water sourced from a faucet at a campsite or pulled from a stream in a more rural setting. In other words, the evaporative cooling isn’t going to stop working if you don’t use purified clean water.
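For a sense of scale, a rough back-of-envelope calculation shows how much heat a gallon of water can carry away as it evaporates. This is an idealized sketch, assuming roughly room-temperature water, its standard latent heat of vaporization, and that the entire gallon evaporates usefully over the full 24 hours:

```python
# Rough estimate of the cooling delivered by evaporating one US gallon
# of water over 24 hours (idealized: all of it evaporates usefully).
GALLON_KG = 3.785               # mass of 1 US gallon of water, in kg
LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water

energy_j = GALLON_KG * LATENT_HEAT_J_PER_KG   # total heat absorbed, joules
avg_power_w = energy_j / (24 * 3600)          # averaged over 24 hours

print(f"heat absorbed: {energy_j / 1e6:.1f} MJ")
print(f"average cooling power: {avg_power_w:.0f} W")
```

That works out to roughly 100 watts of continuous cooling, on the order of the heat given off by one resting adult: modest, but meaningful inside a small, shaded tent.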
Although it’s going to be a while before we see this upgraded fabric showing up in camping gear—the material is still in the research phases—according to the university, “industry interest in Kasani’s technology has been high,” and in a few years, if it goes mainstream, it could help make roughing it in high temperatures feel not so rough.
In late 2021, the Infrastructure Investment and Jobs Act passed, creating the National Electric Vehicle Infrastructure Formula Program (NEVI) to support the creation of a nationwide fast-charging network. There’s much work to do, as a new study from the Great Plains Institute shows a need for more than 1,000 DC fast-charging (DCFC) stations to meet the program’s goals.
The study looked at non-Tesla DCFC stations, of which there are 4,943 in the contiguous 48 states. Among them, only 509 stations meet the requirements laid out under the NEVI program, which include:
Charging stations must have at least four DCFC ports with CCS connectors and the ability to charge four EVs simultaneously at 150 kW each for a combined capacity of 600 kW or more.
Stations must be spaced no more than 50 miles apart on designated corridors and located within one mile of the corridor.
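Expressed as code, the two criteria above amount to a simple pass/fail check per station. Here is a minimal sketch; the field names like `ccs_ports` and `miles_to_corridor` are hypothetical, not drawn from the NEVI rules themselves:

```python
from dataclasses import dataclass

@dataclass
class Station:
    ccs_ports: int            # DC fast-charging ports with CCS connectors
    kw_per_port: float        # power each port can deliver simultaneously
    miles_to_corridor: float  # distance from the designated corridor
    miles_to_next: float      # gap to the next compliant station

def meets_nevi(s: Station) -> bool:
    # 4+ ports at 150 kW each (600 kW combined), within 1 mile of the
    # corridor, and no more than 50 miles from the next station.
    power_ok = s.ccs_ports >= 4 and s.kw_per_port >= 150
    location_ok = s.miles_to_corridor <= 1 and s.miles_to_next <= 50
    return power_ok and location_ok

print(meets_nevi(Station(4, 150, 0.5, 45)))  # meets both criteria
print(meets_nevi(Station(2, 350, 0.5, 45)))  # too few ports
```

Note that raw power alone isn’t enough: the second station delivers 700 kW combined but fails because the program requires four simultaneous CCS ports.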
Another 1,104 stations are required for there to be a compliant charging location every 50 miles of interstate highway, including a first phase of 1,084 chargers on highways designated as EV alternative fuel corridors and an additional 20 along other corridors. Alternative Fuel Corridors are proposed by states and recognized by the Federal Highway Administration (FHWA) as part of a network of alternative fuel sources, such as hydrogen, propane, EV charging, and natural gas.
Though the study identifies a need for more charging stations, it notes the possibility that the 4,434 non-compliant chargers could be upgraded to meet requirements. It also identifies 42,212 Level 2 public chargers in the country but did not include them in the data because of their long charging times.
The program set aside $5 billion and puts the buildout of the charging network in the states’ hands. States are expected to use NEVI money to build chargers along the designated corridors before moving on to other highways. The program rules note that a state can propose other non-designated areas, but the designated corridors have to be certified as meeting the two program criteria first. Even with this effort, the study notes that chargers located every 50 miles might not meet demand in high-traffic areas.
Genetic engineering company Colossal Biosciences said Tuesday that it will try to resurrect the extinct dodo bird, and it’s received $150 million in new funding to support its “de-extinction” activities.
Adding the dodo to its official docket brings Colossal’s total de-extinction targets to three, alongside the woolly mammoth (the company’s first target species, announced in September 2021) and the thylacine, a.k.a. the Tasmanian tiger, the largest carnivorous marsupial of modern times.
Colossal’s stated goal is not to simply bring these creatures back for vibes; its contention is that reintroducing the species to their respective habitats would help restore a certain amount of normalcy to those environments.
Mammoths died out about 4,000 years ago on Wrangel Island, off the northeastern coast of Russia. The dodo, a species of flightless bird native to the island of Mauritius, was gone by 1681. The last known thylacine died at a zoo in Tasmania in 1936. Scientists have sequenced the genomes of all three species—the mammoth’s in 2015, the dodo’s in 2016, and the thylacine’s in 2018.
The latter two species were driven to extinction by humankind: humans hunted the dodo, introduced predators and pests to its environment, and contributed to its habitat loss. Humans may have played a role in the mammoth’s extinction as well, but the dodo and the thylacine are classic examples of our ability to wipe out species at extraordinary speed.
Following European colonization of Tasmania, settlers cast the thylacine as a threat to sheep flocks (though this threat was hugely overblown), and the Tasmanian government eventually put a bounty on the marsupial’s head. Some experts believe the thylacine may have persisted in the wild for several decades after 1936, but the writing was on the wall for the iconic species.
Colossal also said it is creating an Avian Genomics Group, which will oversee the efforts to resurrect the stocky dodo. The blue-gray bird weighed as much as 50 pounds and had a distinctive curved beak. Perhaps due to the lack of natural predators on Mauritius, the dodo evolved to be flightless. Europeans encountered the birds in 1507, and within about 150 years the species was extinct.
If the company’s work pans out—and that’s a big if—proxy species of those extinct animals will be brought into the world. That’s because the genetically engineered animals produced by Colossal would not be bona fide mammoths, dodos, or thylacines.
In 2016, the International Union for Conservation of Nature’s Species Survival Commission published a report denoting ground rules for creating proxy species. “Proxy is used here to mean a substitute that would represent in some sense (e.g. phenotypically, behaviourally, ecologically) another entity – the extinct form,” the commission stated, adding that “Proxy is preferred to facsimile, which implies creation of an exact copy.”
De-extinction is something of a misnomer, as this process, if successful, will yield science’s best analogue for an extinct creature, not the creature itself as it existed in the past. De-extinction methods generally rely on using a living creature’s genetics in the resurrection process. That means any 21st-century mammoth will have at least some modern elephant DNA imbued in it, and any nascent thylacine would be produced from the genome and egg of a related species.
Colossal intends to produce its proxy mammoth from an artificial womb, according to National Geographic, rather than using an Asian elephant, which is endangered.
What’s more, behavioral traits of an animal are impossible to extrapolate from a genome alone. How will we know if the mammoth we produce actually acts the way the originals did? Thankfully, there’s some video of thylacines, but other details of the animal—such as the circumstances that may have elicited one of its trademark double-yip vocalizations, of which there are no recordings—are lost to time.
A good reference for the Colossal work is a paper published last year in Current Biology, in which a team of geneticists developed a proof-of-principle model for resurrecting the Christmas Island rat, a species closely related to the extant Norway brown rat.
The team was confident it could reproduce aspects of the extinct rat where the two animals’ genomes largely overlapped: namely, genes involving keratin and details like fur color and the shape of its ears. But genes related to the extinct rat’s olfaction (its sense of smell) and its immune responses had little counterpart in the genome of the living Norway rat. So if the team wanted to bring the rat back in some form, it would need a stand-in immune system and olfactory system.
Similarly, it will be difficult to know whether a proxy thylacine, dodo, or mammoth is behaving as a bona fide version of the animal would have behaved. Much animal behavior is learned from parents, but a resurrected mammoth would be alone in the world.
The current plan for the thylacine is to transplant the nucleus from a “Thylacine-like cell” into the egg of a genetically engineered Dasyurid. Dasyurids are a group of marsupials, including quolls, that the Colossal team deemed the best fit for a thylacine redux. The host Dasyurid’s genome would be engineered to make it more “Thylacine-like,” per Colossal’s website.
Whether or not proxy species are actually produced by startups like Colossal, genetics research done in the name of creating them could help us better understand the relationships between species and how to protect living creatures from threats like disease.
Better understanding species—extinct and extant—on a genetic level is a good thing. How that technology is used, and by whom, is an issue that needs to be handled carefully.
Chinese search giant Baidu aims to introduce a ChatGPT-like AI service that gives users conversational results, Bloomberg has reported. It’ll be based on the company’s Ernie system, a large-scale machine-learning model trained over several years that "excels at natural language understanding and generation," Baidu said in 2021.
OpenAI’s ChatGPT has taken the tech world by storm of late, thanks to its ability to answer fact-based questions, write in a human-like way and even create code. Microsoft invested $1 billion in OpenAI back in 2019, and reportedly plans to incorporate aspects of ChatGPT into its Bing search engine.
Google, meanwhile, likely sees the technology as a threat to its search business and plans to accelerate development of its own conversational AI technology. CEO Sundar Pichai reportedly declared a "code red" over ChatGPT and may be preparing to show off 20 or more AI products and a chatbot for its search engine at its I/O conference in May.
Baidu has reportedly seen lagging growth in search and sees ChatGPT-like apps as a potential way to leapfrog rivals. "I’m so glad that the technology we are pondering every day can attract so many people’s attention. That’s not easy," Baidu CEO Robin Li said during a talk in December, according to a transcript seen by Bloomberg.
ChatGPT has largely drawn positive attention, but the downsides have come into focus as well. Technology news site CNET was forced to correct AI-written articles due to errors and concerns about plagiarism. And New York City public schools recently banned ChatGPT over cheating concerns, because it can create articles and essays that can be difficult to distinguish from student-created content.
Shutterstock, one of the internet’s biggest sources of stock photos and illustrations, is now offering its customers the option to generate their own AI images. In October, the company announced a partnership with OpenAI, the creator of the wildly popular and controversial DALL-E AI tool. Now, the results of that deal are in beta testing and available to all paying Shutterstock users.
The new platform is available in “every language the site offers,” and comes included with customers’ existing licensing packages, according to a press statement from the company. And, according to Gizmodo’s own test, every text prompt you feed Shutterstock’s machine results in four images, ostensibly tailored to your request. At the bottom of the page, the site also suggests “More AI-generated images from the Shutterstock library,” which offer unrelated glimpses into the void.
But, be warned before you jump on the chance to replace all your standard stock image favs with AI constructs: The idea of using artificial intelligence to pump out “art” is an increasingly divisive one. Generative AI is a landscape fraught with potential legal and ethical complications.
Why all the worry?
All AI is trained on datasets, i.e. massive aggregations of material that teach it what to aim for. And for AI image generators, those training sets contain images made by humans—often human artists for whom their work is their livelihood.
One of Shutterstock’s main competitors, Getty Images, has said it won’t be wading into the murky waters of AI anytime soon, and has banned AI-generated images from its platform. With regard to the technology, Getty CEO Craig Peters told The Verge: “I think that’s dangerous. I don’t think it’s responsible. I think it could be illegal.”
It’s obvious that AI must be pulling its “inspiration” from the work of real, live people. But it’s difficult to pin down exactly when and where AI generators steal from visual artists. Interpreting artistic style can seem subjective. On the other hand, AI’s acts of plagiarism are much more apparent—though no more egregious—in AI-produced text. Clearly, if not approached carefully, artificial intelligence could pave the way for a theft crisis in creative fields.
How is Shutterstock trying to get around the issue?
In an attempt to pre-empt concerns about copyright law and artistic ethics, Shutterstock has said it uses “datasets licensed from Shutterstock” to train its DALL-E and LG EXAONE-powered AI. The company also claims it will pay artists whose work is used in its AI-generation. Shutterstock plans to do so through a “Contributor Fund.”
That fund “will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock’s library,” the company explains in an FAQ section on its website. “Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool,” it further says.
The first pay-out to contributing creators was scheduled to be distributed in December, at the end of the company’s last fiscal quarter of 2022. It’s unclear how many contributors were paid last month, and how much was distributed, if any. Gizmodo reached out to Shutterstock with questions about this process, but did not immediately receive a response.
Further, Shutterstock includes a clever caveat in its usage guidelines for AI images. “You must not use the generated image to infringe, misappropriate, or violate the intellectual property or other rights of any third party, to generate spam, false, misleading, deceptive, harmful, or violent imagery,” the company notes. And, though I am not a legal expert, this clause would seem to put the onus on the customer to avoid ending up in trouble. If a generated image includes a recognizable bit of trademarked material, or spits out a celebrity’s likeness, it’s on the user of Shutterstock’s tool to notice and avoid republishing the problem content.
But does it work?
As far as effectiveness goes, it took Gizmodo five different prompts similar to “robot drawing a picture of a robot” before the AI actually spit out something close enough to that concept. Again, each results page provides four AI-generated options. Of the twenty total images the machine generated, only the one included at the top of this post clearly showed some representation of a robot holding a drawing/painting. The others were… a mixed bag.
For now, and for the foreseeable future, I think I’ll be sticking to Shutterstock’s more standard offerings.
The project is among those that make China the world leader in exporting face recognition, according to a study by academics at Harvard and MIT published last week by the Brookings Institution, a prominent think tank.
The report finds that Chinese companies lead the world in exporting face recognition, accounting for 201 export deals involving the technology, followed by US firms with 128 deals. China also has a lead in AI generally, with 250 out of a total of 1,636 export deals involving some form of AI to 136 importing countries. The second biggest exporter was the US, with 215 AI deals.
The report argues that these exports may enable other governments to perform more surveillance, potentially harming citizens’ human rights. “The fact that China is exporting to these countries may kind of flip them to become more autocratic, when in fact they could become more democratic,” says Martin Beraja, an economist at MIT involved in the study whose work focuses on the relationship between new technologies like AI, government policies, and macroeconomics.
Face recognition technology has numerous practical applications, including unlocking smartphones, providing authentication in apps, and finding friends in social media posts. The MIT-Harvard researchers focused on deals involving so-called smart city technology, where face recognition is often deployed to enhance video surveillance. The research used information on global surveillance projects from the Carnegie Endowment for International Peace and data scraped from Chinese AI companies.
In recent years US lawmakers and presidents have expressed concern that China is gaining an edge over the US in AI technology. The report seems to offer hard evidence of one area where that shift has already occurred.
“It bolsters the case for why we need to be setting parameters around this type of technology,” says Alexandra Seymour, an associate fellow at the Center for New American Security who studies the policy implications of AI.
Further efforts to limit the export of face recognition from China could perhaps take the form of sanctions on countries that import the technology, Seymour says. But she adds that the US also needs to set an example to the rest of the world in terms of regulating the use of facial recognition.
The fact that the US is the world’s second largest exporter of face recognition technology risks undermining the idea—promoted by the US government—that American technology naturally embodies values of freedom and democracy.
When you tune into Twitch streamer Perrikaryal’s channel, you might see her playing FromSoftware’s role-playing epic Elden Ring with fourteen unfamiliar black sensors stuck to her scalp. It’s her—as she said during an informational stream earlier today—“just for fun” electroencephalogram (EEG) device, something researchers use to record the brain’s electrical activity, which she’s repurposed to let her play Elden Ring hands-free.
“Okay what and how,” publisher Bandai Namco responded to a clip of Perri (whose name seems to refer to the perikaryon, the cell body of a neuron) describing how she linked brain activity to key binds to help her play the game, shared by esports reporter Jake Lucky on Twitter.
Cue the disbelief (“I’ve gotten a lot of stuff online being like, […] ‘are you for real?’” Perri says in that Twitter clip) and cries of Ex Machina.
It does look incredible—in the clip, you see Perri simply say “attack” to her screen like a gamer girl Matilda and then, after a short delay, her Elden Ring character responds by casting Rock Sling at an irritated boss. But I spent my undergrad fixing eye-tracking devices to my friends’ heads while they helped me fill my lab requirements, and I know that, although brain technology can look complicated, some of it was still easy enough for me as a 19-year-old. So I reached out to my former classmate, University of Michigan cognitive neuroscience PhD candidate Cody Cao, for his thoughts.
“EEG has really good temporal resolution,” he said, “meaning that the collected neural response to gaming stimuli is down to milliseconds. If the neural responses corresponding to available actions present vastly different neural patterns, algorithms can decode or differentiate which is which after training. Then, you play the game with EEG.”
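To make Cao’s point concrete, here is a toy illustration of that train-then-decode loop on synthetic data (not Perri’s actual setup). Everything in it is invented for the sketch: the two actions, the simulated “neural patterns,” and the noise level. Real EEG pipelines involve filtering, artifact rejection, and far more data:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 14  # matches the 14-electrode headset described above
# Hypothetical "true" spatial pattern each action evokes across channels.
templates = {
    "attack": rng.normal(size=N_CHANNELS),
    "heal":   rng.normal(size=N_CHANNELS),
}

def simulate_trial(action, noise=0.8):
    """One noisy EEG reading while the player thinks of an action."""
    return templates[action] + rng.normal(scale=noise, size=N_CHANNELS)

# "Training": average many labeled trials into one centroid per action,
# the simplest possible pattern classifier.
centroids = {a: np.mean([simulate_trial(a) for _ in range(50)], axis=0)
             for a in templates}

def decode(trial):
    """Classify a trial as whichever action's centroid is closest."""
    return min(centroids, key=lambda a: np.linalg.norm(trial - centroids[a]))

# Evaluate on fresh simulated trials.
correct = sum(decode(simulate_trial(a)) == a
              for a in templates for _ in range(100))
print(f"decoding accuracy: {correct / 200:.0%}")
```

With patterns this well separated, the toy decoder scores far above chance; shrink the separation or raise the noise and accuracy slides toward the 60 to 70 percent Cao mentions for real EEG.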
But playing a game with your brain—something Elon Musk tried to shock the public with in 2021, when his brain-computer interface company Neuralink released a video of a monkey playing Pong using its technology—won’t give you an advantage.
“Decoding is still janky,” Cao told me. “60 percent to 70 percent accuracy is considered pretty good,” compared to the 90 to 100 percent accuracy of performing an action manually (which also requires your brain!).
“It takes algorithms a lot of training to get to an acceptable performance. They likely need to experience a lot of different examples of the same thing (like Perri saying ‘attack’ before attacking) to be able to account for a vast majority of attacks,” Cao continued. “It’s like FaceID on your iPhone—it gets better with the more examples it sees.”
Perri also emphasized in her stream today that she isn’t necessarily innovating, but bringing the possibilities of EEG usage to the general public’s attention.
“It’s not that crazy, it’s really easy to do. And it’s been done since 1988,” she said about gaming with her brain. “It’s not necessarily anything new that I’m doing, I’m just not sure that it’s very well known.” But now you know, and maybe you’ll figure out how to mind control me a grilled cheese that doesn’t hurt my stomach next.