ChatGPT Can Turn Pokemon Emerald Into A Text-Based Adventure Game

https://www.gamespot.com/articles/chatgpt-can-turn-pokemon-emerald-into-a-text-based-adventure-game/1100-6512442/


If you've ever thought Pokemon would be fun as a text adventure game, you're in luck: someone has used ChatGPT-4 to turn Pokemon Emerald into one.

As spotted by Polygon, Twitter user Dan Dangond has recently been using the newest version of the AI language model to turn Pokemon Emerald into a text-based adventure game. Dangond shared the experiment in a Twitter thread, noting that you can simply ask the software to play the game, and observing how well it worked, and sometimes how well it didn't.

The first tweet starting the Pokemon journey is almost like a speedrun, putting you right into the action: Emerald's Professor Birch is being chased by a Poochyena, and you're prompted to choose from the three starters. Dangond couldn't have made a wrong choice of starter, but opted for Mudkip, and battle options came out as numbers to choose from.

It all works pretty smoothly. Dangond even tested out using Water Gun, but ChatGPT-4 knew it's a move Mudkip can't learn until level 10, so it didn't use it. Interestingly, for those who hate the grind, you can just ask it to do a training montage, boosting Mudkip to level 8.

Later on, Dangond asked the software to head to a particular route and simulate what would happen there. His Mudkip reached level 10 and automatically caught a Ralts, which makes catching 'em all a lot quicker but removes a lot of the challenge. There were some other problems along the way, like the software not knowing the position of certain routes and towns. At one point it also didn't know that Nincada is a Bug/Ground type, and didn't account for extra damage when Mudkip used Water Gun, but it did take these things into account once corrected.

ChatGPT is being used in a variety of ways at a time when AI-driven software is becoming more popular and more powerful. Recently it was used to solve a Fortnite mystery, but ultimately its use in games won't produce anything like a GTA-killer any time soon.

via GameSpot’s PC Reviews https://ift.tt/nmqCz4U

March 17, 2023 at 10:18AM

Tesla opening its Supercharger network to rivals could be a brilliant marketing move from a company that famously doesn’t advertise

https://www.autoblog.com/2023/03/17/tesla-opening-its-supercharger-network-to-rivals-could-be-a-brilliant-marketing-move-from-a-company-that-famously-doesn-t-advertise-with-one-big-risk/


For the first time, Tesla is opening up its Superchargers in the US to drivers of non-Tesla cars. (Image credit: Robert Knopes/UCG/Universal Images Group via Getty Images)
  • Tesla plans to open 3,500 of its fast-charging plugs to all electric-car owners by the end of 2024. 
  • Until now, Tesla’s Supercharger network was only for Tesla owners in the US. 
  • Some experts say the plan could boost Tesla’s brand and ultimately help it sell more cars. 

If you own any electric car that isn’t a Tesla, you’ve probably gazed longingly at the brand’s sleek Supercharger stations, which are common and famously easy to use, but historically off-limits to outsiders. 

But that's all changing. Tesla has decided to open up thousands of its roadside fast-charging plugs to all electric-vehicle owners by the end of 2024. The move brings Elon Musk's firm a new revenue stream and allows it to access public funding for charging infrastructure. Some industry experts also think welcoming outsiders into Tesla's walled garden could be a smart marketing move, particularly for a company that rejects traditional advertising.

According to Loren McDonald, CEO of EV industry consultancy EVAdoption, the Supercharger network serves as Tesla’s single biggest marketing tactic. Light-years ahead of other charging providers in terms of reliability, convenience, and number of locations, the sprawling network helps relieve the anxieties of ditching gasoline and attracts buyers. 

Inviting owners of electric Fords and Porsches to plug in at some locations may be Musk’s crafty way of building the Tesla brand and showcasing its technology to potential customers, McDonald told Insider. 

“That additional brand exposure is probably a big part of the reasoning behind this,” he said. 

Sam Abuelsamid, an auto industry analyst at Guidehouse Insights, agrees that expanding Supercharger access could help Tesla sell more cars. Charging takes a while and is generally a more social activity than quickly grabbing gas, presenting an opportunity for Supercharger patrons to get familiar with Tesla’s vehicles, he told Insider. 

“Because Tesla doesn’t have traditional dealerships, there are a lot fewer places where you can just stop and browse,” he said. 

Tesla did not respond to a request for comment.

After years of tremendous sales growth, Tesla has faced questions recently about whether the voracious appetite for its cars will last. This year, it's slashed prices across its lineup in a bid to move more vehicles.

The charging plan is a gamble and could also have the opposite effect, experts said. Some Tesla drivers may realize they don’t need to buy from Musk to reap the benefits of Supercharger infrastructure. They may peek inside a Rivian R1T or schmooze with a Volkswagen ID.4 owner and realize a different vehicle better fits their needs.

“When the Ford Mustang Mach-E driver plugs in next to the Tesla Model Y driver and they chat, who knows? It could go either way,” McDonald said. 

Analysts see another risk: A boom in demand for limited charging stalls could frustrate Tesla owners by crowding popular stations. Moving forward, Tesla will need to be careful not to degrade the experience too much for its existing customers, Abuelsamid said.

“You don’t want to abandon the customer base that got you to where you are today,” he said. “But at the same time you ideally want to get some of those other potential advantages.”

via Autoblog https://ift.tt/27gQmFh

March 17, 2023 at 07:59AM

Candela’s C-8 Is a Boat That Flies

https://www.wired.com/story/candela-c-8-polestar-hydrofoil/


I’m on a boat. The boat does not float. At least not in the traditional sense. The manufacturer says somewhat enthusiastically that “it flies.” In reality it hovers two feet above the water, propped up on stilts attached to two horizontal carbon-fiber fins—hydrofoils—that slice through the waves, raising the hull clean out of the sea. Underwater, a torpedo-shaped, battery-powered propeller mounted along the rear foil thrusts all 1.6 metric tons of the Candela C-8 forward.

The boat is out on the San Francisco Bay, zipping around in the sweet spot between two iconic Bay Area tourist destinations: the Golden Gate Bridge and Alcatraz Island. It is a sunny and balmy day, a fluke of Mediterranean conditions that’s out of character from the area’s typical vicious February chill.

Courtesy of Candela

The all-electric C-8 is a luxury watercraft engineered by Swedish maritime manufacturer Candela, but the battery is made by the EV company Polestar—a subsidiary of Geely, the Chinese conglomerate that also owns Volvo. While Polestar has focused the bulk of its efforts on land vehicles, it has partnered with Candela to give the C-8 its juice box. And it is the very same one—the boat uses the 69-kWh battery pack found in the Polestar 2. Volvo has a long history of making marine engines, but this is the first time its EV sister brand has got its feet wet. 

The C-Pod, a dual-motor cigar-shaped drive unit with contra-rotating propellers that’s been redesigned from the previous C-7 for more efficiency, launches the C-8 into a very different kind of boating experience, and is the key to the C-8’s abilities. 

With Great Power Comes Great Efficiency

One of the main challenges with direct-drive electric motors is that the motor spins at the same speed as the propeller. Electric machines are typically most efficient at higher speeds, whereas propellers are most efficient at lower speeds. So Candela designed a drive unit that, instead of having one big motor and one big propeller deliver the needed torque, splits the torque across two motors and two propellers. This means the size of both the motors and props could be reduced. A bonus of a smaller propeller is that its tip speed is lower at the same rotational speed. That in turn means the C-8's twin props can stay below the limit of cavitation (the cause of reduced performance, blade damage, vibration, and noise) even at high rotational speeds.
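Tip speed scales linearly with blade radius at a fixed rotational speed, which a few lines of arithmetic make concrete. The radii and RPM below are purely illustrative, not Candela's actual specifications:

```python
import math

def tip_speed(radius_m: float, rpm: float) -> float:
    """Blade tip speed in m/s: circumference times revolutions per second."""
    return 2 * math.pi * radius_m * rpm / 60

# One large prop vs. one of two smaller props at the same shaft speed
# (made-up numbers for illustration only):
large = tip_speed(radius_m=0.20, rpm=3000)  # hypothetical single big propeller
small = tip_speed(radius_m=0.12, rpm=3000)  # hypothetical smaller twin propeller

# The smaller prop's tip moves proportionally slower at the same RPM,
# leaving more headroom below the cavitation limit.
print(f"large prop tip speed: {large:.1f} m/s")
print(f"small prop tip speed: {small:.1f} m/s")
```

This is why splitting the torque across two smaller props lets the drive unit run the motors fast, where they are efficient, without the blade tips crossing the cavitation threshold.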

Not stopping there, Candela also created a passive cooling system for the C-Pod using just the cold seawater on the outer surface of the housing—thus no need for rotating parts in the cooling system and no cooling fluids. Simpler, effective, and less to go wrong.

Such efficiency and innovative mechanics naturally drive performance. When the boat gets up to about 16 knots (that's 18 mph for you landlubbers), it takes off, the bulk of it lifting out of the water, supported by the hydrofoils underneath. The vessel levels out, and you're skimming above the waves, leaving barely a wisp of a wake behind. And with more of the boat out of the water, there's less drag. The result feels more like a hovercraft than a traditional boat.

via Wired Top Stories https://www.wired.com

March 14, 2023 at 06:41AM

UFO-shaped clouds invade skies over Keck Observatory in Hawaii (photos)

https://www.space.com/ufo-shaped-lenticular-clouds-keck-observatory-hawaii


Observers spotted UFO look-alike clouds in Hawaiian skies above the Mauna Kea and Mauna Loa volcanoes. 

The photos were taken on March 8 from the vantage point of the W. M. Keck Observatory, which is located near the summit of the dormant volcano Mauna Kea in Hawaii. The photos capture lenticular clouds, which are usually created downwind of a hill or mountain as strong winds blow over and around rough terrain. 

"We spotted some UFOs today! Or rather, their doppelgangers. Check out these stunning photos several Keckies took of flying saucer-shaped lenticular clouds hovering near Maunakea and Mauna Loa. Did you see them too?" the observatory wrote on Twitter on March 8.

Related: Keck Observatory: Twin Telescopes on Mauna Kea 

Lenticular clouds — scientifically known as altocumulus standing lenticularis — generally form in the troposphere, the lowest layer of Earth's atmosphere, parallel to the wind direction, which gives them their otherworldly appearance.

A lenticular cloud photographed from the W. M. Keck Observatory in Hawaii on March 8, 2023. (Image credit: R. Krejci, S. Yeh, A. Surendran, A. Rostopchina/W. M. Keck Observatory)

These clouds are fairly common over the western half of the US mainland due to the Rockies, but relatively rare in Hawaii, according to the National Weather Service.

A lenticular cloud photographed from the W. M. Keck Observatory in Hawaii on March 8, 2023. (Image credit: R. Krejci, S. Yeh, A. Surendran, A. Rostopchina/W. M. Keck Observatory)

These strange-looking clouds are sometimes mistaken for UFOs due to their smooth, saucer-like shape. They formed near Mauna Loa and Mauna Kea, which reach above 13,000 feet (3,960 meters) in elevation, because strong winds are forced to flow over and around the peaks of the volcanoes. This, in turn, creates waves in the atmosphere just downwind of both summits.

A lenticular cloud photographed from the W. M. Keck Observatory in Hawaii on March 8, 2023. (Image credit: R. Krejci, S. Yeh, A. Surendran, A. Rostopchina/W. M. Keck Observatory)

The photos were taken by employees at the observatory, including Rick Krejci, software engineer; Sherry Yeh, staff astronomer; Avinash Surendran, postdoctoral fellow; and Arina Rostopchina, observing assistant. 

Follow Samantha Mathewson @Sam_Ashley13. Follow us @Spacedotcom, or on Facebook and Instagram.

Join our Space Forums to keep talking space on the latest missions, night sky and more! And if you have a news tip, correction or comment, let us know at: community@space.com.

via Space https://ift.tt/TYB4DSy

March 15, 2023 at 08:13AM

Student-led ‘beach ball’ space antenna aims to boost cubesat communications

https://www.space.com/student-cubesat-beach-ball-antenna-catsat


Students at the University of Arizona have constructed a cubesat to demonstrate what they hope will be the answer to high-speed, low-cost space communication and data transmission for small satellites. 

The two-toned antenna, lending credence to the communication device's beach ball comparison, will launch in a stowed and folded configuration. Once in Earth orbit, the antenna will inflate using a combination of helium and argon, increasing its initial surface area to provide increased downlink speeds.

The cubesat — “CatSat,” as the students have dubbed it — will serve a dual purpose alongside its novel antenna test. Instruments opposite the beach ball antenna will probe Earth’s ionosphere to study the propagation and changes of high-frequency radio signals above the atmosphere. Together with other onboard components, CatSat’s instruments will send down high-resolution images of our planet at speeds previously unobtainable by comparably sized cubesats. 

Related: Cubesats: Tiny payloads, huge benefits for space research

Aman Chandra (left) and Shae Henley (right) are two of many University of Arizona students who have helped prepare CatSat for launch. Here, they carefully handle the system that will eject and inflate a large mylar membrane in space. The upper half of the ball-shaped antenna is reflective, so it can relay electromagnetic waves to Earth. (Image credit: TaMya Reliford, University of Arizona)

Hilliard Paige, University of Arizona systems engineering student and CatSat's lead system engineer, sees the antenna concept as a pathfinder for future missions. "Following a successful launch, this inflatable antenna will be the first of its kind in space," she said in an online post from the university.

“The technology demonstrated by CatSat opens the door to the possibility of future lunar, planetary and deep-space missions using cubesats,” echoed University of Arizona professor of astronomy Chris Walker. 

Through the University of Arizona’s commercialization efforts via Tech Launch Arizona, Walker co-founded a company called Freefall Aerospace, which developed the beach ball antenna. Walker was also one of the University of Arizona faculty to submit the initial CatSat proposal to NASA under the agency’s Cubesat Launch Initiative in 2019.

That proposal gained NASA’s approval, and CatSat was assigned a launch vehicle — a Firefly Aerospace Alpha rocket, which will lift off from Vandenberg Space Force Base in California and deliver the small satellite to a 340-mile-high (547 kilometers), sun-synchronous orbit. If successful, CatSat’s beach ball antenna will then beam down near real-time images of Earth.

CatSat does not yet have a target launch date, though it’s expected to get one later this year, University of Arizona officials said.


via Space https://ift.tt/rITpOLY

March 16, 2023 at 03:13PM

AI Can Re-create What You See from a Brain Scan

https://www.scientificamerican.com/article/ai-can-re-create-what-you-see-from-a-brain-scan/


Functional magnetic resonance imaging, or fMRI, is one of the most advanced tools for understanding how we think. As a person in an fMRI scanner completes various mental tasks, the machine produces mesmerizing and colorful images of their brain in action.

Looking at someone’s brain activity this way can tell neuroscientists which brain areas a person is using but not what that individual is thinking, seeing or feeling. Researchers have been trying to crack that code for decades—and now, using artificial intelligence to crunch the numbers, they’ve been making serious progress. Two scientists in Japan recently combined fMRI data with advanced image-generating AI to translate study participants’ brain activity back into pictures that uncannily resembled the ones they viewed during the scans. The original and re-created images can be seen on the researchers’ website.

“We can use these kinds of techniques to build potential brain-machine interfaces,” says Yu Takagi, a neuroscientist at Osaka University in Japan and one of the study’s authors. Such future interfaces could one day help people who currently cannot communicate, such as individuals who outwardly appear unresponsive but may still be conscious. The study was recently accepted to be presented at the 2023 Conference on Computer Vision and Pattern Recognition.

The study has made waves online since it was posted as a preprint (meaning it has not yet been peer-reviewed or published) in December 2022. Online commentators have even compared the technology to “mind reading.” But that description overstates what this technology is capable of, experts say.

“I don’t think we’re mind reading,” says Shailee Jain, a computational neuroscientist at the University of Texas at Austin, who was not involved in the new study. “I don’t think the technology is anywhere near to actually being useful for patients—or to being used for bad things—at the moment. But we are getting better, day by day.”

The new study is far from the first that has used AI on brain activity to reconstruct images viewed by people. In a 2019 experiment, researchers in Kyoto, Japan, used a type of machine learning called a deep neural network to reconstruct images from fMRI scans. The results looked more like abstract paintings than photographs, but human judges could still accurately match the AI-made images to the original pictures.

Neuroscientists have since continued this work with newer and better AI image generators. In the recent study, the researchers used Stable Diffusion, a so-called diffusion model from London-based start-up Stability AI. Diffusion models—a category that also includes image generators such as DALL-E 2—are “the main character of the AI explosion,” Takagi says. These models learn by adding noise to their training images. Like TV static, the noise distorts the images—but in predictable ways that the model begins to learn. Eventually the model can build images from the “static” alone.
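The noising process described above can be sketched in a few lines of NumPy. The function name and the simple blending schedule here are illustrative simplifications, not Stable Diffusion's actual formulation:

```python
import numpy as np

def add_noise(image: np.ndarray, t: float, rng: np.random.Generator) -> np.ndarray:
    """Simplified forward-diffusion step: blend the image with Gaussian noise.

    t=0 returns the clean image; t=1 returns pure 'static'.
    """
    noise = rng.standard_normal(image.shape)
    return np.sqrt(1 - t) * image + np.sqrt(t) * noise

rng = np.random.default_rng(0)
img = rng.random((8, 8))  # stand-in for a training image

slightly_noisy = add_noise(img, t=0.1, rng=rng)  # image still dominates
mostly_static = add_noise(img, t=0.9, rng=rng)   # noise dominates

# A model is trained to predict (and remove) the noise added at each t;
# chaining those denoising steps is what lets it build images from
# "static" alone.
```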

Released to the public in August 2022, Stable Diffusion has been trained on billions of photographs and their captions. It has learned to recognize patterns in pictures, so it can mix and match visual features on command to generate entirely new images. “You just tell it, right, ‘A dog on a skateboard,’ and then it’ll generate a dog on a skateboard,” says Iris Groen, a neuroscientist at the University of Amsterdam, who was not involved in the new study. The researchers “just took that model, and then they said, ‘Okay, can we now link it up in a smart way to the brain scans?’”

The brain scans used in the new study come from a research database containing the results of an earlier study in which eight participants agreed to regularly lie in an fMRI scanner and view 10,000 images over the course of a year. The result was a huge repository of fMRI data that shows how the vision centers of the human brain (or at least the brains of these eight human participants) respond to seeing each of the images. In the recent study, the researchers used data from four of the original participants.

To generate the reconstructed images, the AI model needs to work with two different types of information: the lower-level visual properties of the image and its higher-level meaning. For example, it’s not just an angular, elongated object against a blue background—it’s an airplane in the sky. The brain also works with these two kinds of information and processes them in different regions. To link the brain scans and the AI together, the researchers used linear models to pair up the parts of each that deal with lower-level visual information. They also did the same with the parts that handle high-level conceptual information.

“By basically mapping those to each other, they were able to generate these images,” Groen says. The AI model could then learn which subtle patterns in a person’s brain activation correspond to which features of the images. Once the model was able to recognize these patterns, the researchers fed it fMRI data that it had never seen before and tasked it with generating the image to go along with it. Finally, the researchers could compare the generated image to the original to see how well the model performed.
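The linear-mapping idea can be sketched with ordinary least squares on synthetic data. The array shapes, the noise level, and the plain `lstsq` fit below are stand-ins for illustration, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: fMRI responses (trials x voxels) and the image
# features we want to predict from them (trials x latent dimensions).
n_trials, n_voxels, n_latent = 200, 100, 16
true_map = rng.standard_normal((n_voxels, n_latent))
brain = rng.standard_normal((n_trials, n_voxels))
latents = brain @ true_map + 0.1 * rng.standard_normal((n_trials, n_latent))

# Fit a linear map from brain activity to image features on training
# trials, then predict features for held-out scans the model never saw.
W, *_ = np.linalg.lstsq(brain[:150], latents[:150], rcond=None)
predicted = brain[150:] @ W

# In the real study, predicted features like these would condition the
# image generator to reconstruct what the participant was viewing.
```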

Many of the image pairs the authors showcase in the study look strikingly similar. “What I find exciting about it is that it works,” says Ambuj Singh, a computer scientist at the University of California, Santa Barbara, who was not involved in the study. Still, that doesn’t mean scientists have figured out exactly how the brain processes the visual world, Singh says. The Stable Diffusion model doesn’t necessarily process images in the same way the brain does, even if it’s capable of generating similar results. The authors hope that comparing these models and the brain can shed light on the inner workings of both complex systems.

As fantastical as this technology may sound, it has plenty of limitations. Each model has to be trained on, and use, the data of just one person. "Everybody's brain is really different," says Lynn Le, a computational neuroscientist at Radboud University in the Netherlands, who was not involved in the research. If you wanted to have AI reconstruct images from your brain scans, you would have to train a custom model—and for that, scientists would need troves of high-quality fMRI data from your brain. Unless you consent to lying perfectly still and concentrating on thousands of images inside a clanging, claustrophobic MRI tube, no existing AI model would have enough data to start decoding your brain activity.

Even with those data, AI models are only good at tasks for which they’ve been explicitly trained, Jain explains. A model trained on how you perceive images won’t work for trying to decode what concepts you’re thinking about—though some research teams, including Jain’s, are building other models for that.

It’s still unclear if this technology would work to reconstruct images that participants have only imagined, not viewed with their eyes. That ability would be necessary for many applications of the technology, such as using brain-computer interfaces to help those who cannot speak or gesture to communicate with the world.

“There’s a lot to be gained, neuroscientifically, from building decoding technology,” Jain says. But the potential benefits come with potential ethical quandaries, and addressing them will become still more important as these techniques improve. The technology’s current limitations are “not a good enough excuse to take potential harms of decoding lightly,” she says. “I think the time to think about privacy and negative uses of this technology is now, even though we may not be at the stage where that could happen.”

via Scientific American https://ift.tt/WmEjFHK

March 17, 2023 at 10:15AM

Here’s the Real Story behind the Massive ‘Blob’ of Seaweed Heading toward Florida

https://www.scientificamerican.com/article/heres-the-real-story-behind-the-massive-blob-of-seaweed-heading-toward-florida/


A loose raft of brown seaweed spanning about twice the width of the U.S. is inching across the Caribbean. Currently, bucketloads of the buoyant algae are washing up on beaches on the eastern coast of Florida earlier in the year than usual, raising scientists’ concerns for what coming months will bring.

The seaweed is made up of algal species in the genus Sargassum. These species grow as floating mats of algae, kept buoyant by little air-filled sacs attached to leafy structures. The algae form a belt between the Caribbean and West Africa in the Sargasso Sea in the North Atlantic Ocean and then ride the currents west. Scientists say that reports of a massive blob of seaweed slamming into coastlines are overblown because the Sargassum algae are scattered across the ocean, and much of the seaweed will never reach the coast's sandy shores. But in recent years researchers have generally seen larger so-called Sargassum blooms. And once the seaweed begins washing up on beaches and rotting, it can cause serious problems, local communities say.

Among annual Sargassum censuses in the Atlantic Ocean, “2018 was the record year, and we’ve had several big years since,” says Brian Lapointe, an oceanographer at Florida Atlantic University, who has studied seaweed for decades. “This is the new normal, and we’re going to have to adapt to it.”

The seaweed “blob” has been dubbed the Great Atlantic Sargassum Belt, and though it’s sprawling, the algae in the belt cover only about 0.1 percent of the water’s surface, says Chuanmin Hu, an oceanographer at the University of South Florida, who has used satellites to study Sargassum for nearly 20 years.

Hu and his colleagues use data collected by NASA satellites, including Terra and Aqua, to estimate the total mass of Sargassum in the Atlantic every month, tracking a yearly cycle that typically peaks in June. Last year the seaweed broke the record for the highest amount ever recorded in the Atlantic, with some 22 million metric tons of the stuff found across the ocean, according to the team’s calculations.

Hu says the team estimated that the Atlantic contained about six million metric tons of Sargassum in February and that he’s confident March’s mass will be higher. “This month there should be more. There’s no doubt,” Hu says. “Even in the first two weeks, I have seen increased amounts.”

In the ocean, Hu says, the Sargassum is crucial habitat for fish and turtles, among other marine life. He calls the belt a “moving ecosystem.” And just a small portion of the seaweed present in the Atlantic will ever wash up on beaches, Hu adds.

But beaches in Fort Lauderdale and the Florida Keys are already reporting Sargassum deposits this year, Lapointe says, and it’s on beaches that the seaweed can be problematic. There, he says, the algae rot and release chemicals such as hydrogen sulfide gas, which smells like rotten eggs. When inhaled, the gas can also cause headaches and irritate a person’s eyes, nose and throat. People with asthma or other breathing problems may be more sensitive to the effect, according to the Florida Department of Health. The seaweed’s early arrival is raising concerns about what this summer might bring.

“This is pretty early in the Sargassum season to see that much coming in, so I think that’s also fueling some of the concern about what’s to come,” Lapointe says.

Hu says that Sargassum amounts can’t be forecast more than two or three months out, so this year’s seasonal peak in the summer is still too distant to predict. Researchers have expected this year might turn out to be heavy in seaweed, however, because even the winter lull saw higher amounts of the stuff than average.

And the Atlantic has been reliably producing much more Sargassum in recent decades than it has historically. Lapointe says that the high Sargassum levels of recent years are likely in part tied to nutrient-rich water running off land into rivers and out to the oceans, where it can fertilize the seaweed. But understanding and addressing the problem remains knotty, he adds.

“This has been going for over 10 years now, and we haven’t made a whole lot of progress in better understanding of all these nutrient and climate drivers,” he says. “It’s something we’re working on as scientists.”

via Scientific American https://ift.tt/WmEjFHK

March 17, 2023 at 12:15PM