An AI that can play Goat Simulator is a step towards more useful AI

https://www.technologyreview.com/2024/03/13/1089764/an-ai-that-can-play-goat-simulator-is-a-step-towards-more-useful-ai/

Fly, goat, fly! A new AI agent from Google DeepMind can play different games, including ones it has never seen before, such as Goat Simulator 3, a fun action game with exaggerated physics. Researchers were able to get it to follow text commands to play seven different games and move around in three different 3D research environments. It’s a step towards more generalized AI that can transfer skills across multiple environments.

Google DeepMind has had huge success developing game-playing AI systems. Its system AlphaGo, which beat top professional player Lee Sedol at the game Go in 2016, was a major milestone that showed the power of deep learning. But unlike earlier game-playing AI systems, which mastered only a single game or could follow only single goals or commands, this new agent is able to play a variety of different games, including Valheim and No Man’s Sky.

Training AI systems in games is a good proxy for real-world tasks. “A general game-playing agent could, in principle, learn a lot more about how to navigate our world than anything in a single environment ever could,” says Michael Bernstein, an associate professor of computer science at Stanford University, who was not part of the research. 

“One could imagine one day rather than having superhuman agents which you play against, we could have agents like SIMA playing alongside you in games with you and with your friends,” says Tim Harley, a research engineer at Google DeepMind who was part of the team that developed the agent, called SIMA (Scalable, Instructable, Multiworld Agent). 

The Google DeepMind team trained SIMA on lots of examples of humans playing video games, both individually and collaboratively, alongside keyboard and mouse input and annotations of what the players did in the game, says Frederic Besse, a research engineer at Google DeepMind.  

They then used an AI technique called imitation learning to teach the agent to play games as humans would. SIMA can follow 600 basic instructions, such as “turn left,” “climb the ladder,” and “open the map,” each of which can be completed in about 10 seconds.
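DeepMind has not published SIMA’s code, but the core idea described here, imitation learning conditioned on a text instruction, can be sketched in a few lines. The sketch below is a minimal, illustrative assumption: the encoders, layer sizes, and discretized keyboard-and-mouse action space are placeholders, not the actual SIMA architecture.

```python
# Minimal sketch of instruction-conditioned imitation learning (behavioral cloning).
# Illustrative only; SIMA's real encoders, action space, and training setup are not public in this form.
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    def __init__(self, frame_dim=512, text_dim=256, hidden=512, num_actions=32):
        super().__init__()
        # In a real system the game frames and text instructions would pass through
        # pretrained vision and language encoders; here we assume precomputed embeddings.
        self.policy = nn.Sequential(
            nn.Linear(frame_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),  # logits over discretized keyboard/mouse actions
        )

    def forward(self, frame_emb, instruction_emb):
        return self.policy(torch.cat([frame_emb, instruction_emb], dim=-1))

# Toy training loop: imitate the action a human demonstrator took for each
# (game frame, text instruction) pair.
policy = InstructionConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    frame_emb = torch.randn(8, 512)            # stand-in for encoded game frames
    instruction_emb = torch.randn(8, 256)      # stand-in for encoded text like "open the map"
    human_action = torch.randint(0, 32, (8,))  # the action the human demonstrator took

    logits = policy(frame_emb, instruction_emb)
    loss = loss_fn(logits, human_action)       # behavioral cloning: match the human's action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```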

The team found that a SIMA agent trained on many games performed better than one that learned to play just a single game, because it could take advantage of concepts shared across games to learn better skills and carry out instructions more effectively, says Besse.

“This is again a really exciting key property as we have an agent that can play games it has never seen before essentially,” says Besse. 

Seeing this sort of knowledge transfer between games is a significant milestone for AI research, says Paulo Rauber, a lecturer in artificial intelligence at Queen Mary University of London. 

The basic idea of learning to execute instructions based on examples provided by humans could lead to more powerful systems in the future, especially with bigger datasets, Rauber says. SIMA’s relatively limited dataset is what is holding back its performance, he says. 

Although the number of game environments it’s been trained on is still small, SIMA is on the right track for scaling up, says Jim Fan, a senior research scientist at NVIDIA who runs its AI Agents Initiative. 

But the AI system is still not close to human-level, says Harley from Google DeepMind. For example, in the game No Man’s Sky, the AI agent could do just 60% of the tasks humans could do. And when the researchers removed the ability for humans to give SIMA instructions, they found the agent performed much worse than before. 

Next, Besse says, the team is working on improving the agent’s performance. They want to get the AI system to work in as many environments as possible, learn new skills, and let people chat with the agent and get a response. The team also wants SIMA to have more generalized skills, allowing it to quickly pick up games it has never seen before, much like a human can. 

Humans “can generalize very well to unseen environments and unseen situations. And we want our agents to be just the same,” says Besse. 

SIMA inches us closer to a “ChatGPT moment” for autonomous agents, says Roy Fox, an assistant professor at the University of California, Irvine.

But this is a long way away from actual autonomous AI. That would be “a whole different ball game,” he says. 
