Steam has implemented new guidelines for the use of AI in video games sold on its platform: developers will be required to disclose how AI is used in their games.
Steam updated its Content Survey that developers fill out when submitting their games to Steam. In a newly added AI disclosure section, developers are required to describe how AI is being used in both the development and execution of their games.
The use is separated into two categories: pre-generated and live-generated. Pre-generated AI use refers to content such as art, code, and sound created with AI tools during development. During Valve’s pre-release review, the company will evaluate the output of AI-generated content the same way it does non-AI content. Since all developers are beholden to Steam’s Distribution Agreement, Valve will also check their games for illegal or infringing content, and for consistency with their marketing materials.
Live-generated content, meanwhile, refers to content created with AI tools while the game is running. The rules for pre-generated AI use also apply to live-generated content, with one additional requirement: developers must describe in the Content Survey what measures they are taking to ensure the AI doesn’t generate illegal content while the game is running.
Steam is also releasing a new system that lets players report illegal content they encounter in games with live-generated AI content. It takes the form of an in-game overlay, making it easy and convenient for players to flag anything that should have been caught by the safeguards the developer described for live-generated AI content.
AI has been a hotly debated topic within the video games industry, with developers fearing that its use could cost them jobs. Some companies, such as Square Enix, are embracing the use of AI.
When AI researcher Melanie Mitchell published Artificial Intelligence: A Guide for Thinking Humans in 2019, she set out to clarify AI’s impact. A few years later, ChatGPT set off a new AI boom—with a side effect that caught her off guard. An AI-generated imitation of her book appeared on Amazon, in an apparent scheme to profit off her work. It looks like another example of the ecommerce giant’s ongoing problem with a glut of low-quality AI-generated ebooks.
Mitchell learned that searching Amazon for her book surfaced not only her own tome but also another ebook with the same title, published last September. It was only 45 pages long and it parroted Mitchell’s ideas in halting, awkward language. The listed author, “Shumaila Majid,” had no bio, headshot, or internet presence, but clicking on that name brought up dozens of similar books summarizing recently published titles.
Mitchell guessed the knock-off ebook was AI-generated, and her hunch appears to be correct. WIRED asked deepfake-detection startup Reality Defender to analyze the ersatz version of Artificial Intelligence: A Guide for Thinking Humans, and its software declared the book 99 percent likely AI-generated. “It made me mad,” says Mitchell, a professor at the Santa Fe Institute. “It’s just horrifying how people are getting suckered into buying these books.”
Amazon took down the imitation of Mitchell’s book after WIRED contacted the company. “While we allow AI-generated content, we don’t allow AI-generated content that violates our Kindle Direct Publishing content guidelines, including content that creates a disappointing customer experience,” Amazon spokesperson Ashley Vanicek says.
Unlike the knockoff of Mitchell’s book, the summaries of Li’s announce themselves as such. One, forthrightly titled Summary and Analysis of The Worlds I See, has a product description that begins: “DISCLAIMER!! THIS IS NOT A BOOK BY FEI-FEI LI, NOR IS IT AFFILIATED WITH THEM.IT IS AN INDEPENDENT PUBLICATION THAT SUMMARIZES FEI-FEI LI BOOK IN DETAILS.IT IS A SUMMARY.” Yet these books, too, appear to be AI-generated and to add little value for readers. Reality Defender analyzed a sample of the Summary and Analysis book and found it was also likely AI-generated. “A complete and total rewriting of the text. Like, someone queried an LLM to rewrite the text, not summarize it,” Reality Defender head of marketing Scott Steinhardt says. “It’s like a KidzBop version of the real thing.” Reached for comment over email, Li distilled her reaction into a single emoji: ?.
Summary Execution
Sleazy book summaries have been a long-running problem on Amazon. In 2019, The Wall Street Journal found that many used deliberately confusing cover art and text, irking writers including entrepreneur Tim Ferriss. “We, along with some of the publishers, have been trying to get these taken down for some time now,” says Authors Guild CEO Mary Rasenberger. The rise of generative AI has supercharged the spammy summary industry. “It is the first market we expected to see inundated by AI,” Rasenberger says. She says these schemes fit the strengths of large language models, which are passable at producing summaries of work they’re fed, and can do it fast. The fruits of this rapid-fire generation are now common in searches for popular nonfiction titles on Amazon.
AI-generated summaries sold as ebooks have been “dramatically increasing in number,” says publishing industry expert Jane Friedman—who was herself the target of a different AI-generated book scheme. That’s despite Amazon in September limiting authors to uploading a maximum of three books to its store each day. “It’s common right now for a nonfiction author to celebrate the launch of their book, then within a few days discover one of these summaries for sale.”
Is AI the future of gaming? I can certainly see the possibilities.
NVIDIA Avatar Cloud Engine (ACE) is a suite of technologies that help developers bring digital avatars to life with generative AI.
With our partner Convai, the NVIDIA Kairos demo evolves to take on a number of new features and incorporate the latest NVIDIA ACE microservices: Audio2Face and Riva ASR for AI-powered animation and speech. Convai’s platform features a set of tools and APIs to create character personas and enable dynamic conversations. The latest features from Convai enable real-time character-to-character interaction, scene perception, and actions.
LAS VEGAS — This is the eVTOL Flying Car from XPeng Aeroht, which, as the name and eight whirring carbon-fiber blades suggest, is a flying car concept (one of several from this CES and past ones). All those blades fold into the body, allowing you to drive down the street without taking anybody’s head off. And to be clear, that would absolutely happen. They’re basically at neck level for almost every adult.
The “dual-mode cockpit” seats only two, since the rest of the body is filled with the stored blades (or the cavity that holds them when they’re in use). There is a square wheel/yoke-like thing in front of the driver/pilot, plus a joystick in the center.
Propulsion is electric, but that’s the extent of details about that. This is a concept after all. XPeng Aeroht claims to be the largest flying car company in Asia. It has another, slightly more realistic concept: the Modular Flying Car. It consists of the Ground Module, a futuristic van with a rear end that opens up, GMC Envoy XUV style, to reveal a tiny two-person helicopter dubbed the Air Module. The Ground Module features 6×6 all-wheel-drive, rear-wheel steering (that would be fun to see with four rear wheels) and like all XPeng “creations,” is electric. The Air Module is capable of manual and automatic operation, and not surprisingly given the helicopter blades, is capable of vertical takeoff and landing.
XPeng says it is “dedicated to producing the safest intelligent electric flying car for personal use.” At the very least, the company has created some really cool stuff that wouldn’t look out of place in a sci-fi movie.
One of Xi’s public displays of cracking down on corruption included seizing more than 100 high-end cars used by officials. Kevin Frayer/Getty Images
A surge in Russian demand has made China the world’s top car exporter, per the Wall Street Journal.
Chinese brands have flooded the Russian market since the war in Ukraine began.
European, Japanese, and Korean car brands meanwhile have largely left Russia.
China’s auto industry appears to have emerged as a beneficiary of Russia’s ongoing war with Ukraine.
Since Moscow invaded, European, Japanese, and other nations’ vehicle brands have left the Russian market, while Chinese brands have remained — and surging Russian demand has helped China notch a record year and surpass Japan as the world’s top auto exporter, according to a Wall Street Journal report.
In 2023, China sold at least five times as many cars in Russia as the 160,000 it sold in 2022, data from the China Passenger Car Association showed, per the Journal.
On Tuesday, the group estimated that 5.26 million China-made vehicles were sold overseas over the last 12 months, about a million more than Japan’s automakers.
Russia specifically accounted for about 800,000 of the 2 million additional vehicles China exported in 2023, the Journal reported.
China’s top auto company Chery saw a boom in sales to Russia over that stretch, sending 900,000 cars overseas in total. Automakers Geely and Great Wall Motor similarly saw a sharp uptick in car sales to Russia.
At the same time, domestic demand in China also bounced back in 2023 with its electric-vehicle sector fueling its strongest growth in several years, the car association said. Chinese Tesla rival BYD, for instance, beat out Elon Musk’s company as the world’s top EV seller in the most recent quarter.
In the aftermath of the invasion, purchases of foreign-made cars in Russia neared a standstill, according to a July report from Yale researchers. A combination of soaring prices, weak consumer sentiment, and dwindling supply has sent domestic car sales crashing to roughly a quarter of pre-war levels.
"Russians are just buying less cars, period," researcher Steven Tian said in an interview with Business Insider at the time. "That speaks to the weakness of the consumer in Russia. This is as close to a proxy to deteriorating consumer sentiment as there is, and the story it tells is profoundly distressing. Russians just aren’t spending money."
Between the ASUS ROG Ally, the Lenovo Legion Go and the Steam Deck, AMD has a virtual monopoly over the chips powering high-end gaming handhelds. But for the Claw, MSI is partnering up with Intel to bring a little more balance to the portable PC performance wars.
On paper and in its design, MSI’s Claw shares a lot with the ROG Ally. It has a 7-inch full HD LCD screen with 500 nits of brightness and a 120Hz refresh rate. (I asked an MSI rep if the Claw also supports VRR, but they didn’t have an immediate answer, so stay tuned.) Even its case looks very familiar, with both handhelds sporting almost identical chassis, button layouts and power buttons with built-in fingerprint sensors, except that the Claw is black and has much bigger grips, which makes it way more comfortable to hold.
But that’s where the similarities come to an end, because on the inside, the Claw is powered by either an Intel Core Ultra 7 or Core Ultra 5 chip depending on the configuration. That’s a pretty big departure amongst the sea of AMD-based alternatives, and may have some people wondering if Intel’s first foray into high-end gaming handhelds can keep up. That’s because in addition to a new chip, developers will be relying on Intel’s integrated Arc graphics and a library of drivers that simply aren’t as deep or as well tested as AMD’s. It’s also unclear how much the NPU inside Intel’s latest chip will help with things like XeSS super sampling, which is sure to play a big part in the Claw’s capabilities.
However, even on the pre-production models with unfinished software (including beta drivers) that I tested, things were surprisingly smooth. Launching games was snappy, and I only ran into a small handful of hitches. Unfortunately, I wasn’t able to pull up MSI’s built-in performance monitor, as its MSI Center game launcher is still a work in progress. A spokesperson I talked to claimed that, during internal testing, the Claw delivered 20 to 25 percent higher frame rates than an equivalent AMD-based handheld in 14 out of 15 popular titles. That’s a pretty big claim but, if those figures carry over to a larger library of modern games, AMD might soon find itself playing catch-up. But, that’s a big if.
Another benefit of going with an Intel chip is that it allows MSI to include a Thunderbolt 4 port (Thunderbolt is a proprietary connector owned by Intel), which brings super fast data speeds and the option of hooking up an external graphics dock if you want even more performance. MSI is even using one of Intel’s Killer wireless modules with support for Wi-Fi 7 and Bluetooth 5.4, so wireless connectivity is pretty much as good as it gets.
Also, while I didn’t have enough time to test its longevity, the 53Whr battery should give the Claw a significant advantage over the ROG Ally, which has just a 40Whr pack. There are huge mesh vents on its back too, which should help keep MSI’s handheld and your hands from getting too sweaty. And both the Claw’s buttons and joysticks use precise Hall effect sensors, compared to the Ally, whose sticks rely on potentiometers. In a lot of ways, the Claw feels like what a mid-life refresh for the Ally might look like, assuming ASUS felt like switching from AMD to Intel.
Even this early, there’s a lot to like about MSI’s new Intel-based handheld. And when you factor in that the Claw starts at $699 with a Core Ultra 5 chip, 16GB of RAM and 512GB of storage, $749 for a faster model with a Core Ultra 7 CPU, or $799 for one with a 1TB SSD, it looks pretty competitive on price as well.
Unfortunately, there’s no word on an official release date, though MSI says it’s shooting for a window closer to the end of Q1 than Q2. As someone who loved the huge wave of gaming handhelds we got last year, I find it really encouraging to see MSI carry that momentum into 2024 with the Claw.
This article originally appeared on Engadget at https://ift.tt/4ugriEN
OpenAI and its biggest backer, Microsoft, are facing several lawsuits accusing them of using other people’s copyrighted works without permission to train the former’s large language models (LLMs). And based on what OpenAI told the House of Lords Communications and Digital Select Committee, we might see more lawsuits against the companies in the future. It would be "impossible to train today’s leading AI models without using copyrighted materials," OpenAI wrote in its written evidence (PDF) submission for the committee’s inquiry into LLMs, as first reported by The Guardian.
The company explained that it’s because copyright today "covers virtually every sort of human expression — including blog posts, photographs, forum posts, scraps of software code, and government documents." It added that "[l]imiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens." OpenAI also insisted that it complies with copyright laws when it trains its models. In a new post on its blog made in response to The New York Times’ lawsuit, it said the use of publicly available internet materials to train AI falls under the fair use doctrine.
It admitted, however, that there is "still work to be done to support and empower creators." The company talked about the ways it’s allowing publishers to block the GPTBot web crawler from being able to access their websites. It also said that it’s developing additional mechanisms allowing rightsholders to opt out of training and that it’s engaging with them to find mutually beneficial agreements.
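The crawler opt-out mentioned above works through a site's robots.txt file; OpenAI documents "GPTBot" as the crawler's user-agent token. Here is a minimal sketch of how such rules are interpreted, using Python's standard-library robots.txt parser (the example.com URL and the exact rule set are illustrative assumptions, not OpenAI's own configuration):

```python
from urllib import robotparser

# Example robots.txt a publisher might serve to opt out of GPTBot
# while leaving the site open to other crawlers.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# GPTBot is blocked from the whole site; everyone else is allowed.
print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```

A crawler that honors the Robots Exclusion Protocol performs exactly this check before fetching a page, which is why the opt-out depends entirely on the crawler operator's cooperation.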
In some of the lawsuits filed against OpenAI and Microsoft, the plaintiffs accuse the companies of refusing to pay authors for their work while building a billion-dollar industry and enjoying enormous financial gain from copyrighted materials. The more recent case filed by a couple of non-fiction authors argued that the companies could’ve explored alternative financing options, such as profit sharing, but have "decided to steal" instead.
OpenAI didn’t address those particular lawsuits, but it did provide a direct answer to The New York Times’ complaint, which accuses it of using the publication’s news articles without permission. The publication isn’t telling the full story, OpenAI said: it was already negotiating with The Times on a "high-value partnership" that would give it access to the publication’s reporting. The two parties were apparently still in touch until December 19, and OpenAI says it only found out about the lawsuit in December by reading about it in The Times.
In its complaint, the newspaper cited instances of ChatGPT providing users with "near-verbatim excerpts" from paywalled articles. OpenAI accused the publication of intentionally manipulating prompts, such as including lengthy excerpts of articles in its interactions with the chatbot to get it to regurgitate content. It also accused The Times of cherry-picking examples from many attempts. OpenAI said the lawsuit has no merit, but that it’s still hopeful for a "constructive partnership" with the publication.
This article originally appeared on Engadget at https://ift.tt/S8acpqC