Google Pauses Gemini’s Image Generator After It Was Accused of Being Racist Against White People

https://gizmodo.com/google-pauses-gemini-ai-image-generator-white-racism-1851277547

Google is pausing the ability of its AI chatbot Gemini to generate images of people after a slew of historically inaccurate and racially diverse creations, such as images of Black Nazi soldiers and a Black medieval British king, went viral and led some to say the company was racist against white people.


“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement on X, formerly known as Twitter, on Thursday morning. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

The company initially addressed the controversy, which caused wide outrage among the anti-woke crowd, on Wednesday and said it was taking immediate action to fix the issues. In a statement, Google said that while Gemini’s ability to generate diverse images of people was “generally a good thing,” a nod to the challenges AI image generators have had in creating images of people of color, Gemini was “missing the mark here.”

Gizmodo confirmed that Gemini’s ability to generate images had been disabled in some regions, such as Europe, on Thursday morning. (The Verge was still able to generate images with Gemini using a U.S. VPN). When asked to recreate, for example, an image of a Nazi-era German soldier, the chatbot declined.

“I can’t create images yet so I’m not able to help you with that,” Gemini replied.

However, that doesn’t seem to be a boilerplate response. I also asked Gemini to recreate a “historically accurate depiction of a medieval British king,” a prompt that on Wednesday generated images of Black and Indigenous people.

In this case, Gemini said that while it couldn’t directly create images, it could “certainly guide you towards creating a historically accurate image of a medieval British king!” by providing descriptions of kings drawn from historical accounts, along with information on period clothing and hairstyles, among other details.

Notably, though, Gemini added that it was important to consider artistic freedom when it came to creating images—which is one of the things that got it in trouble in the first place.

“Remember, historical accuracy is important, but artistic freedom can also add depth and interest to your image,” Gemini said.

via Gizmodo https://gizmodo.com

February 22, 2024 at 06:24AM

Reddit Signs $60 Million Deal to Scrape Your Online Community for AI Parts: Report

https://gizmodo.com/reddit-signs-deal-scrape-your-online-community-ai-parts-1851270475

Reddit reportedly signed a $60 million deal with a “large AI company” to allow its online communities to be scraped for AI training data, according to Bloomberg on Friday. The unnamed AI company will sift through millions of posts on Reddit and train a large language model on its threads.


Reddit is reportedly weighing an IPO at a $5 billion valuation, despite bringing in only $800 million in revenue last year. Reddit is not profitable but commands a rich valuation because its online communities offer a perfect training ground for AI models. However, licensing out your user base’s thoughts and ideas doesn’t always go over well with that user base. The most popular subreddits went dark in protest last year after users took issue with the company charging for access to its application programming interface (API), a change first announced in April 2023.

Reddit’s reported deal with an “unnamed large AI company” is exactly what the platform has been looking for. Big Tech is hungry for data, and that has turned legacy news organizations, community forums, and even the University of Michigan into mere content farms. These deals, though upsetting to users, offer Reddit a path to profitability.

“The Reddit corpus of data is really valuable,” said Reddit CEO Steve Huffman to The New York Times in April. “But we don’t need to give all of that value to some of the largest companies in the world for free.”

But when Reddit started charging for API access, it didn’t just charge big companies; it also started charging small, independent researchers. This shift made it more difficult for Reddit’s moderators to manage their communities, and some argued it made for a worse experience for Reddit’s 800 million monthly active users.

“We believe that the longevity and success of this platform rest on preserving the rich ecosystem that has developed around it,” said Reddit moderators in a collective letter from last June. “The potential loss of these services due to the pricing change would significantly impact our ability to moderate efficiently, thus negatively affecting the experience for users in our communities.”

Reddit did not immediately respond to Gizmodo’s request for comment.

Apple was exploring $50 million AI deals with The New York Times, Condé Nast, and other news publishers in December. Shutterstock is also licensing its human-made content to OpenAI to train its models. Twitter, Instagram, and YouTube have also become increasingly valuable in recent years, as they’re now seen as content gold mines.

The platform also introduced ads in recent years and, in 2023, made it impossible for users to opt out of seeing advertiser content. As Reddit becomes a public company, there’s growing concern among users that management will hurt the thriving community forum it has built.

There’s also a bigger concern about how AI companies are licensing data. Content platforms are signing million-dollar licensing agreements with AI companies, but the actual people who created this content aren’t getting a thing. Meanwhile, AI threatens to replace content creators in the editorial, graphic design, and film industries.

via Gizmodo https://gizmodo.com

February 20, 2024 at 09:45AM

Secret Mathematical Patterns Revealed in Bach’s Music

https://www.scientificamerican.com/article/secret-mathematical-patterns-revealed-in-bachs-music/

Baroque German composer Johann Sebastian Bach produced music that is so scrupulously structured it’s often compared to math. Although few among us are emotionally affected by mathematics, Bach’s works—and music in general—move us. It’s more than sound; it’s a message. And now, thanks to tools from information theory, researchers are starting to understand how Bach’s music gets that message across.

By representing scores as simple networks of dots, called nodes, connected by lines, called edges, scientists quantified the information conveyed by hundreds of Bach’s compositions. An analysis of these musical networks published on February 2 in Physical Review Research revealed that Bach’s many musical styles, such as chorales and toccatas, differed markedly in how much information they communicated—and that the musical networks contained structures that could make their messages easier for human listeners to understand.

“I just found the idea really cool,” says physicist Suman Kulkarni of the University of Pennsylvania, lead author of the new study. “We used tools from physics without making assumptions about the musical pieces, just starting with this simple representation and seeing what that can tell us about the information that is being conveyed.”


Researchers quantified the information content of everything from simple sequences to tangled networks using information entropy, a concept introduced by mathematician Claude Shannon in 1948.

As its name suggests, information entropy is mathematically and conceptually related to thermodynamic entropy. It can be thought of as a measure of how surprising a message is—where a “message” can be anything that conveys information, from a sequence of numbers to a piece of music. That perspective may feel counterintuitive, given that, colloquially, information is often equated with certainty. But the key insight of information entropy is that learning something you already know isn’t learning at all.

A conversation with a person who can only ever say one thing, such as the character Hodor in the television series Game of Thrones, who only says “Hodor,” would be predictable but uninformative. A chat with Pikachu would be a bit better; the Pokémon can only say the syllables in its name, but it can rearrange them, unlike Hodor. Likewise, a musical piece with just one note would be relatively easy for the brain to “learn,” or accurately reproduce as a mental model, but the piece would struggle to get any kind of message across. Watching a coin flip with a double-headed coin would yield no information at all.
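
To make that concrete, here is a minimal sketch (my illustration, not code from the study) that scores each of these messengers with Shannon entropy, measured in bits per symbol:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy, in bits per symbol, of a sequence of symbols."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hodor can only ever say one thing: perfectly predictable, zero information.
print(shannon_entropy(["hodor"] * 10))                      # 0.0

# Pikachu can at least rearrange its syllables: a bit more surprise.
print(shannon_entropy(["pi", "ka", "chu", "pika", "pi"]))   # ~1.9

# A double-headed coin, like Hodor, conveys nothing...
print(shannon_entropy(["H"] * 100))                         # 0.0

# ...while a fair coin maximizes surprise for two outcomes: 1 bit per flip.
print(shannon_entropy(["H", "T"] * 50))                     # 1.0
```

A piece of music with just one note would sit at the Hodor end of this scale; richer transition structures push the number up.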

Of course, packing a message full of information isn’t much good if whatever—or whoever—receives it can’t accurately understand that information. And when it comes to musical messages, researchers are still working out how we learn what music is trying to tell us.

“There are a few different theories,” says cognitive scientist Marcus Pearce of Queen Mary University of London, who wasn’t involved in the recent Physical Review Research study. “The main one, I think, at the moment, is based on probabilistic learning.”

In this framework, “learning” music means building up accurate mental representations of the real sounds we hear—what researchers call a model—through an interplay of anticipation and surprise. Our mental models predict how likely it is that a given sound will come next, based on what came before. Then, Pearce says, “you find out whether the prediction was right or wrong, and then you can update your model accordingly.”
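
As a rough sketch of that loop (my illustration of the general framework, not Pearce’s actual IDyOM model), a listener can be modeled as a running table of note-to-note counts that assigns a probability to each incoming note and then updates, whether the prediction was right or wrong:

```python
from collections import Counter, defaultdict

class NotePredictor:
    """Toy first-order model of probabilistic music learning."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # previous note -> next-note counts

    def predict(self, prev, note):
        """Probability the model currently assigns to `note` following `prev`."""
        seen = self.counts[prev]
        total = sum(seen.values())
        return seen[note] / total if total else None  # None: nothing learned yet

    def update(self, prev, note):
        self.counts[prev][note] += 1  # fold in the observation, right or wrong

model = NotePredictor()
melody = ["C", "D", "E", "C", "D", "E", "C", "D", "G"]
for prev, note in zip(melody, melody[1:]):
    print(f"{prev} -> {note}: predicted p = {model.predict(prev, note)}")
    model.update(prev, note)
# By the second C -> D -> E the model is unsurprised (p = 1.0); the final
# D -> G violates its prediction (p = 0.0), and the update folds that in.
```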

Kulkarni and her colleagues are physicists, not musicians. They wanted to use the tools of information theory to scour music for informational structures that could have something to do with how humans glean meaning from melody.

So Kulkarni boiled down 337 Bach compositions into webs of interconnected nodes and calculated the information entropy of the resulting networks. In these networks, each note of the original score is a node, and each transition between notes is an edge. For example, if a piece included an E note followed by a C and a G played together, the node representing E would be connected to the nodes representing C and G.
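
A toy version of that construction (an illustrative sketch, not the authors’ code) takes only a few lines with the networkx library, using the article’s E, C, and G example:

```python
import networkx as nx

# Each distinct note becomes a node; an edge links note X to note Y
# whenever Y directly follows X in the score. Tuples stand for chords.
score = ["E", ("C", "G"), "E", "D", ("C", "G")]

G = nx.DiGraph()
previous = []
for event in score:
    current = list(event) if isinstance(event, tuple) else [event]
    for src in previous:
        for dst in current:
            G.add_edge(src, dst)  # one transition edge per note pair
    previous = current

print(sorted(G.edges()))
# [('C', 'E'), ('D', 'C'), ('D', 'G'), ('E', 'C'), ('E', 'D'), ('E', 'G'), ('G', 'E')]
# The E node is connected to both C and G, as in the article's example.
```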

Networks of note transitions in Bach’s music packed more of an informational punch than randomly generated networks of the same size—the result of greater variation in the networks’ nodal degrees, or the number of edges connected to each node. Additionally, the scientists uncovered variation in the information structure and content of Bach’s many compositional styles. Chorales, a type of hymn meant to be sung, yielded networks that were relatively sparse in information, though still more information-rich than randomly generated networks of the same size. Toccatas and preludes, musical styles that are often written for keyboard instruments such as the organ, harpsichord and piano, had higher information entropy.
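
The study’s full analysis is more involved, but the connection between degree variation and information can be sketched with a standard random-walk entropy measure (my stand-in, not the paper’s exact metric): a listener at a note with k connections faces log2(k) bits of surprise about where the music goes next, and the network’s entropy is the degree-weighted average of that surprise.

```python
import math
import networkx as nx

def walk_entropy(G):
    """Entropy, in bits, of a uniform random walk on an undirected graph:
    S = sum_i (k_i / 2E) * log2(k_i), where k_i is the degree of node i."""
    degrees = dict(G.degree())
    two_e = sum(degrees.values())  # sum of degrees = twice the edge count
    return sum((k / two_e) * math.log2(k) for k in degrees.values() if k > 0)

# A hub-heavy network (a few notes connected to many others, Bach-style)
# versus a uniform random network with the same number of nodes and edges.
hubby = nx.barabasi_albert_graph(60, 2, seed=1)  # heavy-tailed degrees
uniform = nx.gnm_random_graph(60, hubby.number_of_edges(), seed=1)

print(f"hub-heavy network: {walk_entropy(hubby):.2f} bits")
print(f"uniform random:    {walk_entropy(uniform):.2f} bits")
# Greater spread in node degree generally yields the larger number.
```

That gap mirrors, in miniature, the comparison the researchers made between Bach’s networks and random networks of the same size.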

“I was particularly excited by the higher levels of surprise in the toccatas than in the chorale works,” says study co-author and physicist Dani Bassett of the University of Pennsylvania. “These two sorts of pieces feel different in my bones, and I was interested to see that distinction manifest in the compositional information.”

Network structures in Bach’s compositions might also make it easier for human listeners to learn those networks accurately. Humans don’t learn networks perfectly. We have biases, Bassett says. “We kind of ignore some of the local information in favor of seeing the bigger informational picture across the entire system,” they add. By modeling this bias in how we build our mental models of complex networks, the researchers compared the total information of each musical network to the amount of information a human listener would glean from it.

The musical networks contained clusters of note transitions that might help our biased brains “learn” the music—to reproduce the music’s informational structure accurately as a mental model—without sacrificing much information.

“The particular kind of way in which they capture learnability is pretty interesting,” says Peter Harrison of the University of Cambridge, who wasn’t involved in the study. “It’s very reductive in a certain sense. But it’s quite complementary to other theories we have out there, and learnability is a pretty hard thing to get a handle on.”

This type of network analysis isn’t particular to Bach—it could work for any composer. Pearce says it would be interesting to use the approach to compare different composers or look for informational trends through music history. For her part, Kulkarni is excited to analyze the informational properties of scores from beyond the Western musical tradition.

Music isn’t just a sequence of notes, though, Harrison notes. Rhythm, volume, instruments’ timbre—these elements and more are important dimensions of the musical messages that weren’t considered in this study. Kulkarni says she’d be interested in including these aspects of music in her networks. The process could also work the other way, Harrison adds: rather than boiling musical features down to a network, he’s curious how network features translate to things that a musician would recognize.

“A musician would say, ‘What are the actual musical rules, or the musical characteristics, that are driving this? Can I hear this on a piano?’” Harrison says.

Finally, it’s not yet clear how, exactly, the network patterns identified in the new study translate into the lived experience of listening to a Bach piece—or any music, Pearce says. Settling that will be a matter for music psychology, he continues. Experiments could reveal “if, actually, those kinds of things are perceivable by people and then what effects they have on the pleasure that people have when they’re listening to music.” Likewise, Harrison says he’d be interested in experiments testing whether the types of network-learning mistakes the researchers modeled in this study are actually important for how people learn music.

“The fact that humans have this kind of imperfect, biased perception of complex informational systems is critical for understanding how we engage in music,” Bassett says. “Understanding the informational complexity of Bach’s compositions opens new questions regarding the cognitive processes that underlie how we each appreciate different sorts of music.”

via Scientific American https://ift.tt/Ly8UAn9

February 16, 2024 at 06:40AM

The Morning After: Want some hybrid meat rice?

https://www.engadget.com/the-morning-after-want-some-hybrid-meat-rice-121549152.html?src=rss

If the image itself isn’t unappetizing enough, the description might put you off. South Korean researchers have made a hybrid rice variant, infused with cow muscle and fat cells, creating a bright pink grain that is one part plant and one part meat. The team hopes to eventually create a cheaper and more sustainable source of protein, with a much lower carbon footprint than actual beef. But please: change the color.


The meat cells grow both on the surface of the rice grain and inside of the grain itself. After around ten days, you get the finished product. The study, published in Matter, suggests the rice grains taste like beef sushi, which is made of cow and rice. So yes, that tracks.

— Mat Smith

The biggest stories you might have missed

The best robot vacuums on a budget for 2024

Ayaneo’s NES-inspired mini PC is more than a retro tribute

Marvel’s X-Men ‘97 will pick up from where the 90s animated series left off


Bose Ultra Open Earbuds review

Function meets fashion.


Bose’s $299 Ultra Open Earbuds sit outside your ear canal and clip onto the ridge of your ear to stay in place. Because of the open design, active noise cancellation (ANC) is moot. Open-type earbuds have become increasingly popular, largely for the allure of “all day” wear that keeps you in tune with your surroundings, and Bose developed this model to fix the shortcomings of its previous open design. They read as more fashion accessory than typical earbud, however.

Continue reading.

Xbox confirms four of its games are coming to more popular consoles

Not Starfield or Indiana Jones, however.

On the latest episode of the Official Xbox Podcast, Microsoft Gaming CEO Phil Spencer said the company is bringing four of its games to "the other consoles." Contrary to previous rumors, Starfield and Indiana Jones and the Great Circle are not coming to PS5 or Switch for now. Reports have suggested that Hi-Fi Rush, Sea of Thieves, Halo and Gears of War may appear on Nintendo and Sony hardware. Both of those consoles have a far larger install base than Xbox Series X/S, which are estimated to have shipped a combined 27 million units, compared with 54.8 million PS5s and nearly 140 million Switches.

Continue reading.

OpenAI’s new model can generate minute-long videos from text prompts

It’s still in testing before being offered to the public.

OpenAI on Thursday announced Sora, a brand-new model that generates high-definition videos up to one minute in length from text prompts. Sora, which means “sky” in Japanese, won’t be available to the general public any time soon. Instead, OpenAI is first offering it to a small group of academics and researchers who will assess its potential for harm and misuse. The company said on its website: “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.” Other companies, including Meta, Google, and Runway, have either teased text-to-video tools or made them available to the public. Still, no other tool can generate videos as long as 60 seconds.

Continue reading.

This article originally appeared on Engadget at https://ift.tt/FIhLZBp

via Engadget http://www.engadget.com

February 16, 2024 at 06:27AM

Lunar lining: Columbia coat tech insulates Intuitive Machines’ newly launched moon lander

http://www.collectspace.com/news/news-021524a-intuitive-machines-im1-columbia-sportswear-insulation.html

A technology used to protect the first astronauts to land on the moon is now on its way back to the lunar surface — and you may already have some of it hanging in your coat closet.

Intuitive Machines’ Nova-C lunar lander "Odysseus" ("Odie" for short) will attempt to become the first U.S.-built commercial spacecraft to land on the moon when it tries for a touchdown near the lunar south pole on Feb. 22. The robotic probe, which lifted off Earth on Thursday (Feb. 15), will rely on its cryogenic propellant supply to power its descent to the surface. Protecting the propulsion tank from the extreme temperatures of outer space is an insulation based on what was first used by NASA’s Apollo missions more than 50 years ago but has since been improved upon by Columbia Sportswear.


"They took the Kapton coating from Apollo and turned it into the reflective discs in their jackets," said Peter McGrath, chief operating officer at Intuitive Machines, in an interview with collectSPACE.com.

The same gold-dotted lining in Columbia’s jackets and other apparel grew out of the polyimide thermal blankets that were used to cover the Apollo lunar modules.

"We realized immediately that this was a partnership that seemed to make sense," said Haskell Beckham, vice president of innovation at Columbia Sportswear. "Once they described what they were doing and I described to them what we have in our line already for thermal reflective insulation, we realized that our material actually could provide a benefit to their lunar lander."

Intuitive Machines’ Nova-C is one of several new U.S. robotic landers that exist or are in development as a result of NASA’s Commercial Lunar Payload Services (CLPS) initiative. Rather than build its own lander, which as a government project would take longer and cost more, the space agency has turned to companies to land its science instruments on the moon while at the same time flying commercial payloads to make the mission profitable.

The Columbia Sportswear partnership is one such example on IM-1, the first flight of the Nova-C, which is heading for a landing at Malapert A, a crater located about 186 miles (300 kilometers) from the moon’s south pole. Intuitive Machines used Columbia’s Omni-Heat Infinity heat-reflective technology to cover the A2 closeout panel that is shielding the lander’s cryogenic propulsion tank.

“The material that we use and have been using for over a decade in our jackets is very similar to what is currently used in the MLI [multi-layer insulation] blankets in the aerospace industry,” said Beckham. “We have got a polyester fabric. The little dots are metallized, it’s a multi-layer stack, but it is aluminum. And we have a layer on top of the aluminum, which in this case is a gold pigment.”

The same Omni-Heat Infinity technology that is now on its way to the moon on Intuitive Machines’ Nova-C lander can be found lining the latest Columbia Sportswear jackets. (Image credit: Columbia Sportswear)

More than just a commercial off-the-shelf solution, the use of Omni-Heat Infinity on Nova-C is also helping Columbia develop better clothing.

"We have learned so much after working with Intuitive Machines," Beckham said. "So what is in our original jackets is a single layer of reflective insulation lining. Once we were introduced to these MLI or multi-layer insulation blankets, we felt we could probably do that on our jackets, too. So if you look at our latest jackets, there is a lining that reflects your own body heat and there’s also another layer of foil on the shell but the gold is facing in towards the body."

“This is effectively a multi-layer insulation. On a per-weight basis, it’s the warmest jacket Columbia’s ever made,” he said.

Even though Odysseus has yet to land on the moon, Intuitive Machines and Columbia are already working on incorporating Omni-Heat Infinity into the second Nova-C lunar lander.

Rendering of Intuitive Machines’ IM-1 Nova-C lunar lander adorned with Columbia’s logo on moon’s surface. (Image credit: Columbia Sportswear)

“We are also partnering on the second mission,” said Beckham. “We are talking about different materials and different places to put our insulation materials on that lander as well.”

“We are looking at how we can use the Columbia technology to keep our landers alive during the cold [lunar, 14-Earth-day] night,” said McGrath. “We want to be able to isolate avionics systems that we need to keep warm, but we also need to keep them cool when we are using them during the day.”

Columbia, which by coincidence shares the same name as the Apollo 11 command module, is promoting the connection between IM-1 and its jackets with a new section on its website. The company is also taking over the exterior of the entertainment arena Sphere in Las Vegas on Feb. 19 to highlight the role that Omni-Heat Infinity technology is playing throughout the history-making mission.


via Space https://www.space.com

February 15, 2024 at 05:14AM

A Virus Found in Wastewater Beat Back a Woman’s ‘Zombie’ Bacteria

https://www.wired.com/story/phage-therapy-bacteria-zombie-pittsburgh/

For years, a type of bacteria called Enterococcus faecium lurked in Lynn Cole’s bloodstream. Often found in hospitals, E. faecium is usually a gut-dwelling bacterium but can creep into other areas of the body. Her doctors tried various antibiotics, but the bacteria was zombie-like: It kept coming back.

Running out of options after a month-long hospitalization in 2020, Cole and her family agreed to try an experimental treatment called phage therapy. Phages aren’t drugs in the traditional sense. They are tiny, naturally occurring viruses that selectively kill bacteria. Highly specific to the bacteria they attack, phages are showing promise against hard-to-treat infections when antibiotics fail.

Phage therapy is not yet approved in the US, UK, or Western Europe but is used regularly in Georgia, Poland, and Russia. Several clinical trials are underway to confirm its safety and test its efficacy. But to treat Cole, researchers at the University of Pittsburgh School of Medicine first needed to find a phage that would work against her particular bacterial strain.

Phages live in places where bacteria live, which is to say, everywhere. “We have found that a good place to look for phages is in environments where the bacteria you want to target are abundant,” says Daria Van Tyne, assistant professor of infectious diseases at Pitt and an author on a study about Cole’s case that was published today in the journal mBio.

So Van Tyne and her team looked to a source that’s teeming with gut bacteria: wastewater. They screened dozens of phages they had isolated from wastewater samples, but couldn’t find a match. So they reached out to colleagues at the University of Colorado for help.

“The thing about phages is that they’re very much the perfect example of precision medicine, because they are so exquisitely specific to a bacterium,” says Breck Duerkop, an associate professor of immunology and microbiology at the University of Colorado Anschutz School of Medicine and an author on the study.

Phages recognize and attach to certain receptors on the surface of bacteria. After entering a bacterial cell, they make copies of themselves and disrupt the bacteria’s normal function, causing the cell to burst.

Van Tyne’s team mailed a sample of Cole’s bacteria to Duerkop’s lab, which had been studying phages that interact with E. faecium. Duerkop’s group tested the sample against phages they had also fished out of wastewater and found one that they thought would target the bacteria. They sent the phage to Pittsburgh, where Van Tyne and her team prepared it to give to Cole.

Since phages are viruses, they need a host in order to replicate. That means they have to be grown inside cultivated samples of the bacteria they infect. Bacteria grow quickly in the lab, but the phages have to be removed, purified, and then tested to make sure they’re safe for patients to receive. The whole process of making a suitable phage therapy can take weeks or even months from the time a lab gets a request.

via Wired Top Stories https://www.wired.com

February 14, 2024 at 01:09PM

OpenAI Gives ChatGPT a Memory

https://www.wired.com/story/chatgpt-memory-openai/

The promise and peril of the internet has always been a memory greater than our own, a permanent recall of information and events that our brains can’t store. More recently, tech companies have promised that virtual assistants and chatbots could handle some of the mnemonic load, by both remembering and reminding. It’s a vision of the internet as a conversation layer, rather than a repository.

That’s what OpenAI’s latest release is supposed to provide. The company is starting to roll out long-term memory in ChatGPT—a function that maintains a memory of who you are, how you work, and what you like to chat about. Called simply Memory, it’s an AI personalization feature that turbocharges the “custom instructions” tool OpenAI released last July. Using ChatGPT custom instructions, a person could tell the chatbot that they’re a technology journalist based in the Bay Area who enjoys surfing, and the chatbot would consider that information in future responses within that conversation, like a first date who never forgets the details.

Now, ChatGPT’s memory persists across multiple chats. The service will also remember personal details about a ChatGPT user even if they don’t make a custom instruction or tell the chatbot directly to remember something; it just picks up and stores details as conversations roll on. This will work across both the free (ChatGPT 3.5) and paid (ChatGPT 4) versions.

In a demo with WIRED ahead of the feature’s release, Joanne Jang, the company’s product lead on model behavior, typed in a few sample queries. In one, Jang asked ChatGPT to write up a social media post for the opening of a cafe called Catio on Valentine’s Day; the bot performed the task. In another query, Jang indicated that she was opening a cafe called Catio on Valentine’s Day. She then navigated to Memory in ChatGPT’s settings; the bot had stored this piece of information about her. Similarly, when Jang asked for a coding tip and then indicated that she uses Python, ChatGPT recorded in Memory that Jang uses Python exclusively.

These bits of data will be referenced in all of Jang’s future conversations with ChatGPT. Even if she doesn’t reference Catio directly in another chat, ChatGPT would bring it up when relevant.


via Wired Top Stories https://www.wired.com

February 13, 2024 at 12:21PM