‘The Risk of Extinction:’ AI Leaders Agree on One-Sentence Warning About Technology’s Future

https://gizmodo.com/ai-chatgpt-extinction-warning-letter-openai-sam-altman-1850486688


Over 350 AI executives, researchers, and industry leaders signed a one-sentence warning released Tuesday, saying that we should try to stop their technology from destroying the world.


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement, released by the Center for AI Safety. The signatories include Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Geoffrey Hinton, the so-called “Godfather of AI,” who recently quit Google over fears about his life’s work.

As the public conversation about AI shifted from awestruck to dystopian over the last year, a growing number of advocates, lawmakers, and even AI executives united around a single message: AI could destroy the world and we should do something about it. What that something should be, specifically, is entirely unsettled, and there’s little consensus about the nature or likelihood of these existential risks.

There’s no question that AI is poised to flood the world with misinformation, and a large number of jobs will likely be automated into oblivion. The question is just how far these problems will go, and when or if they will dismantle the order of our society.

Usually, tech executives tell you not to worry about the threats posed by their work, but the AI business is taking the opposite tack. OpenAI’s Sam Altman testified before the Senate Judiciary Committee this month, calling on Congress to establish an AI regulatory agency. The company published a blog post arguing that companies should need a license to work on AI “superintelligence.” Altman and the heads of Anthropic and Google DeepMind recently met with President Biden at the White House for a chat about AI regulation.

Things break down when it comes to specifics, though, which explains the brevity of Tuesday’s statement. Dan Hendrycks, executive director of the Center for AI Safety, told the New York Times they kept it short because experts don’t agree on the details of the risks, or what, exactly, should be done to address them. “We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”

It may seem strange that AI companies would call on the government to regulate them, which would ostensibly get in their way. It’s possible that unlike the leaders of other tech businesses, AI executives really care about society. There are plenty of reasons to think this is all a bit more cynical than it seems, however. In many respects, light-touch rules would be good for business. This isn’t new: some of the biggest advocates for a national privacy law, for example, include Google, Meta, and Microsoft.

For one, regulation gives businesses an excuse when critics start making a fuss. That’s something we see in the oil and gas industry, where companies essentially throw up their hands and say “Well, we’re complying with the law. What more do you want?” Suddenly the problem is incompetent regulators, not the poor corporations.

Regulation also makes it far more expensive to operate, which can benefit established companies when it hampers smaller upstarts that could otherwise be competitive. That’s especially relevant in the AI business, where it’s still anybody’s game and smaller developers could pose a threat to the big boys. With the right kind of regulation, companies like OpenAI and Google could essentially pull up the ladder behind them. On top of all that, weak nationwide laws get in the way of pesky state lawmakers, who often push harder on the tech business.

And let’s not forget that the regulation AI businessmen are calling for is about hypothetical problems that might happen later, not real problems that are happening now. Tools like ChatGPT make up lies, they have baked-in racism, and they’re already helping companies eliminate jobs. In OpenAI’s calls to regulate superintelligence — a technology that does not exist — the company makes a single, hand-waving reference to the actual issues we’re already facing: “We must mitigate the risks of today’s AI technology too.”

So far though, OpenAI doesn’t actually seem to like it when people try to mitigate those risks. The European Union took steps to do something about these problems, proposing special rules for AI systems in “high-risk” areas like elections and healthcare, and Altman threatened to pull his company out of EU operations altogether. He later walked the statement back and said OpenAI has no plans to leave Europe, at least not yet.


via Gizmodo https://gizmodo.com

May 30, 2023 at 10:49AM

We’ve Got Power: Dream Chaser Spaceplane Aces Critical Test as Epic Maiden Mission Draws Near

https://gizmodo.com/we-ve-got-power-dream-chaser-spaceplane-aces-critical-1850495336


Sierra Space fired up its spaceplane for the first time at its assembly facility, a sign that the Dream Chaser shuttle could soon be ready for its first mission to low Earth orbit.


The Colorado-based company announced on Wednesday that it had successfully completed the first power-up of its Dream Chaser spaceplane. During the test, engineers simulated the power that would otherwise be generated by Dream Chaser’s solar arrays once the spaceplane is in orbit and its systems are turned on.

“This is a milestone that points to the future and is a key moment in a long journey for Dream Chaser,” Tom Vice, CEO of Sierra Space, said in the company’s statement. “With this significant achievement, our Dream Chaser spaceplane is poised to redefine commercial space travel, opening up new possibilities for scientific research, technological advancements, and economic opportunities in space.”

The Dream Chaser is an orbital spaceplane designed to fly to low Earth orbit, carrying crew and cargo to orbital pit stops such as the International Space Station (ISS). The futuristic-looking shuttle is designed to carry up to 12,000 pounds (5,443 kilograms) of cargo. Because Dream Chaser cannot fly to space on its own, a big rocket, namely ULA’s Vulcan Centaur, is required to deliver the craft to low Earth orbit. Like NASA’s Space Shuttle, however, Dream Chaser is designed to survive atmospheric reentry and perform runway landings.

Sierra Space is targeting the end of 2023 for Dream Chaser’s first flight from Kennedy Space Center in Florida. This flight is under a supply mission contract with NASA to send cargo to the ISS. The company also wants to launch crewed Dream Chaser missions to its own space station, Orbital Reef, a collaboration with Jeff Bezos’ Blue Origin.

The company is preparing to ship the first Dream Chaser, named Tenacity, to NASA’s Neil Armstrong Test Facility in Ohio for testing ahead of its inaugural flight, SpaceNews reported. The exact date of its launch, however, has yet to be disclosed. The spaceplane is set to launch on board the second mission for ULA’s Vulcan Centaur, but the rocket’s first mission has been repeatedly postponed.


via Gizmodo https://gizmodo.com

June 1, 2023 at 11:19AM

Please Don’t Make These 7 Nasty ChatGPT-Generated Vegan Recipes

https://gizmodo.com/chatgpt-ai-recipes-food-7-bad-vegan-recipes-1850496776


If the artificial intelligence hypebeasts are to be believed, ChatGPT-style large language models are on the cusp of radically altering the global workforce, gutting entire industries, and, quite possibly, eliminating life on Earth as we know it. Just don’t ask one to prepare you a basic meal or a palatable cocktail.

Chefs and taste testers at World Of Vegan learned this lesson the hard way after testing more than 100 different spring recipes, date night dishes, and dessert ideas created by OpenAI’s ChatGPT. The results were “hilariously pitiful,” one chef said. Out of the dozens of recipes churned out, only one dish, cauliflower tacos, was deemed “successful” by World of Vegan’s team of chefs.

Many of the other failed dishes sound fine at first glance, but on closer analysis, the chefs found numerous examples of ingredients clashing or cooked dishes winding up a soggy, nasty mess. A simple spring veggie wrap generated by ChatGPT, for example, ended up looking more like last night’s dinner after a brutal hangover. Oven-baked chocolate cupcakes, meanwhile, somehow wound up looking like a deep crater left by an errant bombing run.

The sudden rise of advanced large language models to the mainstream in recent years has revealed how, in many areas, AI and software are outpacing robotics. Ironically, that doesn’t seem to be the case in food service. Rudimentary robots can already flip burgers and help assemble more complicated meals in the physical world while ChatGPT fails to put together beginner-level dishes.

“I imagine specially-programmed robots can cook up chef-created, Michelin Star-worthy meals, no problem,” World of Vegan Founder & CEO Michelle Cehn told Gizmodo. “However, when it comes to generating sensible—let alone innovative and delectable—recipes, the technology proved to be too immature in its development to consistently achieve our desired results.”

Chatbots aren’t entirely useless when it comes to food. The model can, in a pinch, help brainstorm a dish if you give it a handful of ingredients you happen to have on hand since the model is essentially predicting the most likely combinations of those individual elements. On the other hand, AI’s tendency to simply make shit up seemingly out of whole cloth could leave hungry users sorely disappointed. Oh, and don’t expect it to properly fill out your grocery list either.
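That “handful of ingredients” use case is about the one place a chatbot earns its keep in the kitchen, and it’s simple to wire up. Below is a minimal sketch of the idea using OpenAI’s Python client as it worked in mid-2023; the model name, prompt wording, and ingredient list are illustrative assumptions, not anything World of Vegan published.

```python
# Minimal sketch: asking a ChatGPT-family model to brainstorm a dish from
# ingredients on hand, using the openai Python package as it worked in
# mid-2023 (0.x API). The prompt wording and ingredient list are
# illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

ingredients = ["cauliflower", "corn tortillas", "lime", "cabbage", "chipotle"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a vegan recipe assistant. Only combine "
                       "ingredients the user actually lists.",
        },
        {
            "role": "user",
            "content": "Suggest one simple dinner using only: "
                       + ", ".join(ingredients),
        },
    ],
    temperature=0.7,  # some creativity, but not total free association
)

print(response["choices"][0]["message"]["content"])
```

Even then, treat the output as a starting point: the model is predicting plausible-sounding combinations, and nothing in the API checks whether the result is actually edible.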

“It may be okay, and even entertaining, to use ChatGPT to help whip up a recipe,” World of Vegan Blog assistant and taste tester Erin Wysocars said. “[But] be warned that it may still flop and those ingredients you wanted to save may turn into garbage anyway. May your stomach rumble and empty fridge serve as the lesson here!”

“For recipes, ChatGPT is a tool, but should not be the tool for finding human-generated and tested recipes,” Wysocars added.

Keep reading to see some of ChatGPT’s terrible concoctions. Recipes for each dish are included, should you dare tempt fate.

via Gizmodo https://gizmodo.com

June 2, 2023 at 06:07AM

YouTube’s recommendations are leading kids to gun videos, report says

https://www.engadget.com/youtubes-recommendations-are-leading-kids-to-gun-videos-report-says-231207580.html?src=rss

YouTube’s recommendations are leading young kids to videos about school shootings and other gun-related content, according to a new report. The Tech Transparency Project (TTP), a nonprofit watchdog group, found that YouTube’s recommendation algorithm is “pushing boys interested in video games to scenes of school shootings, instructions on how to use and modify weapons” and other gun-centric content.

The researchers behind the report set up four new YouTube accounts posing as two 9-year-old boys and two 14-year-old boys. All accounts watched playlists of content about popular video games, like Roblox, Lego Star Wars, Halo and Grand Theft Auto. The researchers then tracked the accounts’ recommendations during a 30-day period last November.

“The study found that YouTube pushed content on shootings and weapons to all of the gamer accounts, but at a much higher volume to the users who clicked on the YouTube-recommended videos,” the TTP writes. “These videos included scenes depicting school shootings and other mass shooting events; graphic demonstrations of how much damage guns can inflict on a human body; and how-to guides for converting a handgun to a fully automatic weapon.”

As the report notes, several of the recommended videos appeared to violate YouTube’s own policies. Recommendations included videos of a young girl firing a gun and tutorials on converting handguns into “fully automatic” weapons and other modifications. Some of these videos were also monetized with ads.

In a statement, a YouTube spokesperson pointed to the YouTube Kids app and its in-app supervision tools, which “create a safer experience for tweens and teens” on its platform.

“We welcome research on our recommendations, and we’re exploring more ways to bring in academic researchers to study our systems,” the spokesperson said. “But in reviewing this report’s methodology, it’s difficult for us to draw strong conclusions. For example, the study doesn’t provide context of how many overall videos were recommended to the test accounts, and also doesn’t give insight into how the test accounts were set up, including whether YouTube’s Supervised Experiences tools were applied.”

The TTP report is far from the first time researchers have raised questions about YouTube’s recommendation algorithm. The company has also spent years working to keep so-called “borderline” content — videos that don’t break its rules outright but may otherwise be unsuitable for mass distribution — out of recommendations. And last year, the company said it was considering disabling sharing altogether on some such content.

This article originally appeared on Engadget at https://ift.tt/BDpIVn1

via Engadget http://www.engadget.com

May 16, 2023 at 06:24PM

Google’s Pixel 8 Pro could have a built-in thermometer

https://www.engadget.com/googles-pixel-8-pro-could-have-a-built-in-thermometer-114808668.html?src=rss

Google’s Pixel 8 Pro could come with a feature that’s rarely found on phones. 91mobiles has published a video from tipster Kuba Wojciechowski showing what looks like a Pixel device being used to measure a person’s temperature. Yep, if the leak is legit, the upcoming flagship Pixel will have a built-in thermometer. The video shows an infrared sensor, similar to the ones used by contactless thermometers, inside the metal panel where the rear cameras are also located.

Based on the demonstration of the built-in thermometer, users will have to take off their glasses and any other eye or forehead accessories. They then have to bring the sensor as close to their forehead as possible without actually touching it, then move the phone toward their temple within five seconds. 91mobiles says the sensor could also be used to measure the temperature of inanimate objects, but the video didn’t demonstrate how that would work. Google’s employees have reportedly been testing the feature as well.

A previous leak of computer renders showed the Pixel 8 Pro as a rounded version of the Pixel 7, and this new video does show a device that’s identical to those renders. While the upcoming phone bears a lot of physical similarities to its predecessor, its three rear cameras are now inside one module. On the Pixel 7 Pro, one of the three camera sensors sits in a separate module.

A thermometer is perhaps a curious feature to add to a phone, especially now that pandemic-era measures are no longer in force. Take note that this is merely a leak, and it remains to be seen whether the Pixel 8 Pros that make their way to buyers will actually have the sensor.

91mobiles‘ video has already been deleted due to a copyright claim, but one of the publication’s readers tweeted a copy that we’ve embedded below.

This article originally appeared on Engadget at https://ift.tt/ocOaLxR

via Engadget http://www.engadget.com

May 19, 2023 at 07:00AM

Meta reportedly wants to license Magic Leap’s AR technology

https://www.engadget.com/meta-reportedly-wants-to-license-magic-leaps-ar-technology-213923148.html?src=rss

Meta could turn to Magic Leap for help staying ahead of Apple and other new entrants in the soon-to-be crowded AR space. According to the Financial Times, the two companies are in talks to sign a multi-year IP licensing and manufacturing pact. Details on the negotiations are few, but according to the outlet’s sources, a potential partnership is not expected to produce a jointly developed headset. Instead, a deal could see Magic Leap provide Meta with access to some of its optical tech. The partnership could also see the startup assist with manufacturing Meta devices, allowing the tech giant to produce more of its VR headsets domestically at a time when US companies face growing pressure to lessen their dependence on China.

Meta did not immediately respond to Engadget’s request for comment. Magic Leap told the Financial Times that partnerships were becoming a “significant line of business and growing opportunity for Magic Leap.” Additionally, in a blog post titled “What’s Next for Magic Leap,” CEO Peggy Johnson said late last year that the company had “received an incredible amount of interest from across the industry to license our IP and utilize our patented manufacturing process to produce optics for others seeking to launch their own mixed-reality technology.”

The timing of the report is notable for a couple of reasons. Meta is under pressure from investors to show something for all the money it has spent pursuing CEO Mark Zuckerberg’s vision for the future of computing. The company does not expect to make a profit from all of its metaverse projects for another few years. At the same time, it is burning about $10 billion annually on its Reality Labs division. Separately, Apple is widely expected to enter the AR headset market next month when the company holds its WWDC developer conference.

This article originally appeared on Engadget at https://ift.tt/O5jZsRn

via Engadget http://www.engadget.com

May 21, 2023 at 04:47PM

Microsoft is helping developers build their own ChatGPT-compatible AI copilots

https://www.engadget.com/microsoft-is-helping-developers-build-their-own-chatgpt-compatible-ai-copilots-150029815.html

Microsoft has a lot of news at this year’s Build conference around its AI "copilots" for Windows 11 and other products, but it wants third-party developers in on the action too. The company announced that it has expanded its AI plugin ecosystem and provided a framework for building AI apps and copilots. At the same time, it’s adopting the same open plugin standard that OpenAI uses for ChatGPT to ensure it’ll work alongside its Windows 11, 365 and other copilots. 

Microsoft introduced the idea of copilots nearly two years ago. Those are applications that use AI and LLMs (large language models) to help users with complex cognitive tasks like writing sales pitches, generating images and more. For example, ChatGPT on Bing is actually a copilot, and Microsoft has also launched copilots for Microsoft 365 and Microsoft Security, among others. 

Now, it’s adding features that let developers build their own using new "plugins" that allow copilots to interact with other software and services. "You may look at Bing Chat and think this is some super magical complicated thing, but Microsoft is giving developers everything they need to get started to go build a copilot of their own," said Microsoft CTO Kevin Scott. "I think over the coming years, this will become an expectation for how all software works."


In addition, Microsoft said it’s adopting the same open plugin standard used by OpenAI so that all of Microsoft’s copilots can potentially work with ChatGPT. "That means developers can now use one platform to build plugins that work across both business and consumer surfaces, including ChatGPT, Bing, Dynamics 365 Copilot, Microsoft 365 Copilot and Windows Copilot," it wrote. 
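For context, that open standard centers on a small manifest file (ai-plugin.json) that points the model at an OpenAPI description of a service. Here's a rough sketch of such a manifest, written as a Python dict for illustration; the field names follow OpenAI's mid-2023 plugin documentation, while every name, URL, and description below is a hypothetical placeholder.

```python
# Sketch of an OpenAI-style plugin manifest (normally served as
# /.well-known/ai-plugin.json). Field names follow OpenAI's mid-2023
# plugin docs; all names, URLs, and descriptions are hypothetical.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Recipe Search",
    "name_for_model": "recipe_search",
    "description_for_human": "Search a recipe catalog from chat.",
    "description_for_model": (
        "Use this to look up recipes when the user asks what to cook."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # placeholder spec URL
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

# Write the manifest where a ChatGPT-compatible host expects to find it.
with open("ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

If Microsoft's adoption works as described, that one manifest plus an OpenAPI spec would, in principle, be the whole integration surface across ChatGPT, Bing, and the various Microsoft copilots.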

As part of that platform, Bing is adding plugin support for third-party companies including Instacart, Kayak, Klarna, Redfin and Zillow. That’s on top of those previously announced by OpenAI, including OpenTable and Wolfram. Developers can also extend Microsoft 365 Copilot using ChatGPT and Bing plugins, as well as Teams message extensions and Power Platform connectors. They will also be able to build their own plugins with the Microsoft Teams Toolkit for Visual Studio Code and Visual Studio. 

Finally, Microsoft announced that Azure AI Content Safety is now in preview. It’s designed to ensure copilots avoid creating outputs that are "biased, sexist, racist, hateful, violent" or encourage self-harm, said Microsoft product manager Sarah Bird. The models detect inappropriate content across images and text, then flag them and assign severity scores so that human moderators can see anything that requires urgent action. "It’s part of the safety system that’s powering the new Bing… [and] we’re now launching it as a product that third-party customers can use," said Bird. 
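As a rough illustration of how a third-party customer might call a moderation service like this from their own copilot, here's a hedged sketch against the preview REST API; the endpoint path, api-version string, request shape, response fields, and severity threshold are all assumptions based on preview-era documentation, not details confirmed in Microsoft's announcement.

```python
# Hedged sketch: screening a copilot's draft reply with Azure AI Content
# Safety (preview). The endpoint path, api-version, response layout, and
# severity threshold are assumptions from preview-era docs, not confirmed
# details; substitute your own Azure resource endpoint and key.
import os

import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["CONTENT_SAFETY_KEY"]


def severity_scores(text: str) -> dict:
    """Return assumed per-category severity scores for a piece of model output."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-04-30-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # Assumed response layout: one result object per harm category, each
    # carrying a "severity" integer that human moderators can triage on.
    return {
        category: result.get("severity", 0)
        for category, result in body.items()
        if isinstance(result, dict)
    }


draft = "..."  # a copilot's candidate reply goes here
scores = severity_scores(draft)
if any(score >= 4 for score in scores.values()):  # threshold is illustrative
    print("Flagged for human review:", scores)
```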

This article originally appeared on Engadget at https://ift.tt/lVDw3OG

via Engadget http://www.engadget.com

May 23, 2023 at 10:15AM