SpaceX’s Starship Rocket Reached Record Heights before It Was Lost

https://www.space.com/spacex-starship-third-test-flight-launch

SpaceX lost both the booster and vehicle in a test launch of its massive Starship rocket. But the third try was the charm for Starship, whose stages separated smoothly in its most successful flight to date.

By Josh Dinner & SPACE.com

The SpaceX Starship spacecraft lifts off from Starbase in Boca Chica, Texas, on March 14, 2024. Credit: Chandan Khanna/AFP via Getty Images

SOUTH PADRE ISLAND, Texas — SpaceX’s Starship megarocket, the world’s largest and most powerful rocket, reached orbital speed for the first time Thursday in a historic third test flight from South Texas.

Hundreds of Spring Break spectators, rocket launch chasers and SpaceX fans gathered along the southern shores of South Padre Island and surrounding areas to witness the third test flight of the biggest rocket ever built. About 5 miles (8 kilometers) south of the crowds, SpaceX’s massive Starship vehicle lifted off this morning (March 14) at 9:25 a.m. EDT (1325 GMT) from the company’s manufacturing and test launch facilities near Boca Chica Beach.

"Starship reached orbital velocity," SpaceX founder Elon Musk announced on X (formerly Twitter) after liftoff. "Congratulations SpaceX team!!" The launch occurred on the 22nd anniversary of SpaceX’s founding in 2002, the company said.

Neither the Starship vehicle nor its Super Heavy booster survived to its intended splashdown, but SpaceX officials said the test flight achieved several of its key goals.

Cheers erupted from the South Padre crowd as the dim morning sky was illuminated by the ignition of Starship’s 33 first-stage Raptor engines, which quickly shrouded nearly the entire vehicle in a plume of dust and smoke. Seconds later, the 400-foot-tall (122 meter) rocket rose from the plume, accelerating as it climbed skyward.

"This flight pretty much just started, but we’re farther than we’ve ever been before," SpaceX spokesperson Dan Huot said just after liftoff in a livestream. "We’ve got a starship, not just in space, but on its coast phase into space."

Today’s launch, designated Integrated Flight Test-3 (IFT-3), was the third test mission for the fully stacked Starship. The first and second Starship launches both ended explosively last year, with the vehicles detonating before the completion of each flight’s mission objectives. However, data collected during those first flights helped SpaceX engineers get Starship ready for success down the road.

Improvements made between IFT-1 and IFT-2 last year included the implementation of a "hot staging" technique, in which the upper-stage engines begin firing before Starship’s first-stage booster, known as Super Heavy, fully separates. IFT-2’s hot-staging maneuver was a success, as was today’s.

High in the sky, Starship’s two stages separated about 2 minutes 45 seconds after liftoff, sending the 165-foot-tall (50 m) upper-stage spacecraft onward to space while Super Heavy began preparations for a boostback burn to redirect its trajectory. That post-staging burn reversed Super Heavy’s velocity and was intended to be followed minutes later by a landing burn above the Gulf of Mexico. However, it appears Super Heavy’s engines did not relight as planned, leading to the loss of the booster.

"It didn’t light all the engines that we expected and we did lose the booster," Huot said. "We’ll have to go through the data to figure out exactly what happened, obviously."

Starship is designed to be fully reusable, and SpaceX plans to land and relaunch its Super Heavy boosters, as it does with its Falcon 9 rockets. In the future, two "chopstick" arms on Starship’s launch tower will catch the Super Heavy booster as it returns for landing, but IFT-3’s Super Heavy was always expected to splash down in the Gulf.

Starship’s upper stage continued flying after separation, but didn’t attempt to go into a full orbit. Instead, the spacecraft entered a suborbital coast phase as it soared above Earth, during which SpaceX hoped to demonstrate two of the spacecraft’s flight systems toward vehicle qualification — the reignition of Starship’s Raptor engines and the transfer of cryogenic fuel between tanks. Following these demonstrations, the spacecraft was expected to splash down in the Indian Ocean about 65 minutes after launch, but SpaceX lost contact with the Ship during reentry.

"We are making the call now that we have lost Ship 28," Huot said, referring to the Starship vehicle number, after an extended period without telemetry of contact with the vehicle. "We haven’t heard from the ship up until this point and so the team has made the call that Ship has been lost. So, no splashdown today."

Rapid progress is needed for Starship, which is on the critical path for NASA’s Artemis 3 mission. Artemis 3 aims to land the first humans on the moon since the end of the Apollo era in the early 1970s. Artemis 3 is currently scheduled for 2026, giving Starship less than two years to meet NASA vehicle qualifications for landing astronauts on the lunar surface.

Copyright 2024 Space.com, a Future company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

via Scientific American https://ift.tt/tpJ1N2L

March 14, 2024 at 01:34PM

OpenAI’s Figure 01 Humanoid Robot Will AMAZE and TERRIFY You

https://www.geeksaresexy.net/2024/03/14/openais-figure-01-humanoid-robot-will-amaze-and-terrify-you/

Hold onto your seats, geeks, because the future just got a lot closer—and it’s brought a humanoid robot named Figure 01 along for the ride. In a jaw-dropping display of artificial intelligence and robotics, Figure, in collaboration with OpenAI, has unveiled a video that’ll leave you equal parts amazed and slightly unnerved.

Picture this: a human engaging in a natural conversation with Figure 01, prompting the robot to identify objects in its surroundings with eerie precision. From a red apple on a plate to dishes in a drying rack, Figure 01’s responses are not only accurate but delivered in a remarkably human-like tone.

And it gets even more mind-blowing. When asked for something to eat, Figure 01 effortlessly hands over the apple, showcasing not just intelligence but a surprising level of dexterity. And when tasked with cleaning up a mess, the robot not only complies but offers explanations for its actions.

Perhaps the most chilling moment comes when Figure 01 predicts the next move based on the arrangement of objects on the table, demonstrating a level of foresight that’s nothing short of astonishing.

But with great advancements come great existential questions—watch the video and see for yourself. It’s a glimpse into a future that’s simultaneously thrilling and unnerving.

via [Geeks Are Sexy] Technology News https://ift.tt/JipKSuk

March 14, 2024 at 09:21AM

An AI that can play Goat Simulator is a step towards more useful AI

https://www.technologyreview.com/2024/03/13/1089764/an-ai-that-can-play-goat-simulator-is-a-step-towards-more-useful-ai/

Fly, goat, fly! A new AI agent from Google DeepMind can play different games, including ones it has never seen before, such as Goat Simulator 3, a fun action game with exaggerated physics. Researchers were able to get it to follow text commands to play seven different games and move around in three different 3D research environments. It’s a step towards more generalized AI that can transfer skills across multiple environments.

Google DeepMind has had huge success developing game-playing AI systems. Its system AlphaGo, which beat top professional player Lee Sedol at the game Go in 2016, was a major milestone that showed the power of deep learning. But unlike earlier game-playing AI systems, which mastered only one game or could only follow single goals or commands, this new agent is able to play a variety of games, including Valheim and No Man’s Sky.

Training AI systems in games is a good proxy for real-world tasks. “A general game-playing agent could, in principle, learn a lot more about how to navigate our world than anything in a single environment ever could,” says Michael Bernstein, an associate professor of computer science at Stanford University, who was not part of the research. 

“One could imagine one day rather than having superhuman agents which you play against, we could have agents like SIMA playing alongside you in games with you and with your friends,” says Tim Harley, a research engineer at Google DeepMind who was part of the team that developed the agent, called SIMA (Scalable, Instructable, Multiworld Agent). 

The Google DeepMind team trained SIMA on lots of examples of humans playing video games, both individually and collaboratively, alongside keyboard and mouse input and annotations of what the players did in the game, says Frederic Besse, a research engineer at Google DeepMind.  

They then used an AI technique called imitation learning to teach the agent to play games as humans would. SIMA can follow 600 basic instructions, such as "turn left," "climb the ladder" and "open the map," each of which can be completed in about 10 seconds.
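
Google DeepMind hasn’t released SIMA’s code, but the core idea, imitation learning via behavior cloning, is easy to sketch. The toy PyTorch model below is an illustrative assumption rather than SIMA’s actual architecture: a policy network takes a screen frame plus an embedded text instruction and predicts the keyboard-and-mouse-style action a human demonstrator took, trained as ordinary supervised learning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionConditionedPolicy(nn.Module):
    """Toy policy: map (screen frame, embedded text instruction) to an action."""

    def __init__(self, n_actions: int = 32):
        super().__init__()
        self.vision = nn.Sequential(          # stand-in for a real image encoder
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)  # logits over keyboard/mouse actions

    def forward(self, frames, instruction_emb):
        feats = torch.cat([self.vision(frames), instruction_emb], dim=1)
        return self.head(feats)

policy = InstructionConditionedPolicy()
frames = torch.randn(4, 3, 96, 96)    # a batch of screen frames (dummy data)
instr = torch.randn(4, 128)           # embedded instructions, e.g. "turn left"
actions = torch.randint(0, 32, (4,))  # what the human demonstrator actually did

# Behavior cloning is just classification against the human's recorded actions.
loss = F.cross_entropy(policy(frames, instr), actions)
loss.backward()
```

The point of the sketch is that "imitation learning" here amounts to supervised learning over recorded human actions; the difficulty in a real system lies in the encoders, the scale, and the data.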

The team found that a SIMA agent that was trained on many games was better than an agent that learned how to play just one. This is because it was able to take advantage of the shared concepts between games to learn better skills and to be better at carrying out instructions, says Besse. 

“This is again a really exciting key property as we have an agent that can play games it has never seen before essentially,” says Besse. 

Seeing this sort of knowledge transfer between games is a significant milestone for AI research, says Paulo Rauber, a lecturer in artificial intelligence at Queen Mary University of London.

The basic idea of learning to execute instructions based on examples provided by humans could lead to more powerful systems in the future, especially with bigger datasets, Rauber says. SIMA’s relatively limited dataset is what is holding back its performance, he says. 

Although the number of game environments it’s been trained on is still small, SIMA is on the right track for scaling up, says Jim Fan, a senior research scientist at NVIDIA who runs its AI Agents Initiative.

But the AI system is still not close to human-level, says Harley from Google DeepMind. For example, in the game No Man’s Sky, the AI agent could do just 60% of the tasks humans could do. And when the researchers removed the ability for humans to give SIMA instructions, they found the agent performed much worse than before. 

Next, Besse says, the team is working on improving the agent’s performance. They want the AI system to work in as many environments as possible, to learn new skills, and to let people chat with the agent and get a response. The team also wants SIMA to have more generalized skills, allowing it to quickly pick up games it has never seen before, much like a human can.

Humans “can generalize very well to unseen environments and unseen situations. And we want our agents to be just the same,” says Besse. 

SIMA inches us closer to a "ChatGPT moment" for autonomous agents, says Roy Fox, an assistant professor at the University of California, Irvine.

But this is a long way away from actual autonomous AI. That would be “a whole different ball game,” he says. 

via Technology Review Feed – Tech Review Top Stories https://ift.tt/tRr0pQX

March 13, 2024 at 09:17AM

Scientists Say They Can Fix Your Internet Connection With 3D Wi-Fi

https://gizmodo.com/scientists-say-they-can-fix-your-internet-connection-wi-1851326379

You’ve probably noticed that your wi-fi slows down when more people or devices use the network. The same goes for larger systems. If you get too many people congregating in one area, the cell phone towers can’t handle the influx. With the number of connected devices growing exponentially and the coming wave of AI poised to make the problem even worse, there are major wireless traffic jams on the horizon. Now, scientists at the University of Florida have a potential solution: just make the chips 3D.

Most wireless communication relies on “planar” processors, meaning they’re essentially flat. Because they’re two-dimensional, they can only handle a limited range of frequencies at a given time. But unlocking a manufacturing process that lets you build chips in three dimensions could let hardware handle multiple frequencies at the same time. That could amount to a revolution.

A schematic of the new 3D circuit design. Illustration: Roozbeh Tabrizian

You can compare the problem to traffic moving through a city, according to Roozbeh Tabrizian, an associate professor of electrical and computer engineering at the University of Florida whose team developed the new processors.

“A city’s infrastructure can only handle a certain level of traffic, and if you keep increasing the volume of cars, you have a problem,” Tabrizian said in a press release. “We’re starting to reach the maximum amount of data we can move efficiently. The planar structure of processors is no longer practical as they limit us to a very limited span of frequencies.”

The research, published in the journal Nature Electronics, describes a new approach that harnesses semiconductor technology to house multiple processors built for different frequencies in a single chip. That breakthrough has several benefits. Above all else, it increases performance while shrinking the amount of space that chips take up. Planar chips can only get bigger if you make them wider, but the ability to make chips that increase their capacity in three dimensions instead of two means the technology is much easier to scale.

“Think of it like lights on the road and in the air,” Tabrizian said. “It becomes a mess. One chip manufactured for just one frequency doesn’t make sense anymore.”

As this technology matures, it will mean all of our devices can work better and faster. That’s a crucial development as we charge ahead with everything from smart cities to adding another 12 smart devices to your apartment.

via Gizmodo https://gizmodo.com

March 11, 2024 at 03:45PM

Solar-Powered Farming Is Quickly Depleting the World’s Groundwater Supply

https://www.wired.com/story/solar-energy-farming-depleting-worlds-groundwater-india/

This story originally appeared on Yale Environment 360 and is part of the Climate Desk collaboration.

There is a solar-powered revolution going on in the fields of India. By 2026, more than 3 million farmers will be raising irrigation water from beneath their fields using solar-powered pumps. With effectively free water available in almost unlimited quantities to grow their crops, their lives could be transformed. Until the water runs out.

The desert state of Rajasthan is the Indian pioneer and has more solar pumps than any other. Over the past decade, the government has given subsidized solar pumps to almost 100,000 farmers. Those pumps now water more than a million acres and have enabled agricultural water use to increase by more than a quarter. But as a result, water tables are falling rapidly. There is little rain to replace the water being pumped to the surface. In places, the underground rocks are now dry down to 400 feet below ground.

That is the effective extraction limit of the pumps, many of which now lie abandoned. To keep up, in what amounts to a race to the bottom of the diminishing reserves, richer farmers have been buying more powerful solar pumps, leaving the others high and dry or forcing them to buy water from their rich neighbors.

Water wipeout looms. And not just in Rajasthan.

Solar pumps are spreading rapidly among rural communities in many water-starved regions across India, Africa, and elsewhere. These devices can tap underground water all day long at no charge, without government scrutiny.

For now, they can be great news for farmers, with the potential to transform agriculture and improve food security. The pumps can supply water throughout the daylight hours, letting farmers extend their croplands into deserts, end their reliance on unpredictable rains, and sometimes replace costly-to-operate diesel or grid-powered pumps.

But this solar-powered hydrological revolution is emptying already-stressed underground water reserves—also known as groundwaters or aquifers. The very success of solar pumps is “threatening the viability of many aquifers already at risk of running dry,” Soumya Balasubramanya, an economist at the World Bank with extensive experience of water policy, warned in January.

An innovation that initially looked capable of reducing fossil-fuel consumption while also helping farmers prosper is rapidly turning into an environmental time bomb.

Solar panels power pumping at a farm near Kafr el-Dawwar, Egypt. Photograph: Khaled Desouki/Getty Images

via Wired Top Stories https://www.wired.com

March 9, 2024 at 07:12AM

Fooocus is the easiest way to create AI art on your PC

https://www.pcworld.com/article/2253285/fooocus-is-the-easiest-way-to-run-ai-art-on-your-pc.html

What’s the simplest way to create AI art on your PC? An application called Fooocus. Although Stable Diffusion is often seen as the best way to create AI art on your PC, Fooocus offers a simple setup experience, with rewarding depth for those who wish to dive deeper.

Stable Diffusion debuted two years ago as the way to create AI art on your PC. While I’ve used some of the techniques that David Wolski outlined in his tutorial on using Stable Diffusion, it just feels so complicated to set up. Fooocus (yes, three "o"s) offers essentially a one-click setup process in the same vein as something like winget: You tell it what to do, and then Fooocus goes out and does it. It’s an absolutely free app that runs on Windows, with no hidden costs. You will need a pretty powerful PC to run it, though.

Just a reminder: There are many ways of running AI art, and yes, many of the established ways are quite good. Both Google Bard and Microsoft Copilot (previously Bing Image Creator) will generate AI art for you while running in the cloud, and both offer detailed creations that you can download and use, too.


Running AI art locally, however, can be almost as fast with the right hardware, and the images are arguably just as good or better. You’ll also have more freedom to choose the subject matter, and you can resize the images, edit them, or use other images as source art. And, of course, it’s all free. Fooocus also takes its cues from Midjourney, long recognized as a pioneer in premium AI art: Instead of literally taking your instructions and turning them into AI art, it makes some behind-the-scenes guesses about what you’ll like and optimizes its own requests accordingly.

If you’re a gamer, or just have a powerful PC, it’s worth giving Fooocus a try. There are no specific hardware requirements, but we’d make sure you have a few dozen gigabytes of spare storage space on your SSD, and a discrete GPU (Nvidia preferred, but not necessary) is almost a must.

How to download and set up Fooocus

Fooocus is open source, and its code can be found on GitHub, where it has been probed and prodded. This Fooocus download link will actually bring you to developer Illyasviel’s Fooocus GitHub page; the real download link can be found by scrolling down the page. It leads to a 1.8GB .7z file. (If you don’t want to run Fooocus from your Downloads folder, move it somewhere else on your PC.)

Our download link won’t take you to the multi-gigabyte download itself. The page’s download link looks like this, midway down the page.

Mark Hachman / IDG

Normally, the .7z file format would imply that you have to unzip it with 7-Zip, which your PC will do, but only after you click the "Run.bat" batch file. This extracts about 5.5GB of data; it looks like it will take a long time to decompress, but it took my system about 10 minutes.

Fooocus is a little weird in that you can click the “run” batch file, and it will set everything up, with an emphasis on generic models. But you can also come back to it in the future and click the “run_anime” batch file and it will set up an alternate configuration that’s more optimized for anime. You can do the same for “run_realistic,” too.

The Fooocus file directory. Mark Hachman / IDG

When you do, however, chances are that you’ll see a Windows SmartScreen warning. Fooocus isn’t a well-known application that Microsoft Windows has seen much of, so you’ll need to manually approve it before it can run.

The Windows SmartScreen warning. Mark Hachman / IDG

If you do, Fooocus downloads all of the software infrastructure it needs to run, which will require another few gigabytes and a few more minutes. It downloads these files from Hugging Face, the internet’s repository for AI models and applications.

You’ll see this Command Line screen while it does so.

The command-line window during Fooocus’s first-run downloads. Mark Hachman / IDG

Very soon, however, you’ll see the Fooocus interface, which launches inside your web browser. This isn’t unusual for AI applications. There’s a lot of white space, which we’ll briefly explain. But basically, you’re done.

How to use Fooocus: Styles and prompts

Once you do see the Fooocus web interface below, you’re in business. There’s a prompt box at the bottom of the screen, where you can decide what the scene should have in it. Clicking “Generate” kicks off the generation process and creates your art. Absolutely feel free to click the tiny “Advanced” checkbox at the very bottom of the screen! This opens up a wealth of stylistic options, which many of my examples have enabled.

The basic Fooocus web interface. Mark Hachman / IDG

A prompt like “a cat” will work, of course, though that’s nothing you haven’t seen before. “A cat wearing a pirate hat” adds some variety. “A cat wearing a pirate hat at a burger restaurant” is even more creative.

If you’d like, you can specify the style you’d like in the prompt, such as “sinister” or “epic.” This is open to interpretation, of course. Fooocus prefers a rather photographic style by default.

An astronaut in a produce aisle, as imagined by Fooocus. Mark Hachman / IDG

If you do have a GPU in your system, Fooocus will automatically load itself onto it if it can, speeding up the process considerably.

When you click “Generate,” Fooocus will step through multiple iterations of the image (30, by default), refining and enhancing with each step. I’ve run Fooocus on a pair of systems (a 13th-gen Core desktop, with a GeForce RTX 3090; and a 14th-gen Intel Core HX laptop, with an RTX 4090) and the images took about 10 seconds or less to generate on the default “Speed” setting. You can choose either “Quality” or “Extreme Speed” to adjust the iterative steps, but it’s really not necessary.

You can get crazy with prompt generation, but there are limits: “A cat walking on the rings of Saturn” didn’t give me a recognizable result. But it’s all about experimentation. And yes, Fooocus is trained on celebrities and public figures, and it won’t offer too many limits on NSFW material. If you want to imagine Donald Trump and Joe Biden kissing each other, well, yes, you can. And aside from some AI weirdness where it doesn’t really understand lips, it looks pretty realistic.

Feel free to get weird. Is there a "goddess of cheese"? Now there is.

Mark Hachman / IDG

By default, Fooocus creates a pair of images based upon your prompt, and will also store them in folders, organized by day, inside the Fooocus directory.

Again, one of the strengths of Fooocus is that it does some behind-the-scenes work to make your generated images look great without requiring you to enter specifics such as the depth of field, artistic influences, and so on. But the "Advanced" checkbox does allow you to adjust the proportions of the generated image as part of the "Setting" tab. You can also issue "negative" prompts: Perhaps you want Fooocus to draw you a plate of spaghetti. Adding "meatballs" to the "negative" prompt box will ensure that detail isn’t added.
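
Fooocus wraps all of this in its web UI, but it’s built on top of Stable Diffusion XL, so the negative-prompt idea can be shown directly. The sketch below uses the Hugging Face diffusers library rather than Fooocus itself, and assumes you have torch, diffusers, and a CUDA-capable GPU; it’s a minimal illustration of the concept, not Fooocus’s internal code.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, the same model family Fooocus builds on.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers the sampler *away* from the listed concepts.
image = pipe(
    prompt="a plate of spaghetti, food photography",
    negative_prompt="meatballs",
    num_inference_steps=30,  # 30 iterative steps, like Fooocus's default
).images[0]
image.save("spaghetti_no_meatballs.png")
```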

Here, both the Setting and Style tabs are highlighted, as well as some of the options for various styles.

The “Style” tab is a quick way to tailor the output in ways you might not be able to easily describe. Want an image that looks like it might grace the cover of an old Saturday Evening Post? Click the “Ads Fashion Editorial” checkbox. Each style has a small illustration of a cat in the selected style to help you pick, and there are tons of them to choose from. (Selecting “Terragen,” for example, gives you a nice landscape-y backdrop.) But you’re free to specify your own style in the prompt box: If you want a Viking princess (or Queen Elizabeth!) in the style of Gustav Klimt, give it a whirl.

While I’m not going to delve into the finer details of how to fine-tune prompts to tweak AI art, I do want to point out one advanced feature that might be worth playing with: Low-Rank Adaptations, or LoRAs. These are absolutely optional, and if you don’t want to deal with them, you can stop here. Enjoy!

Advanced work: Downloading additional LoRAs

You may have already played around with the filter options within Fooocus. LoRAs are even more specialized tools for specific types of art. They’re absolutely not necessary, but if you want to focus on a certain effect, adding a LoRA may allow the model a greater range of options. One featured LoRA I saw recently specifically focuses on lightning and lightning effects.

Put another way, a LoRA is just a plugin, like a browser extension for Fooocus. The site I use to find them is called Civitai.com, and there are a ton of LoRAs available for download. (You’ll need to sign up for a free account, and choose a number of content preferences. Some of the LoRA options cater to the NSFW, but you can filter those out.)

Civitai loras configuration dry paint
This is a neat artistic style that you can add to your model.

Mark Hachman / IDG

There are a couple of tricks. For now, you’ll need to filter the models by "SDXL," the base model that Fooocus runs on top of. You’ll also need to download or copy the LoRA into the appropriate directory, such as Fooocus\models\loras.

Once you download the additional LoRAs, you can turn them “on” within the Fooocus Advanced Menu (under the Model tab), and use them to influence your AI art output. Again, it requires some experimentation to see what works and what doesn’t.
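
For a sense of what "turning a LoRA on" does mechanically, here’s the same idea expressed through the diffusers library: a LoRA is a small file of add-on weights that gets patched into the base SDXL model. The file name below is a hypothetical placeholder, not a real download; this sketches the concept rather than Fooocus’s internals.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# A LoRA is a small set of add-on weights patched into the base model.
# Hypothetical path: point it at whatever SDXL LoRA you downloaded from Civitai.
pipe.load_lora_weights("Fooocus/models/loras/lightning_effects_sdxl.safetensors")

image = pipe("a castle on a cliff during a thunderstorm").images[0]
image.save("castle_lightning.png")
```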

Remember, AI art generation is deterministic only for a given seed, and by default the model starts from a new random seed each time. That means there’s an element of randomness in AI art; you may need to try a few times to get a good result. If you do, you can dive deeper into the Fooocus documentation to learn how to upscale the art for printing or a desktop background, edit it, or more.
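
To make the seed behavior concrete, here’s a minimal diffusers-based sketch with the seed pinned. Run it twice and you’ll get the same image both times; drop the generator argument and each run starts from a fresh random seed, which is effectively what Fooocus does by default. As above, this illustrates the underlying concept, not Fooocus’s own code.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Pinning the seed makes generation reproducible: same seed, same image.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a cat wearing a pirate hat", generator=generator).images[0]
image.save("pirate_cat_seed42.png")
```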

And that’s it. Just remember: Try things out, and have fun!

via PCWorld https://www.pcworld.com

March 11, 2024 at 08:15AM

Valve’s strange history of talent acquisitions | This week’s gaming news

https://www.engadget.com/valves-strange-history-of-talent-acquisitions–this-weeks-gaming-news-153020318.html?src=rss

For Engadget’s 20th anniversary, we put together a package of stories about the most pivotal pieces of technology from the past two decades, and mine was on Steam. It’s difficult to overstate how influential Steam is to PC gaming, or how rich the storefront has made Valve. As a private company with infinite piles of Steam cash, Valve has the freedom to ignore market pressure from consumers, creators and competitors. It famously has a flat hierarchy with no strict management structure, and developers are encouraged to follow their hearts.

This has all resulted in an incredibly rich studio that doesn’t produce much. It may be a tired joke that Valve can’t count to three in its games, but we’re not talking about Half-Life today. We’re talking about Valve’s history of buying exciting franchises and talented developers, playing with them for a while, and then forgetting they exist. Real fuckboi behavior — but it’s just how Valve does business.

Let’s take a look at Valve’s history of talent acquisition. One of its oldest franchises, Team Fortress, started as a Quake mod built by a small team in Australia, and Valve bought its developers and the rights to the game in 1998. Team Fortress 2 came out in 2007 and it received a few good years of updates and support. Today, the game has a devoted player base, but it’s riddled with bots and it’s unclear whether anyone at Valve is consistently working on TF2.

Portal began life as a student project called Narbacular Drop, and Valve hired its developers after seeing their demo in 2005. Portal officially came out in 2007, Portal 2 landed in 2011, and both were instant classics. There hasn’t been a whiff of another Portal game since, even though one of the series writers, Erik Wolpaw, really, really wants Valve to make Portal 3.

Of all the Valve franchises that have been left to wither and die, I miss Left 4 Dead the most. Turtle Rock started building Left 4 Dead in 2005, and by the time it came out in 2008, Valve had purchased the studio and its IP outright. Citing slow progress and poor communication, Turtle Rock left Valve before helping the company make Left 4 Dead 2 in 2009. Turtle Rock went on to release Evolve in 2015 and Back 4 Blood in 2021, and is now owned by Tencent. Meanwhile, I’m here, dreaming of that third Left 4 Dead game.

In 2010, Valve secured the rights to the Warcraft III mod Defense of the Ancients, and hired its lead developer. Dota 2 came out in 2013 and became an incredibly successful esports title. Now, eleven years later, Dota 2 players are complaining about a lack of support and communication from Valve, especially in comparison with games like League of Legends.

Counter-Strike has received the most attention from Valve in recent memory, with the rollout of Counter-Strike 2 late last year. The original Counter-Strike was a Half-Life mod, and Valve acquired it and its developers in 2000. Counter-Strike 2 is the fifth installment in the series, released 11 years after its predecessor, Counter-Strike: Global Offensive. After this recent attention, it’s about time for Valve to start ignoring the Counter-Strike community again.

Valve has quietly continued to make acquisitions. In 2018, Valve hired all 12 developers at Firewatch studio Campo Santo, who were at the time working on a very-rad-looking new game, In the Valley of Gods. This could turn out to be another spectacular, genre-defining franchise for Valve’s resume of acquired IP, but there have been no updates from that team in nearly six years. In April 2018, Campo Santo said they were still building In the Valley of Gods at Valve, and promised regular blog posts and quarterly reviews. And then, nothing.

Matt Wood worked at Valve for 17 years, where he helped build Left 4 Dead, Left 4 Dead 2, Portal 2, CS:GO and both episodes of Half-Life 2. He left in 2019 and is now preparing to release his first independent game, Little Kitty, Big City. Wood told me in 2023 that Valve was “sitting on their laurels a little bit, and it’s like they weren’t really challenging themselves, taking risks or doing anything. Steam’s making a lot of money so they don’t really have to.”

Of course, Little Kitty, Big City is coming to Steam.

Steam’s unwavering success has helped turn Valve into a senior resort community for computer science nerds, where game developers go to live out their final years surrounded by fantastic amenities, tinkering and unsupervised. It’s a lovely scenario. At least developers there aren’t getting laid off — and I mean that sincerely. Steam is a great service, and Valve seems at least temporarily committed to the Steam Deck hardware, which is very cool. Still, I miss the games that Valve devoured. I have to wonder if the developers there do, too.

Valve’s treatment of legendary franchises and developers raises questions about its commitment to… anything, including Steam. What happens if Valve decides to pivot, or sell, or Gabe Newell retires and blows everything up? What would happen if Steam shut down? Because Steam is a service with native DRM, all of our games would instantly disappear. Just like all those game devs.

This week’s news

Playdate update

Playdate is one of my favorite gaming gadgets of the past decade, not only because it has an incredibly cute crank, but also because its low-res screen belies a buffet of strange and beautiful experiences pushing the boundaries of traditional play. Panic held a showcase for new Playdate games last week and the headliner was Lucas Pope’s Mars After Midnight, which is coming out on March 12. Pope is the developer of Papers, Please and Return of the Obra Dinn, two incredible games, and Mars After Midnight is set in the doorway of a crowded alien colony. Pope’s games were made for Playdate, this time literally.

Yuzu and Citra are gone

A week after Nintendo threatened to sue the creators of Yuzu into oblivion, the popular Switch emulator has been pulled off the market as part of a $2.4 million settlement. To make matters worse for the emulation community, the lead developer of Yuzu announced that they are also killing the 3DS emulator Citra. Both emulators were open-source, so it’s likely we’ll see Citra at least maintained by the broader community. It’s not clear whether anyone is willing to take on a fork of Yuzu and risk a lawsuit.

Bonus Content

  • Ghost of Tsushima will hit PC on May 16. It comes with all of its DLCs, and Sony says it’ll run on anything from high-end PCs to portable PC gaming devices.

  • Capcom’s Kunitsu-Gami: Path of the Goddess is apparently coming out this year on PC, PlayStation and Xbox. It debuted at Summer Game Fest and looks pretty unique.

  • Hades hits iOS as a Netflix mobile exclusive on March 19. There are currently no plans for an Android version, which sucks for me.

Now Playing

I found This Bed We Made while doing research for the GLAAD Gaming report I covered a few weeks ago, and I’m incredibly pleased about it. This Bed We Made is an exploration and narrative-driven game set in a 1950s hotel, and it’s absolutely oozing drama and mystery. The writing is fantastic, the characters are complex, and there’s a thrilling storyline running through the whole thing. It’s available on PC and consoles now.

This article originally appeared on Engadget at https://ift.tt/4n3DAqH

via Engadget http://www.engadget.com

March 8, 2024 at 09:36AM