OpenAI, maker of ChatGPT and one of the most prominent artificial intelligence companies in the world, said today that it has entered a partnership with Anduril, a defense startup that makes missiles, drones, and software for the United States military. It marks the latest in a series of similar announcements made recently by major tech companies in Silicon Valley, which has warmed to forming closer ties with the defense industry.
“OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure the technology upholds democratic values,” Sam Altman, OpenAI’s CEO, said in a statement Wednesday.
OpenAI’s models will be used to improve air defense systems, Brian Schimpf, co-founder and CEO of Anduril, said in the statement. “Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations,” he said.
OpenAI’s technology will be used to “assess drone threats more quickly and accurately, giving operators the information they need to make better decisions while staying out of harm’s way,” says a former OpenAI employee who left the company earlier this year and spoke on the condition of anonymity to protect their professional relationships.
OpenAI altered its policy on the use of its AI for military applications earlier this year. A source who worked at the company at the time says some staff were unhappy with the change, but there were no open protests. The US military already uses some OpenAI technology, according to reporting by The Intercept.
Anduril is developing an advanced air defense system featuring a swarm of small, autonomous aircraft that work together on missions. These aircraft are controlled through an interface powered by a large language model, which interprets natural language commands and translates them into instructions that both human pilots and the drones can understand and execute. Until now, Anduril has been using open-source language models for testing purposes.
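Neither company has described how that interface works under the hood, but the general pattern of wrapping an LLM in a command-translation layer is simple to sketch: ask the model for output in a constrained, machine-readable format, then validate it before anything downstream acts on it. The snippet below is a purely illustrative sketch of that pattern; the prompt, the schema, and the llm callable are all hypothetical and say nothing about Anduril's or OpenAI's actual software.

```python
import json

# Hypothetical allow-list of actions a downstream planner might accept.
ALLOWED_ACTIONS = {"patrol", "track", "return_to_base"}

PROMPT_TEMPLATE = """Convert the operator's request into JSON with exactly these fields:
"action" (one of: patrol, track, return_to_base), "asset_count" (integer), "area" (string).
Output JSON only.

Request: {request}
"""

def parse_command(request: str, llm) -> dict:
    """Ask a language model to translate free-form text into a structured task,
    then validate the result before anything downstream ever sees it."""
    raw = llm(PROMPT_TEMPLATE.format(request=request))  # llm() stands in for any chat-model call
    task = json.loads(raw)
    if task.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {task.get('action')}")
    if not isinstance(task.get("asset_count"), int) or task["asset_count"] < 1:
        raise ValueError("asset_count must be a positive integer")
    return task

# Canned response standing in for a real model, just to show the flow:
fake_llm = lambda prompt: '{"action": "patrol", "asset_count": 3, "area": "sector 7"}'
print(parse_command("Send three drones to patrol sector 7", fake_llm))
```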
Anduril is not currently known to be using advanced AI to control its autonomous systems or to allow them to make their own decisions. Such a move would be more risky, particularly given the unpredictability of today’s models.
A few years ago, many AI researchers in Silicon Valley were firmly opposed to working with the military. In 2018, thousands of Google employees staged protests over the company supplying AI to the US Department of Defense through what was then known within the Pentagon as Project Maven. Google later backed out of the project.
Charging and infrastructure are two major obstacles to mass-scale EV adoption, but Mercedes-Benz may have a solution in the form of solar paint. In an effort to increase efficiency, the German automaker has created a new solar coating that could cover future electric models. Mercedes-Benz’s solar coating could revolutionize EV charging, making it more convenient to own an EV or PHEV.
Mercedes-Benz’s solar paint is just 5 micrometers thick
Other companies have explored charging EVs via solar power, but for the most part their solution came in the form of solar modules. Solar modules on vehicles presented a myriad of problems, though, namely their lack of flexibility and their fragility in the event of an accident.
Mercedes-Benz’s solar coating can cover the entire body of the car, as opposed to just the roof or sides. The coating weighs just 50 grams per square meter and measures five micrometers thick. It can also be applied to any surface, including panel creases and curved fenders.
According to Mercedes engineers, the solar paint currently operates at 20% efficiency, the same as the solar cells currently used on vehicles. In addition to being more flexible than solar panels, the solar paint is always active and can charge an EV battery whenever there’s sunlight. In sunny areas, it could generate enough energy to add around 34 miles of range to an EV per day.
Mercedes is still developing the technology, so some details, such as how the coating would be applied to vehicles, remain unclear. Notably, however, the German automaker has said the solar coating won’t be painted over a vehicle’s finish.
Solar paint could produce up to 7,456 miles per year
Mercedes’ new solar paint technology could generate enough power to add thousands of miles of driving range per year, even while operating at 20% efficiency. Because the coating can cover the entire vehicle, a coated area of 118.4 square feet, roughly the surface area of a midsize SUV, could produce more than 7,450 miles of driving range per year.
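As a rough sanity check of that yearly figure, the stated area and 20% efficiency land in the right ballpark once you assume plausible values for daily sunlight and EV efficiency. The irradiance and miles-per-kWh numbers in this back-of-envelope sketch are assumptions, not figures from Mercedes:

```python
# Back-of-envelope check of the ~7,450 miles/year claim.
# Area and cell efficiency come from the announcement; the irradiance and
# EV consumption values are assumed for illustration only.
area_m2 = 118.4 * 0.0929          # 118.4 sq ft converted to square meters (~11 m^2)
cell_efficiency = 0.20            # stated efficiency of the solar coating
insolation_kwh_m2_day = 3.0       # assumed average solar energy reaching the car per day
ev_miles_per_kwh = 3.1            # assumed EV efficiency

kwh_per_day = area_m2 * insolation_kwh_m2_day * cell_efficiency
miles_per_year = kwh_per_day * ev_miles_per_kwh * 365
print(f"~{kwh_per_day:.1f} kWh/day -> ~{miles_per_year:,.0f} miles/year")
# Prints ~6.6 kWh/day and ~7,467 miles/year, right around Mercedes' claim.
```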
According to Mercedes, drivers in Stuttgart, Germany, drive around 32 miles a day, and a vehicle equipped with the automaker’s solar paint could cover around 60% of that distance with solar energy. In sunnier areas, like Los Angeles, the abundance of sunlight would allow for even higher energy production.
Solar paint comes with more benefits than just increased durability and flexibility, making it an ideal solution for extending an EV’s range. It’s non-toxic, easy to recycle, and cheaper to create than standard solar modules. It also doesn’t contain any rare earth metals and only uses raw materials that are easily accessible.
Final thoughts
A lack of charging infrastructure is hampering EV adoption worldwide, and it’s especially apparent in the United States. Alternative methods of harnessing energy could help alleviate range anxiety, increase an EV’s driving distance, and reduce charging costs across the board. On top of that, given the low production cost of Mercedes’ solar coating and its lack of rare earth metals, it could become a leading solution to charging concerns.
Other companies, like Sono Motors and Aptera, have either tried or will be trying solar module integration on their vehicles. Meanwhile, Mercedes’ solar coating is already producing results in real-world application tests. While the German automaker says the solar paint isn’t ready for production on a mass scale, research and development are progressing at a steady rate. If all goes well, we’ll hopefully see solar coating as a supplementary EV charging solution within the next decade.
Did you know that Microsoft Edge, the built-in web browser on Windows PCs, can read text out loud to you? It works for text on web pages, PDF and Word documents, and more — an incredibly useful feature, particularly for users who have impaired vision.
Here’s how it works: In Edge, navigate to the web page with the content you want to listen to. Highlight a selection of text using your mouse cursor, then right-click and select Read selection aloud in the menu.
Once you do that, a toolbar will appear at the top of the browser. Here, you can click buttons to Pause/Play reading and to skip backward/forward through the selection line by line. The toolbar also has a Voice options button, which you can use to change the reading speed and the text-to-speech voice.
Alternatively, you can have Edge read out the entire web page — not just a selection of text — by clicking the A icon (in the address bar) that has sound waves coming out of it.
This read aloud feature also works for other content types that can be opened in the browser. For example, you can use the Ctrl + O keyboard shortcut to launch the file open dialog, then select the file (perhaps a PDF document) you want to open in Edge. Or you can drag and drop the file directly into the Edge window.
For Word documents in particular, you can open the document in Word, then select File > Save as… to save the document as type “Web page.” Then, you can open the resulting web page in Edge.
Following YouTube Music and Apple Music, Spotify is now out with its yearly recap feature. Called “Spotify Wrapped,” this is the time for Spotify users to find their top songs of the past year, see what else was trending, and relive the past by listening to the “Your Top Songs 2024” playlist on repeat for the next month or so as we head into 2025.
For 2024’s Wrapped, Spotify has brought back familiar features, like the “Your 2024 Wrapped” story that takes you through a short history lesson of your past year. This story-like experience will reveal your top song, your list of top artists, how many minutes you spent listening to music, how your music taste changed throughout the past 12 months, and where you rank among your top artist’s listeners in terms of listening time. Some of your favorite artists may even have a little video message for you to take in.
Spotify has also put together several 2024 recap-style playlists with the top songs from the globe and your country, as well as top playlists for specific genres. The one I’m most looking forward to diving into is the “Your Music Evolution 2024,” which I think is a new item for this year’s Wrapped.
The headline feature, at least according to Spotify and Google, is the “Your Wrapped AI Podcast.” This feature uses Google’s NotebookLM to create a podcast experience about your Spotify Wrapped. It’ll probably be super silly, but this is one of Google’s coolest AI features. NotebookLM can take almost any subject you might be interested in and then turn it into what ends up sounding like a real podcast with two hosts going back and forth. It’s actually a wild experience outside of Spotify that I recommend giving a try.
My Wrapped AI podcast is a little over 3 minutes and mostly just walks through my Wrapped slideshow. As someone who doesn’t listen to any podcasts, this is not for me. But hey, so many of you love podcasts and I’m sure this will find a place in your commutes.
To get Spotify Wrapped 2024, make sure you have the latest Spotify update. If you do, you should see it all by opening the app today.
Amazon is building one of the world’s most powerful artificial intelligence supercomputers in collaboration with Anthropic, an OpenAI rival that is working to push the frontier of what is possible with artificial intelligence. When completed, it will be five times larger than the cluster used to build Anthropic’s current most powerful model. Amazon says it expects the supercomputer, which will feature hundreds of thousands of its latest AI training chips, Trainium 2, to be the largest reported AI machine in the world when finished.
Matt Garman, the CEO of Amazon Web Services, revealed the supercomputer plans, dubbed Project Rainier, at the company’s re:Invent conference in Las Vegas today, along with a host of other announcements cementing Amazon’s rising dark-horse status in the world of generative AI.
Garman also announced that Trainium 2 will be made generally available in so-called Trn2 UltraServer clusters specialized for training frontier AI. Many companies already use Amazon’s cloud to build and train custom AI models, often in tandem with GPUs from Nvidia. But Garman said that the new AWS clusters are 30 to 40 percent cheaper than those that feature Nvidia’s GPUs.
Amazon is the world’s biggest cloud computing provider, but until recently, it might have been considered a laggard in generative AI compared to rivals like Microsoft and Google. This year, however, the company has poured $8 billion into Anthropic, and it has quietly pushed out a range of tools through an AWS platform called Bedrock to help companies harness and wrangle generative AI.
At re:Invent, Amazon also showcased its next-generation training chip, Trainium 3, which it says will offer four times the performance of its current chip. It will be available to customers in late 2025.
“The numbers are pretty astounding” for the next-generation chip, says Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy. Moorhead says that Trainium 3 appears to have received a significant performance boost from an improvement in the so-called interconnect between chips. Interconnects are critical in developing very large AI models, as they enable the rapid transfer of data between chips, a factor AWS seems to have optimized for in its latest designs.
Nvidia may remain the dominant player in AI training for a while, Moorhead says, but it will face increasing competition in the next few years. Amazon’s innovation “shows that Nvidia is not the only game in town for training,” he says.
Intel heard your screams of anguish, PC gamers. Budget graphics cards that are actually worth your money have all but disappeared this pandemic/crypto/AI-crazed decade, with modern “budget” GPUs going for $300 or more, while simultaneously being nerfed by substandard memory configurations that limit your gaming to 1080p resolution unless you make some serious visual sacrifices.
No more.
Today, Intel announced the $249 Arc B580 graphics card (launching December 13) and $219 Arc B570 (January 16), built using the company’s next-gen “Battlemage” GPU architecture. The Arc B580 not only comes with enough firepower to best Nvidia’s GeForce RTX 4060 in raw frame rates, it has a 12GB memory system target-built for 1440p gaming – something the 8GB RTX 4060 sorely lacks despite costing more.
As if that wasn’t an appealing enough combination (did I mention this thing is $249?!), Intel is upping the ante with XeSS 2, a newer version of its AI super-resolution technology that adds Nvidia DLSS 3-like frame generation for even more performance, as well as Xe Low Latency (XeLL), a feature that can greatly reduce latency in supported games.
Add it all up and Intel’s Arc B580 seems poised to really, truly shake things up for PC gamers on a budget – something we haven’t seen in years and years. If you’re still rocking an OG GTX 1060, take a serious look at this upgrade. Let’s dig in.
Intel’s debut “Alchemist” Arc GPUs launched in late 2022, rife with all the bugs and issues you’d expect from the first generation of a product as complex as modern graphics cards. Intel diligently ironed those out over the subsequent months, delivering driver updates that supercharged performance and squashed bugs at a torrid pace.
In a briefing with press, Intel Fellow Tom Petersen said a major focus during Battlemage’s development was improving software efficiency, to better unleash the full power of Intel’s hardware. Remember, though, that improved software still ran on first-gen hardware. Battlemage improves efficiency at the hardware level too, using tricks like transforming the vector engines from two slices into a single structure, supporting native SIMD16 instructions, and beefing up the capabilities of the Xe core’s ray tracing and XMX AI instructions to, yes, make everything run smoother and better than before.
I’ve included a bunch of technical slides above, so nerds can pick through the details. But here’s the upshot: The Arc B580 delivers 70 percent more performance per Xe core than last gen’s Arc A750, and 50 percent more performance per watt, per Intel.
Cue Keanu Reeves: Whoa. That’s absolutely bonkers. You almost never see performance leaps that substantial from a single-generation advance anymore!
That’s at an architectural level; the slide above shows the specific hardware configurations found in the Intel B580 and B570. A couple of things stand out here, first and foremost the memory configuration.
Nvidia and AMD’s current $300 gaming options come with just 8GB of VRAM, tied to a paltry 128-bit bus that all but forces you to play at 1080p resolution. The Arc B580 comes with an ample 12GB of fast GDDR6 memory over a wider 192-bit bus – so yes, this GPU is truly built for 1440p gaming, unlike its rivals. The Arc B570 cuts things down a bit to hit its $219 price tag, but the same broad strokes apply.
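To see why the wider bus matters, note that peak memory bandwidth is roughly bus width multiplied by the per-pin data rate. The GDDR6 speeds in this quick sketch are assumptions for illustration rather than figures quoted here by Intel or Nvidia, but they show how much headroom a 192-bit bus buys:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits x per-pin Gbit/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Per-pin GDDR6 rates below are assumptions for illustration, not official specs.
print(peak_bandwidth_gb_s(192, 19))  # 456.0 GB/s for a 12GB card on a 192-bit bus
print(peak_bandwidth_gb_s(128, 17))  # 272.0 GB/s for an 8GB card on a 128-bit bus
```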
Also worth noting: Intel’s new GPUs feature a bog standard 8-pin power connector (though third-party models may add a second one to support Battlemage’s ample overclocking chops). No fumbling with fugly 12VHPWR connectors here.
Intel’s homebrew Limited Edition reference GPUs will return for the B580 in a newer, smaller design with blow-through cooling. You’ll also be able to pick up third-party custom cards from the partners shown above, and the B570’s launch in January will be exclusive to custom boards, with no Limited Edition reference planned.
As part of the launch, Intel is also introducing a redesigned gaming app with advanced overclocking capabilities, including the ability to tweak voltage and frequency offsets.
Intel Arc B580 performance details
Now let’s dig into actual performance, using Intel’s supplied numbers.
Intel says the $249 Arc B580 plays games an average of 25 percent faster than last generation’s higher-tier $279 Arc A770 across a test suite of 40 games. Compared to the competition, Intel says the Arc B580 runs an average of 10 percent faster than Nvidia’s RTX 4060 – though crucially, those numbers were taken at 1440p resolution rather than the 1080p resolution the overly nerfed RTX 4060 works best at.
Intel also made a point of stressing how the RTX 4060’s limited 8GB of RAM over a 128-bit bus can directly impact performance today. The slide above shows Forza Motorsport running at 1440p resolution. At standard High settings, the RTX 4060 actually holds a performance advantage. As you scale up the stressors, flipping on ray tracing and moving to Ultra settings, the advantage instantly flips, with the B580 taking a clear, substantial lead while the RTX 4060 hits the limits of what’s possible with its memory setup.
Speaking of, Intel says most of the key technologies underlying ray tracing have been improved by 1.5x to 2x in Battlemage compared to the first-gen Arc “Alchemist” offerings. Considering that Intel’s debut Arc cards already went toe-to-toe with Nvidia’s vaunted RTX 40-series ray tracing, there could be a fierce battle brewing in realistic real-time lighting next year – which isn’t something I thought I’d say about the $250 segment before even flipping the calendar to 2025. If you’re still rocking a GTX 1060 or 1650 from back in the day, the Arc B580 would be a massive upgrade in both speed and advanced features like ray tracing.
Raw hardware firepower alone is only part of the graphics equation these days, however. Nvidia’s RTX technology forced the power of AI upscaling and frame generation into consideration this decade – and Intel’s new software features are designed to supercharge frame rates and lower latency even further.
Meet XeSS 2 and Xe Low Latency
Intel’s XeSS technology debuted alongside the first-gen Arc cards, serving as an AI upscaling rival to Nvidia’s core DLSS technology. (These render frames at a lower resolution internally, then use AI to supersample the final result, leading to higher performance with little to no loss in visual quality.) But then Nvidia launched DLSS 3, a technology that injects AI-generated “interpolated” frames between every GPU-rendered frame, utterly turbocharging performance in many games and scenarios.
XeSS 2 is Intel’s response to that. While DLSS 3 requires the use of a hardware Optical Flow Accelerator only present in RTX 40-series GPUs, Intel’s XeSS 2 uses AI and Arc’s XMX engines to do the work instead – meaning it’ll also work on previous-gen Arc cards, and the Xe-based integrated graphics found in Intel’s Lunar Lake laptops.
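The core idea behind frame generation is easy to sketch, even though XeSS 2 and DLSS 3 rely on motion vectors and trained networks rather than anything this crude. The toy snippet below simply blends two rendered frames to make an in-between frame, purely to illustrate why presenting a generated frame between each pair of rendered frames roughly doubles the output frame rate:

```python
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Naive stand-in for frame generation: blend two rendered frames.
    Real frame generation uses motion vectors plus a neural network, not a plain blend."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(np.uint8)

# Two fake 1080p RGB frames stand in for consecutive GPU-rendered frames.
frame_a = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
frame_b = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
generated = interpolate_frame(frame_a.astype(np.float32), frame_b.astype(np.float32))

# Present: rendered A, generated in-between, rendered B -> three frames shown for two rendered.
output_sequence = [frame_a, generated, frame_b]
print(f"{len(output_sequence)} frames presented for every 2 frames rendered")
```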
And as we see with DLSS 3, the performance improvements can be outstanding. Intel says that in its in-house F1 24 tests with the B580, activating XeSS 2 with supersampling and frame generation can improve performance by a whopping 2.8x to 3.9x, depending on the Quality setting used. While the game runs at 48fps at the chosen settings without XeSS 2 enabled, turning on XeSS 2’s Ultra quality lifts that all the way up to 186fps – a literal game changer.
Support for XeSS 2 is coming to the games shown above, with more to arrive in the coming months. First-gen XeSS has reached 150 games to date, so the hope is that XeSS 2 (which uses different APIs for developers to hook into) ramps up quickly as well.
Injecting AI frames between traditional frames has a side effect, though: it increases latency, the delay between your mouse click and the action occurring onscreen, because the interpolated AI frames can’t respond to your commands. Enter Intel’s Xe Low Latency feature.
XeLL essentially cuts out a bunch of the ‘middleman’ rendering and logic queues that happen behind the scenes for each frame, letting your GPU get a finished frame on screen much, much faster than it typically would. (Nvidia’s awesome Reflex technology works similarly.) Activating it drastically lowers latency. You can tangibly feel the improvement in games that don’t have frame generation active, and enabling it alongside XeSS 2 claws back the latency created by frame generation.
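To put rough numbers on why that pairing matters, here’s a toy click-to-photon latency model. Every value is made up for illustration; none of these are Intel’s measurements:

```python
# Toy end-to-end latency model (all values illustrative, not Intel data).
render_ms = 1000 / 48            # ~20.8 ms to render one frame at 48 fps
queue_ms = 25.0                  # assumed driver/game render-queue buffering
framegen_hold_ms = render_ms     # frame gen holds the next rendered frame before interpolating
reduced_queue_ms = 5.0           # assume a low-latency mode shrinks the queue to ~5 ms

scenarios = {
    "baseline (no frame gen)": queue_ms + render_ms,
    "frame gen only": queue_ms + render_ms + framegen_hold_ms,            # smoother, but laggier
    "frame gen + latency reduction": reduced_queue_ms + render_ms + framegen_hold_ms,
}
for label, ms in scenarios.items():
    print(f"{label}: ~{ms:.0f} ms click-to-photon")
```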
You can see the possible improvements in the slide below, which shows how F1 24 performs with a variety of XeSS features (supersampling, frame gen, XeLL) active. It really illustrates the need for a latency-reduction feature alongside frame generation.
Latency reduction is so critical to frame gen “feeling right” that Intel requires developers to include XeLL as part of the wider XeSS 2 package, following in Nvidia’s footsteps. As with DLSS 3 and Reflex, you may see the options presented separately in some games, while others will silently enable them together – it’s up to the developer.
Battlemage brings the heat?
Always take vendor numbers with a big pinch of salt. We’ve seen vendor benchmark controversies over the years, including this year. Corporate marketing exists first and foremost to sell stuff to you. Hashtag: Wait for benchmarks, et cetera, et cetera.
All that said, while Battlemage doesn’t push for the bleeding edge of performance, I’m wildly excited by what I see on paper here. Budget GPUs have been an absolute quagmire ever since the pandemic, with none of the current Nvidia or AMD offerings being very compelling. They feel like rip-offs.
Intel’s Arc B580 and B570 feel like genuine value offerings, finally giving gamers without deep pockets an enticing 1440p option that’s actually affordable – something we haven’t seen this decade despite 1440p gaming becoming the new norm. Delivering better-than-4060 performance and 12GB of VRAM for $250 is downright killer if Intel hits all its promises, especially paired with what looks to be a substantial increase to Arc’s already-good ray tracing performance. And with XeSS 2 and XeLL, Intel is keeping pace with Nvidia’s advanced features – assuming developers embrace them as wholeheartedly as they did first-gen XeSS.
Add it all up and I’m excited for a truly mainstream GPU for the first time in a long time. The proof is in the pudding (again, wait for independent benchmarks!), but Intel seems to be brewing up something spicy indeed with Battlemage and the Arc B580.
Like everywhere else on the internet, LinkedIn is awash in AI-generated content. It’s a perfect fit. As first reported by Wired, a new study has found that more than half of the longer posts on LinkedIn were likely constructed using some form of generative AI. Anyone who has spent any amount of time on LinkedIn won’t be shocked.
Wired had exclusive access to a study performed by AI detection startup Originality AI. According to the publication, Originality scanned 8,795 public English-language LinkedIn posts that were more than 100 words long and published between January 2018 and October 2024. Of those, 54 percent were likely AI-generated. According to the study, there was a huge spike in early 2023, shortly after OpenAI released ChatGPT, but the share has since leveled off.
LinkedIn is a social media site aimed at helping people get a job and build a professional network. Interactions on the site have long felt like an unnecessary corporate meeting or a sterile job interview. The site has been steeped in corporate culture and stilted corporate speech—that kind of aggressively bland talk that’s drained of all color and joy. It’s the kind of writing LLMs are perfect at replicating.
In the corporate world, it’s best to talk in buzzwords and jargon. LinkedIn even has a tool built in for premium subscribers that lets them cut out third-party sites like ChatGPT. After entering a minimum of 20 words into a post, subscribers can click a button and use AI to repackage their corporate content for the world.
In a world where pictures of shrimp Jesus are offending us on Facebook and grotesque Musk-as-chad pictures flood X, AI has found its perfect home on LinkedIn. But not all are happy. “Some people engaged positively, appreciating the clarity and structure of the posts. Others were skeptical or critical, often focusing on the fact that AI was involved rather than the content itself,” entrepreneur Zack Fosdyck told Wired. “I find it fascinating how polarizing this technology can be, especially since tools like calculators or spellcheck, which are also forms of assistance, are widely accepted.”
The difference is that calculators and spellcheck do not serve to substitute and replace basic human interaction. Context matters too. It’s impossible for an AI-generated LinkedIn post to offend me. But if I caught a friend using Google’s new systems to generate a personal response to a text message? I’d be pissed.
Yesterday, Lance Eliot—a “world-renowned AI scientist” who once appeared on 60 Minutes—published an op-ed on Forbes that advocated for using ChatGPT to make Thanksgiving peaceful. Why bother engaging with your family when you can have an AI do it for you?
The post reads like ChatGPT wrote it. It’s got all the hallmarks: a lead that sounds like it’s written to satisfy a high school English class grading rubric, bullet points that walk through the essay’s talking points, and calls to action that focus on the non-controversial. At the end of the essay, Eliot offers a final piece of advice for those with an angry turkey-day guest who just wants to argue.
“A last resort might be to ask the person to go somewhere that offers solitude at your event and have them argue with generative AI,” he says. “Have the person engage in their heated argument with AI. They can do this until the cows come home. It might allow them to vent their anger. The AI can take it, don’t worry about that. Once they’ve done all their chirping and whirling, they can rejoin the group if they are going to henceforth be peaceful and thankful.”
I like to think (and the sooner the better) of a cybernetic meadow where people who would exile difficult people into a room to battle with generative AI are themselves exiled to a land of AI-generated LinkedIn posts. Let the Eliots of the world retreat from the complexities of life into a land of corporate speak and ChatGPT-led interactions.
Give me the meat of human interaction. I want the fights over politics with difficult relatives, the anger and sadness of genuine human conversation, and all the joys and pains that come with it. Let the anodyne world of LLMs live on LinkedIn. Do not bring it into your life or your home.