The Race to Create the Perfect EV Tire

https://www.wired.com/story/the-race-to-create-the-perfect-ev-tire/

In 1845, somewhere between inventing a system for detonating explosives by electricity and devising the refillable fountain pen, Robert Thomson, a Scottish engineer and entrepreneur, patented the first pneumatic tire—a then wondrous, now everyday item that has been gradually evolving ever since.

Now, in the era of electric vehicles, tires are more in focus than ever before. On the one hand, while passenger safety remains a priority, the right tires can significantly improve efficiency, and thereby the range of your EV; on the other, tires are also a source of noise and pollution.

Since the traditional global tire market is worth well over $200 billion, and 2.5 billion tires are sold a year worldwide, manufacturers are rubbing their hands at the coming death of pure combustion cars, and gearing up for a battle to fashion the ideal balance of eco credentials, performance, and efficiency that will create the perfect EV tire. Whoever wins will secure quite the prize.

Rolling Resistance or Longevity?

Range optimization has been the primary concern so far. According to Michelin, the efficiency difference between good and bad tires can be as much as 7 percent. Better tires reduce rolling resistance, meaning a car will coast farther before coming to a stop, and will therefore need less energy to travel the same distance. A 7 percent increase in efficiency gives an EV that much more range: if it could go 300 miles on a poor tire, it will travel 321 miles on a good one.
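
The arithmetic is simple; as a quick sketch, assuming (as the article does) that an efficiency gain translates one-to-one into range:

```python
def range_with_better_tires(base_range_miles: float, efficiency_gain: float = 0.07) -> float:
    """Estimated range after fitting lower-rolling-resistance tires.

    Assumes the efficiency gain carries straight through to range,
    per Michelin's cited 7 percent figure.
    """
    return base_range_miles * (1 + efficiency_gain)

print(round(range_with_better_tires(300)))  # 321, matching the article's example
```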

“There are several tire components that can influence rolling resistance,” says Thomas Wanka, principal technology development engineer at Continental, a company that has been exploring EV tire design through its association with electric motorsport series Extreme E. “These include the rubber compound and the tread.”

Manufacturers are experimenting with nanomaterials in their tires, such as nanocarbon and nanosilica, to improve performance, traction, and durability. There is also research into bio-based alternative compounds such as guayule and dandelion rubber.

You can reduce rolling resistance by reducing tread depth, but this also means the tire won’t last so long and produces increased noise. Continental, however, thinks it has the answer. “We have developed special soft rubber compounds that allow us to reduce rolling resistance and noise at the same time without sacrificing mileage,” says Wanka.

via Wired Top Stories https://www.wired.com

November 15, 2024 at 06:34AM

The Norwegian Company Blamed for California’s Hydrogen Car Woes

https://www.wired.com/story/the-norwegian-company-blamed-for-californias-hydrogen-car-woes/

A California court has advanced a civil fraud case against a Norwegian company at the center of the state’s failure to build workable hydrogen fueling infrastructure, which has already left thousands of car owners in the lurch.

A case involving allegations of fraud against Oslo-based Nel ASA is moving toward a trial in October 2026 after a California judge left intact the core claims brought by a major player in the rollout of hydrogen infrastructure in the state, Iwatani Corporation of America, a subsidiary of one of Japan’s largest industrial gas companies.

The allegations center on a lesser-known aspect of the bungled rollout: Iwatani is claiming that Nel duped it into buying faulty hydrogen fueling stations. And the case has provided a window into the extent to which these same stations were provided to and promoted by major players like Toyota and Shell—stations that have since been abandoned or shut down.

The judge’s ruling last month leaves Nel and its top executives—including current and former CEOs Robert Borin and Håkon Volldal—in the crosshairs. Iwatani’s central claim is that Nel, under pressure to sell a money-losing product, knowingly induced Iwatani to purchase untested hydrogen fueling stations with false assurances of the technology’s real-world readiness.

Nel denies the allegations, and has put forward procedural arguments to get the case thrown out, saying that California does not have jurisdiction over the company or its executives.

In separate rulings, Judge James Selna of the Central District of California sided with Iwatani on the core claims, while dismissing several others, finding that California does in fact have jurisdiction and that the allegations go beyond a simple breach of contract and into the realm of fraud in selling the equipment, known as H2Stations.

The judge ruled that there was “active concealment,” citing, among other examples, that Nel did not disclose that it had never built a working model of the H2Station or sufficiently tested it in real-world conditions, and that it had no actual data to support its H2Stations’ performance claims.

After the lawsuit was filed in January, Nel abandoned the seven Iwatani hydrogen fueling stations and executed a corporate spinout of its fueling division—which Iwatani claims is a means of shielding those assets from a potential court judgment.

via Wired Top Stories https://www.wired.com

November 19, 2024 at 07:07AM

How the largest gathering of US police chiefs is talking about AI

https://www.technologyreview.com/2024/11/19/1106979/how-the-largest-gathering-of-us-police-chiefs-is-talking-about-ai/

This story is from The Algorithm, our weekly newsletter on AI.

It can be tricky for reporters to get past certain doors, and the door to the International Association of Chiefs of Police conference is one that’s almost perpetually shut to the media. Thus, I was pleasantly surprised when I was able to attend for a day in Boston last month. 

It bills itself as the largest gathering of police chiefs in the United States, where leaders from many of the country’s 18,000 police departments and even some from abroad convene for product demos, discussions, parties, and awards. 

I went along to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: If your department is slow to adopt AI, fix that now. The future of policing will rely on it in all its forms.

In the event’s expo hall, the vendors (of which there were more than 600) offered a glimpse into the ballooning industry of police-tech suppliers. Some had little to do with AI: booths showcased body armor, rifles, and prototypes of police-branded Cybertrucks, while others displayed new types of gloves promising to protect officers from needles during searches. But one needed only to look to where the largest crowds gathered to understand that AI was the major draw. 

The hype focused on three uses of AI in policing. The flashiest was virtual reality, exemplified by the booth from V-Armed, which sells VR systems for officer training. On the expo floor, V-Armed built an arena complete with VR goggles, cameras, and sensors, not unlike the one the company recently installed at the headquarters of the Los Angeles Police Department. Attendees could don goggles and go through training exercises on responding to active shooter situations. Many competitors of V-Armed were also at the expo, selling systems they said were cheaper, more effective, or simpler to maintain. 

The pitch on VR training is that in the long run, it can be cheaper and more engaging to use than training with actors or in a classroom. “If you’re enjoying what you’re doing, you’re more focused and you remember more than when looking at a PDF and nodding your head,” V-Armed CEO Ezra Kraus told me. 

The effectiveness of VR training systems has yet to be fully studied, and they can’t completely replicate the nuanced interactions police have in the real world. AI is not yet great at the soft skills required for interactions with the public. At a different company’s booth, I tried out a VR system focused on deescalation training, in which officers were tasked with calming down an AI character in distress. It suffered from lag and was generally quite awkward—the character’s answers felt overly scripted and programmatic. 

The second focus was on the changing way police departments are collecting and interpreting data. Rather than buying a gunshot detection tool from one company and a license plate reader or drone from another, police departments are increasingly using expanding suites of sensors, cameras, and so on from a handful of leading companies that promise to integrate the data collected and make it useful. 

Police chiefs attended classes on how to build these systems, like one taught by Microsoft and the NYPD about the Domain Awareness System, a web of license plate readers, cameras, and other data sources used to track and monitor crime in New York City. Crowds gathered at massive, high-tech booths from Axon and Flock, both sponsors of the conference. Flock sells a suite of cameras, license plate readers, and drones, offering AI to analyze the data coming in and trigger alerts. These sorts of tools have come in for heavy criticism from civil liberties groups, which see them as an assault on privacy that does little to help the public. 

Finally, as in other industries, AI is also coming for the drudgery of administrative tasks and reporting. Many companies at the expo, including Axon, offer generative AI products to help police officers write their reports. Axon’s offering, called Draft One, ingests footage from body cameras, transcribes it, and creates a first draft of a report for officers. 

“We’ve got this thing on an officer’s body, and it’s recording all sorts of great stuff about the incident,” Bryan Wheeler, a senior vice president at Axon, told me at the expo. “Can we use it to give the officer a head start?”

On the surface, it’s a writing task well suited for AI, which can quickly summarize information and write in a formulaic way. It could also save lots of time officers currently spend on writing reports. But given that AI is prone to “hallucination,” there’s an unavoidable truth: Even if officers are the final authors of their reports, departments adopting these sorts of tools risk injecting errors into some of the most critical documents in the justice system. 

“Police reports are sometimes the only memorialized account of an incident,” wrote Andrew Ferguson, a professor of law at American University, in July in the first law review article about the serious challenges posed by police reports written with AI. “Because criminal cases can take months or years to get to trial, the accuracy of these reports are critically important.” Whether certain details were included or left out can affect the outcomes of everything from bail amounts to verdicts. 

By showing an officer a generated version of a police report, the tools also expose officers to details from their body camera recordings before they complete their report, a document intended to capture the officer’s memory of the incident. That poses a problem. 

“The police certainly would never show video to a bystander eyewitness before they ask the eyewitness about what took place, as that would just be investigatory malpractice,” says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, who will soon publish work on the subject. 

A spokesperson for Axon says this concern “isn’t reflective of how the tool is intended to work,” and that Draft One has robust features to make sure officers read the reports closely, add their own information, and edit the reports for accuracy before submitting them.

My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States—the precise figure is not even known—have remarkably high levels of autonomy to decide which AI tools they’ll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it’s unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy. 

That will only become more apparent under the incoming Trump administration. In a policing agenda released last year during his campaign, Trump encouraged more aggressive tactics like “stop and frisk,” deeper cooperation with immigration agencies, and increased liability protection for officers accused of wrongdoing. The Biden administration is now reportedly attempting to lock in some of its proposed policing reforms before January. 

Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves.

“Ultimately, these are for-profit companies, and their customers are law enforcement,” says Stanley. “They do what their customers want, in the absence of some very large countervailing threat to their business model.”


Now read the rest of The Algorithm

Deeper Learning

The AI lab waging a guerrilla war over exploitative AI

When generative AI tools landed on the scene, artists were immediately concerned, seeing them as a new kind of theft. Computer security researcher Ben Zhao jumped into action in response, and his lab at the University of Chicago started building tools like Nightshade and Glaze to help artists keep their work from being scraped up by AI models. My colleague Melissa Heikkilä spent time with Zhao and his team to look at the ongoing effort to make these tools strong enough to stop AI’s relentless hunger for more images, art, and data to train on.  

Why this matters: The current paradigm in AI is to build bigger and bigger models, and these require vast data sets to train on. Tech companies argue that anything on the public internet is fair game, while artists demand compensation or the right to refuse. Settling this fight in the courts or through regulation could take years, so tools like Nightshade and Glaze are what artists have for now. If the tools disrupt AI companies’ efforts to make better models, that could push them to the negotiating table to bargain over licensing and fair compensation. But it’s a big “if.” Read more from Melissa Heikkilä.

Bits and Bytes

Tech elites are lobbying Elon Musk for jobs in Trump’s administration

Elon Musk is the tech leader who most has Trump’s ear. As such, he’s reportedly the conduit through which AI and tech insiders are pushing to have an influence in the incoming administration. (The New York Times)

OpenAI is getting closer to launching an AI agent to automate your tasks

AI agents—models that can do tasks for you on your behalf—are all the rage. OpenAI is reportedly closer to releasing one, news that comes a few weeks after Anthropic announced its own. (Bloomberg)

How this grassroots effort could make AI voices more diverse

A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative. (MIT Technology Review)

Google DeepMind has a new way to look inside an AI’s “mind”

Autoencoders let us peer into the black box of artificial intelligence. They could help us create AI that is better understood and more easily controlled. (MIT Technology Review)

Musk has expanded his legal assault on OpenAI to target Microsoft

Musk has expanded his federal lawsuit against OpenAI, which alleges that the company has abandoned its nonprofit roots and obligations. He’s now going after Microsoft too, accusing it of antitrust violations in its work with OpenAI. (The Washington Post)

via Technology Review Feed – Tech Review Top Stories https://ift.tt/rCOqjJk

November 19, 2024 at 04:08AM

Meet dAIsy, the AI Grandma Who Scammers Wish They’d Never Called

https://www.geeksaresexy.net/2024/11/15/meet-daisy-the-ai-grandma-who-scammers-wish-theyd-never-called/

Daisy the AI Grandma

Meet dAIsy, the AI grandma who’s single-handedly turning the tables on scammers—and doing it with all the charm of a sweet old lady who’s just had her third cup of tea. Created by O2, Daisy isn’t here to knit sweaters or bake cookies. No, Daisy is here to talk. And talk. And talk some more. She’s the ultimate scammer repellent, armed with an endless supply of rambling nonsense and a voice so convincing you’d swear she’s about to ask you if you’re eating enough vegetables.

Here’s how she operates: when a scammer calls, Daisy picks up with all the warmth of a loving grandma who just loves to chat. Got a question about her bank account? Oh, you’ll get an answer all right—after she tells you about her nephew’s wedding, her cat’s peculiar eating habits, and why they don’t make tea kettles like they used to. By the time she’s done, the scammer will have aged faster than their victim ever could.

And it works! O2 reports that Daisy has kept some scammers on the phone for an astonishing 40 minutes. That’s nearly an episode of Bake Off spent listening to her passion for knitting scarves for pigeons. One unlucky fraudster reportedly hung up after Daisy gave him some “personal” details—like a bank account number that spelled out “NO-MONEY-FOR-YOU.”

The genius behind Daisy is a custom-trained AI that’s programmed to generate lifelike responses in real time. She hears what the scammer says, cooks up a response with the cunning of a master troll, and speaks back in a voice that would convince anyone she’s about to invite them over for tea and biscuits. It’s like ChatGPT, but if ChatGPT had a fondness for tangents about gardening.
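
O2 hasn’t published Daisy’s internals, but the loop described—hear the scammer, generate a rambling reply, speak it back—is a standard speech pipeline: speech-to-text, a language model, text-to-speech. A toy sketch of the middle, response-generation stage, with made-up tangents (nothing here is O2’s actual code):

```python
import random

# Hypothetical tangents; Daisy's real training data and prompts are not public.
TANGENTS = [
    "my nephew's wedding was simply lovely",
    "the cat has the most peculiar eating habits",
    "they don't make tea kettles like they used to",
]

def generate_reply(scammer_line: str) -> str:
    """Stand-in for the response-generation stage: ramble, then circle back."""
    tangent = random.choice(TANGENTS)
    return f"Oh, that reminds me, {tangent}... now, what was it you asked? '{scammer_line}'?"

def handle_turn(scammer_transcript: str) -> str:
    # In the real system each turn would be: audio in -> speech-to-text ->
    # response generation -> text-to-speech out, streamed fast enough
    # to hold a natural phone conversation.
    return generate_reply(scammer_transcript)

print(handle_turn("Can you confirm your account number?"))
```

The time-wasting comes from the generation stage never answering directly, while the convincing voice is pure text-to-speech.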

Murray Mackenzie, Director of Fraud at Virgin Media O2, describes Daisy as a scammer’s worst nightmare. “We’re essentially weaponizing British politeness,” he said, probably while sipping tea and feeling very pleased with himself. Daisy doesn’t just waste scammers’ time—she actively ruins their day. And for the millions of people constantly worried about falling victim to fraud, that’s a win.

If you’re in the UK and get a scammy call, you can report it to 7726 for free, and maybe Daisy will step in to “help.” Just picture the scammer furiously taking notes as Daisy prattles on about how she thinks her account number starts with a 4…or maybe it’s a 7…or was that her library card?

So, here’s to Daisy, the AI grandma we didn’t know we needed. Scammers beware: she’s got all day, a never-ending supply of nonsense, and absolutely no filter.

via [Geeks Are Sexy] Technology News https://ift.tt/Vhr62Sc

November 15, 2024 at 08:51AM