How Sensors Using Quantum Entanglement Could Improve Earthquake Detection

https://www.discovermagazine.com/the-sciences/how-sensors-using-quantum-entanglement-could-improve-earthquake-detection


One of the scariest things about earthquakes is not the damage they cause but the uncertainty of when and where the next one will strike. The start of 2023 has already brought significant seismic activity, with February quakes in Turkey and Syria killing tens of thousands of people.

Many experts predict this type of destructive earthquake activity will only continue, threatening other at-risk areas around the globe.

Although scientists cannot predict when an earthquake may strike, many are developing sensitive devices that could improve earthquake detection. One such device is the quantum sensor.

Quantum sensors suspend atoms at ultra-cold temperatures (near absolute zero) in laser arrays, allowing them to detect minute changes in gravity, and they become even more sensitive when the atoms are quantum entangled.

While this setup offers more thorough data for improving earthquake models, it can be costly.

How to Measure an Earthquake

Current detection methods leverage a network of seismographs around the globe. At each network node, any tremor, or even a rock slipping, could trigger a measurement from the seismograph.

“Current systems are made up of accelerometers [seismographs] that detect the earliest seismic wave arrivals, which are pressure waves [p-waves] that move the ground but aren’t as destructive as subsequent shear waves [s-waves] which travel slower,” says Daniel Boddice, a professor at the University of Birmingham who has a Ph.D. in civil engineering.


The Limitations of Seismographs

While these measurements can help triangulate the quake’s epicenter, they have significant limitations. “This gives some warning but means [seismographs] can’t produce decisive event warnings because they only see something when the ground starts to shake,” Boddice adds.

In other words, the seismographs can only measure waves as the earthquake happens, forcing scientists into a race against time to warn at-risk areas before it is too late.

Quantum sensors' heightened sensitivity to changes in gravity can yield data before the shaking arrives, buying valuable time to issue a warning.

Quantum Entanglement

By combining clouds of atoms with a web of lasers, scientists can monitor fluctuations in individual atoms within that web, yielding large amounts of accurate data.

Scientists are working on adding quantum entanglement to these sensors, in which particles within the apparatus would be entangled, meaning their quantum states are interdependent. When this happens, the atoms are less susceptible to environmental noise, giving more precise readings of gravity fluctuations.
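To see why entanglement helps, consider how measurement precision scales with atom number: N independent atoms give an uncertainty that shrinks as 1/sqrt(N) (the standard quantum limit), while ideally entangled atoms can approach 1/N (the Heisenberg limit). Here is a minimal numeric sketch of that scaling; the atom counts are made up, and this illustrates the principle rather than modeling any real sensor:

```python
import math

# Illustrative scaling of phase-measurement uncertainty with atom number N:
# independent atoms follow the standard quantum limit (~1/sqrt(N)), while
# ideally entangled atoms can approach the Heisenberg limit (~1/N).
# Atom counts are made up; this models the principle, not a real device.
for n_atoms in (100, 10_000, 1_000_000):
    sql = 1 / math.sqrt(n_atoms)     # standard quantum limit
    heisenberg = 1 / n_atoms         # ideal entangled-state limit
    print(f"N={n_atoms:>9,}  SQL={sql:.1e}  "
          f"Heisenberg={heisenberg:.1e}  gain={sql / heisenberg:,.0f}x")
```

In practice, decoherence keeps real sensors well short of the Heisenberg limit, which is one reason the fragility discussed below matters so much.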

Although entangled quantum sensors may avoid some problems that plague traditional seismographs, such as signal jamming, they have issues of their own, chiefly the fragility of the entire system.

Quantum entanglement is incredibly fragile and can break quickly. That makes the implementation and maintenance of such a system difficult and costly. But research is underway to make these systems more robust, especially for creating other devices like quantum computers.

Improving Earthquake Detection

Boddice is one of the many researchers looking into leveraging these quantum devices for an improved earthquake detection system.

“By adding a network of permanently monitoring gravimeter sensors, if you detected a mass shift caused by the plate movement on multiple detectors simultaneously, you’d have an earlier warning,” Boddice says. Because the gravitational signal travels at the speed of light, that early detection could then be confirmed once the accelerometers began picking up wave arrivals.
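The head start comes from simple arithmetic: the gravity signal is effectively instantaneous, while the first seismic arrivals travel at only a few kilometers per second. A back-of-the-envelope sketch, using a typical textbook p-wave speed rather than any figure from Boddice:

```python
# Rough head start a light-speed gravity signal gives over the first
# seismic (p-wave) arrival. The p-wave speed is a typical textbook
# value for the crust, not a figure from the article.
C = 299_792.0   # speed of light, km/s
V_P = 7.0       # approximate crustal p-wave speed, km/s

for distance_km in (50, 100, 300):
    t_gravity = distance_km / C          # effectively instantaneous
    t_pwave = distance_km / V_P
    print(f"{distance_km:>3} km from epicenter: p-wave arrives in "
          f"{t_pwave:5.1f} s, head start ~{t_pwave - t_gravity:.1f} s")
```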

Combining quantum sensors with traditional seismographs could provide more precise data for researchers to use in earthquake models, leading to better hot-spot predictions and more effective warning systems.

Benefits of Quantum Sensing

The sensitivity gained through this sort of quantum sensing “has the potential of saving thousands of lives by providing the critical extra seconds needed to reach safer locations at the onset of an earthquake,” says Anjul Loiacono, vice president of Quantum Signal Processing at Infleqtion (formerly ColdQuanta), a quantum computing company developing quantum sensors for gravity measurement.

Though the warning window may be expanded only slightly, Boddice believes that this extra time can still make a difference in reducing fatalities during an earthquake.

“For example, if a rail under a train buckles while the train is moving, it will crash; so it’s better to stop the train to avoid that impact,” he says. “You can take action to avoid people getting into riskier places.”

This might apply to the operation of elevators, or closing entrances to tunnels where people may get trapped when an earthquake starts. You could also shut off power and gas networks to avoid fires in the event of a rupture during a tremor.

“Individually these actions might seem like tiny things, but cumulatively for a big enough earthquake, they might make a significant difference to casualties,” Boddice adds.

Limitations of Earthquake Predictions

While these devices can improve the warning time, they also may be too sensitive, which poses other challenges.

“The high sensitivity of the quantum gravimeters is both a blessing and a curse,” Boddice says.

That’s because all sorts of forces, such as vehicle traffic, send vibrations through the Earth that can register on these sensitive devices. This creates the challenge of separating background noise from small but important gravity signals.

As a result, some scientists are turning to machine-learning algorithms, which can help distinguish background noise from a genuine earthquake signal.

“Combining the power of machine learning with quantum gravitational sensing technology will lead to faster and more precise detection of imminent earthquakes,” Loiacono says.
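As a toy illustration of that pairing (entirely synthetic data, assuming scikit-learn is available; this is not anyone's actual pipeline), a standard classifier could be trained on simple features of a signal window, such as peak amplitude and dominant frequency, to separate traffic-like vibrations from quake-like ones:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic two-feature windows: [peak amplitude, dominant frequency (Hz)].
# Traffic-like noise: small amplitude, higher frequency; quake-like signal:
# larger amplitude, lower frequency. Distributions are purely illustrative.
noise = np.column_stack([rng.normal(1.0, 0.3, 500), rng.normal(15, 4, 500)])
quake = np.column_stack([rng.normal(3.0, 0.8, 500), rng.normal(3, 1.5, 500)])

X = np.vstack([noise, quake])
y = np.array([0] * 500 + [1] * 500)   # 0 = noise, 1 = quake

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[2.8, 2.5], [0.9, 17.0]]))   # expect [1 0]
```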

Machine-learning algorithms can also help predict trends for future earthquake activity.

Are Earthquakes More Common Today?

Recent data appears to show a significant spike in the number of major earthquakes within the past two years, and some predict that this trend will continue.

However, that apparent increase largely reflects improved detection capabilities, more vigilant reporting and the greater impact of earthquakes on increasingly populated and developed areas.

“There’s a long history of detecting earthquakes, and they aren’t any more common now than in the past,” Boddice says. “I suspect a combination of more populated areas increases the chances of them being noticed or more reporting on them due to 24-hour rolling news.”

He suggests that an objective, comprehensive and long-term look at the trend would actually reveal they are no more common now than at any other point in time.


via Discover Main Feed https://ift.tt/jNMKzTF

March 21, 2023 at 05:13PM

Google Rolls Out Its Bard Chatbot to Battle ChatGPT

https://www.wired.com/story/google-bard-chatbot-rolls-out-to-battle-chatgpt/


Google isn’t used to playing catch-up in either artificial intelligence or search, but today the company is hustling to show that it hasn’t lost its edge. It’s starting the rollout of a chatbot called Bard to do battle with the sensationally popular ChatGPT.

Bard, like ChatGPT, will respond to questions about, and discuss, an almost inexhaustible range of subjects with what sometimes seems like humanlike understanding. Google showed WIRED several examples, including asking for activities for a child who is interested in bowling and requesting 20 books to read this year.

Bard is also like ChatGPT in that it will sometimes make things up and act weird. Google disclosed an example of it misstating the name of a plant suggested for growing indoors. “Bard’s an early experiment, it’s not perfect, and it’s gonna get things wrong occasionally,” says Eli Collins, a vice president of research at Google working on Bard.

Google says it has made Bard available to a small number of testers. From today, anyone in the US and the UK will be able to apply for access.

The bot will be accessible via its own web page and separate from Google’s regular search interface. It will offer three answers to each query—a design choice meant to impress upon users that Bard is generating answers on the fly and may sometimes make mistakes.

Google will also offer a recommended query for a conventional web search beneath each Bard response. And it will be possible for users to give feedback on its answers to help Google refine the bot by clicking a thumbs-up or thumbs-down, with the option to type in more detailed feedback.

Google says early users of Bard have found it a useful aid for generating ideas or text. Collins also acknowledges that some have successfully got it to misbehave, although he did not specify how or exactly what restrictions Google has tried to place on the bot.

Bard and ChatGPT show enormous potential and flexibility but are also unpredictable and still at an early stage of development. That presents a conundrum for companies hoping to gain an edge in advancing and harnessing the technology. For a company like Google with large established products, the challenge is particularly difficult.

Both chatbots use powerful AI models that predict the words that should follow a given sentence based on statistical patterns gleaned from enormous amounts of text training data. This turns out to be an incredibly effective way of mimicking human responses to questions, but it means that the algorithms will sometimes make up, or “hallucinate,” facts—a serious problem when a bot is supposed to be helping users find information or search the web.

ChatGPT-style bots can also regurgitate biases or language found in the darker corners of their training data, for example around race, gender, and age. They also tend to reflect back the way a user addresses them, causing them to readily act as if they have emotions and to be vulnerable to being nudged into saying strange and inappropriate things.

via Wired Top Stories https://www.wired.com

March 21, 2023 at 09:09AM

Chinese Dating App Does the Swiping for Singles to Find Love

https://gizmodo.com/china-dating-app-palm-guixi-1850245279


China’s new state-sponsored dating app, Palm Guixi, is something right out of the dystopian fiction handbook, and it is receiving mixed responses. The app was reportedly created to streamline the dating process for residents in Jiangxi by matching single users based on background data gathered by the app itself.

Palm Guixi works to create what it deems an appropriate match for singles looking for love. Unlike Tinder, Bumble, Hinge, or any of the other swipe-right dating apps, it uses users’ background data to pick suitors for them. According to China Youth Daily, the platform also works to organize blind dates once a match is approved, The Guardian reported.

The Chinese government reportedly launched the app in an effort to boost the marriage rate, which has steadily declined over the past decade. A report by China’s Ministry of Civil Affairs found not only that fewer residents are getting married but also that, of those who are, roughly half are 30 or older. China’s marriage rate peaked in 2011 with 9.7 million registered marriages, a figure that plummeted to an all-time low of 7.6 million in 2021, Fortune reported in 2022.

This significant drop has been welcomed by some younger people in China, who say the government’s hardened stance on divorce has deterred many of them from pursuing marriage, according to recent reporting by the South China Morning Post. China introduced a new law requiring a 30-day “cooling off” period even if a couple mutually agrees to divorce. If, at the end of the 30 days, the couple still wants to divorce, they are required to reapply for the split, but lawyers say the outcome of having the divorce approved can be unpredictable.

Chinese residents took to Weibo to support the decreasing marriage rate, with one writing, “Marriage is like a gamble. The problem is that ordinary people can’t afford to lose, so I choose not to take part,” the outlet reported.

China’s push to get young people dating also comes as its population fell last year, with a record-low 6.77 births per 1,000 people. Some Jiangxi residents are now pushing back, saying the government introduced the app only because it wants to reverse the falling birth rate.

The Guardian reported commenters on Weibo are now speaking out against Palm Guixi, with one user saying the Chinese government expects its people to “breed like pigs.”

via Gizmodo https://gizmodo.com

March 21, 2023 at 07:14AM

Is ChatGPT Closer to a Human Librarian Than It Is to Google?

https://gizmodo.com/chatgpt-ai-openai-like-a-librarian-search-google-1850238908


The prominent model of information access and retrieval before search engines became the norm – librarians and subject or search experts providing relevant information – was interactive, personalized, transparent and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal.

A new generation of artificial intelligence-based information access systems, which includes Microsoft’s Bing/ChatGPT, Google/Bard and Meta/LLaMA, is upending the traditional search engine mode of search input and output. These systems are able to take full sentences and even paragraphs as input and generate personalized natural language responses.

At first glance, this might seem like the best of both worlds: personable and custom answers combined with the breadth and depth of knowledge on the internet. But as a researcher who studies search and recommendation systems, I believe the picture is mixed at best.

AI systems like ChatGPT and Bard are built on large language models. A language model is a machine-learning technique that uses a large body of available texts, such as Wikipedia and PubMed articles, to learn patterns. In simple terms, these models figure out what word is likely to come next, given a set of words or a phrase. In doing so, they are able to generate sentences, paragraphs and even pages that correspond to a query from a user. On March 14, 2023, OpenAI announced the next generation of the technology, GPT-4, which works with both text and image input, and Microsoft announced that its conversational Bing is based on GPT-4.
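A stripped-down way to see the “what word comes next” idea is a bigram counter over a toy corpus. Real systems like GPT-4 use neural networks over subword tokens and vastly more data; this sketch only illustrates the statistical intuition described above:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent continuation. Real LLMs use
# neural networks over subword tokens, not raw bigram counts.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))   # 'on'  -- always followed "sat" in training
print(predict_next("the"))   # 'cat' -- tied with 'dog'; first seen wins
```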

Thanks to training on large bodies of text, fine-tuning and other machine-learning methods, this type of information retrieval technique works quite effectively. The large language model-based systems generate personalized responses to fulfill information queries. People have found the results so impressive that ChatGPT reached 100 million users in one-third of the time it took TikTok to get to that milestone. People have used it not only to find answers but to generate diagnoses, create dieting plans and make investment recommendations.

ChatGPT’s Opacity and AI ‘hallucinations’

However, there are plenty of downsides. First, consider what is at the heart of a large language model – a mechanism through which it connects the words and presumably their meanings. This produces an output that often seems like an intelligent response, but large language model systems are known to produce almost parroted statements without a real understanding. So, while the generated output from such systems might seem smart, it is merely a reflection of underlying patterns of words the AI has found in an appropriate context.

This limitation makes large language model systems susceptible to making up or “hallucinating” answers. The systems are also not smart enough to recognize when a question rests on an incorrect premise, and will answer such faulty questions anyway. For example, when asked which U.S. president’s face is on the $100 bill, ChatGPT answers Benjamin Franklin without realizing that Franklin was never president and that the premise that the $100 bill has a picture of a U.S. president is incorrect.

The problem is that even when these systems are wrong only 10% of the time, you don’t know which 10%. People also don’t have the ability to quickly validate the systems’ responses. That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they have used to come up with answers or how those responses are generated.

For example, you could ask ChatGPT to write a technical report with citations. But often it makes up these citations – “hallucinating” the titles of scholarly papers as well as the authors. The systems also don’t validate the accuracy of their responses. This leaves the validation up to the user, and users may not have the motivation or skills to do so or even recognize the need to check an AI’s responses. ChatGPT doesn’t know when a question doesn’t make sense, because it doesn’t know any facts.

AI stealing content – and traffic

While lack of transparency can be harmful to the users, it is also unfair to the authors, artists and creators of the original content from whom the systems have learned, because the systems do not reveal their sources or provide sufficient attribution. In most cases, creators are not compensated or credited or given the opportunity to give their consent.

There is an economic angle to this as well. In a typical search engine environment, the results are shown with the links to the sources. This not only allows the user to verify the answers and provides the attributions to those sources, it also generates traffic for those sites. Many of these sources rely on this traffic for their revenue. Because the large language model systems produce direct answers but not the sources they drew from, I believe that those sites are likely to see their revenue streams diminish.

Large language models can take away learning and serendipity

Finally, this new way of accessing information can also disempower people and take away their chance to learn. A typical search process allows users to explore the range of possibilities for their information needs, often triggering them to adjust what they’re looking for. It also affords them an opportunity to learn what is out there and how various pieces of information connect to accomplish their tasks. And it allows for accidental encounters, or serendipity.

These are very important aspects of search, but when a system produces the results without showing its sources or guiding the user through a process, it robs them of these possibilities.

Large language models are a great leap forward for information access, providing people with a way to have natural language-based interactions, produce personalized responses and discover answers and patterns that are often difficult for an average user to come up with. But they have severe limitations due to the way they learn and construct responses. Their answers may be wrong, toxic or biased.

While other information access systems can suffer from these issues, too, large language model AI systems also lack transparency. Worse, their natural language responses can help fuel a false sense of trust and authoritativeness that can be dangerous for uninformed users.

Chirag Shah, Professor of Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

via Gizmodo https://gizmodo.com

March 19, 2023 at 07:07AM

OpenAI Says Stonemasons Should Be Fine in the Brave New World

https://gizmodo.com/openai-ai-chatbot-coding-writers-jobs-penn-1850244785


A new paper released Monday says that 80% of the U.S. workforce will see the impact of large language models on their work. While some will only experience a moderate impact on their day-to-day workload, close to 20% of those working today will likely find about half of their tasks automated to some extent by AI.

The paper, posted to the Cornell-hosted preprint server arXiv, was led by several OpenAI researchers working alongside a researcher at nonprofit lab OpenResearch, which is chaired by OpenAI CEO Sam Altman, and a professor at the University of Pennsylvania. With the release of OpenAI’s latest LLM, GPT-4, the company is already promoting how it can score well on tests like the Biology Olympiad, but this report also analyzes likely applications that are feasible with current LLMs. AI already has text and code-generation capabilities (even as AI developers are still trying to convince people not to trust the content their AI creates), as well as the routinely discussed implications for art, speech, and video.

All in all, the paper strays from making declarative statements about job impacts. It instead analyzes jobs that are more likely to have some “exposure” to AI generation, meaning an LLM could cut the time needed to complete a job’s common tasks by at least 50%. Most high-paying white collar work will find AI pushing into its field. Those in science or “critical thinking” fields, as the paper calls them, will have less exposure, a nod to modern AI’s limitations in creating novel content. Programmers and writers, on the other hand, are likely to see quite a lot of exposure.
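Under that rubric as described here, a task counts as “exposed” if access to an LLM could cut its completion time by at least half, and a job’s exposure reflects how many of its tasks clear that bar. Below is a simplified sketch of that bookkeeping; the task lists and time savings are entirely hypothetical, and the paper’s actual rubric is more involved:

```python
# Simplified version of the "exposure" bookkeeping described above:
# a task counts as exposed if an LLM could cut its completion time by
# at least 50%. Task data below are illustrative assumptions only.
def exposure_share(task_time_reductions: list[float]) -> float:
    """Fraction of a job's tasks where estimated time saved >= 50%."""
    exposed = [r for r in task_time_reductions if r >= 0.5]
    return len(exposed) / len(task_time_reductions)

writer = [0.7, 0.6, 0.5, 0.2]        # drafting, editing, summarizing, interviews
stonemason = [0.0, 0.1, 0.0, 0.05]   # cutting, setting, finishing, site work

print(f"writer: {exposure_share(writer):.0%} of tasks exposed")          # 75%
print(f"stonemason: {exposure_share(stonemason):.0%} of tasks exposed")  # 0%
```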

Though the paper describes this “exposure” to AI without identifying whether it has any real labor-displacing effects, it’s not hard to identify the companies that are already looking at AI as a way to reduce labor costs. CNET’s parent company Red Ventures has been put under the microscope for its use of AI-written articles after it was discovered how inaccurate the articles often were. Earlier this month, CNET laid off around a dozen staffers. According to The Verge, editor-in-chief Connie Guglielmo became the company’s senior VP of AI content strategy.

Of course, there’s a difference between white collar workers playing around with ChatGPT and the workplace demanding workers use a chatbot to automate their work. An IBM and Morning Consult survey from last year said 66% of companies worldwide are either using or exploring AI, and that was before the AI hype train took on more coal at the end of last year with ChatGPT. The survey said 42% of that adoption was driven by the need to reduce costs and “automate key processes.” You can very well argue that programmers and writers engage in that previously mentioned “critical thinking” as much as anyone, but will some managers and company owners think the same way if they’re told AI can help them reduce head counts?

And of course, your average blue collar job won’t see any real impact. The paper singles out a few of these jobs, with oddly specific mentions of derrick operators, cutlers, pile driver operators, and stonemasons. Those without higher education degrees will experience less impact from AI, and some of these unaffected jobs, like short order cooks or dishwashers, are already far down the pay scale. It’s still unlikely that these jobs will be re-evaluated to finally offer these workers a living wage, even if other jobs may suffer.

And as these models improve over time, the effect will only grow. The U.S. Chamber of Commerce has proposed some light-touch regulation, and the Biden administration’s own Blueprint for an AI Bill of Rights says people should be able to “opt out” of AI systems in favor of human alternatives. The bill of rights does not really mention any ways to mitigate the impacts AI could have on the labor market. The report notes the “difficulty” of regulating AI because of its constant shifts, but it shies away from offering general principles lawmakers could follow.

Sure, the paper doesn’t analyze the likelihood of a job being substituted by AI, but it doesn’t take much to get there. The paper describes most jobs by their simple “tasks” rather than their application, which is a problem when you’re trying to discuss whether an AI can perform at the same level as a human. Simply put, at this point, AI-generated content is nowhere near surpassing the quality or originality of human-created content.

A report from the US-EU Trade and Technology Council, published by the White House last December, mentions AI can potentially “expos[e] large new swaths of the workforce to potential disruption.” The report mentioned that while past automation affected “routine” tasks, AI can affect “nonroutine” tasks. Now it’s up to employers to decide just how many “nonroutine” tasks they think they still need actual humans to perform.

via Gizmodo https://gizmodo.com

March 20, 2023 at 03:29PM

Scratched EV battery? Your insurer may have to junk the whole car

https://www.autoblog.com/2023/03/20/ev-battery-packs-insurers-junk-entire-car/


Damaged electric vehicles that have been written off by insurers are lined up at UK salvage company Synetiq’s yard in Doncaster, Britain. (Reuters)

 

LONDON/DETROIT — For many electric vehicles, there is no way to repair or assess even slightly damaged battery packs after accidents, forcing insurance companies to write off cars with few miles — leading to higher premiums and undercutting gains from going electric. 

And now those battery packs are piling up in scrapyards in some countries, a previously unreported and expensive gap in what was supposed to be a “circular economy.” 

“We’re buying electric cars for sustainability reasons,” said Matthew Avery, research director at automotive risk intelligence company Thatcham Research. “But an EV isn’t very sustainable if you’ve got to throw the battery away after a minor collision.” 

Battery packs can cost tens of thousands of dollars and represent up to 50% of an EV’s price tag, often making it uneconomical to replace them. 

While some automakers such as Ford and General Motors said they have made battery packs easier to repair, Tesla has taken the opposite tack with its Texas-built Model Y, whose new structural battery pack has been described by experts as having “zero repairability.” 

Tesla did not respond to a request for comment. 

A Reuters search of EV salvage sales in the U.S. and Europe shows a large portion of low-mileage Teslas, but also models from Nissan, Hyundai, Stellantis, BMW, Renault and others. 

 

EVs constitute only a fraction of vehicles on the road, making industry-wide data hard to come by, but the trend of low-mileage zero-emission cars being written off with minor damage is growing. Tesla’s decision to make battery packs “structural” – part of the car’s body – has allowed it to cut production costs but risks pushing those costs back to consumers and insurers. 

Tesla has not publicly addressed the problem of insurers writing off its vehicles. But in January CEO Elon Musk said premiums from third-party insurance companies “in some cases were unreasonably high.” 

Unless Tesla and other carmakers produce more easily repairable battery packs and provide third-party access to battery cell data, already-high insurance premiums will keep rising as EV sales grow and more low-mileage cars get scrapped after collisions, insurers and industry experts said. 

“The number of cases is going to increase, so the handling of batteries is a crucial point,” said Christoph Lauterwasser, managing director of the Allianz Center for Technology, a research institute owned by Allianz. 

Lauterwasser noted EV battery production emits far more CO2 than the manufacture of fossil-fuel models, meaning EVs must be driven for thousands of miles before they offset those extra emissions. 

“If you throw away the vehicle at an early stage, you’ve lost pretty much all advantage in terms of CO2 emissions,” he said. 
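Lauterwasser’s point is at heart a breakeven calculation: the battery’s extra production CO2 is repaid only gradually through lower per-mile emissions. Here is a rough sketch of that arithmetic; every figure below is an illustrative assumption, not a value from the article:

```python
# Break-even mileage: how far an EV must be driven before its lower
# per-mile emissions offset the extra CO2 from battery production.
# All figures below are illustrative assumptions, not from the article.
BATTERY_PRODUCTION_CO2_KG = 6_000    # extra CO2 to build the pack
ICE_CO2_PER_MILE_KG = 0.35           # typical gasoline car
EV_CO2_PER_MILE_KG = 0.12            # EV on an average grid mix

saving_per_mile = ICE_CO2_PER_MILE_KG - EV_CO2_PER_MILE_KG
breakeven_miles = BATTERY_PRODUCTION_CO2_KG / saving_per_mile
print(f"Break-even after ~{breakeven_miles:,.0f} miles")  # ~26,087 miles
```

Scrapping a lightly damaged car before that point forfeits the entire climate advantage, which is exactly the loss Lauterwasser describes.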

Most carmakers said their battery packs are repairable, though few seem willing to share access to battery data. Insurers, leasing companies and car repair shops are already fighting with carmakers in the EU over access to lucrative connected-car data. 

Lauterwasser said access to EV battery data is part of that fight. Allianz has seen scratched battery packs where the cells inside are likely undamaged, but without diagnostic data it has to write off those vehicles. 

Ford and GM tout their newer, more repairable packs. But the new, large 4680 cells in the Model Y made at Tesla’s Austin, Texas, plant, are glued into a pack that forms part of the car’s structure and cannot be easily removed or replaced, experts said. 

In January, Tesla’s Musk said the carmaker has been making design and software changes to its vehicles to lower repair costs and insurance premiums. 

The company also offers its own insurance product in a dozen U.S. states to Tesla owners at lower rates. 

Insurers and industry experts also note that EVs, because they are loaded with all the latest safety features, so far have had fewer accidents than traditional cars. 

‘Straight to the grinder’ 

Sandy Munro, head of Michigan-based Munro & Associates, which tears down vehicles and advises automakers on how to improve them, said the Model Y battery pack has “zero repairability.” 

“A Tesla structural battery pack is going straight to the grinder,” Munro said. 

EV battery problems also expose a hole in the green “circular economy” touted by carmakers. 

At Synetiq, the UK’s largest salvage company, head of operations Michael Hill said over the last 12 months the number of EVs in the isolation bay – where they must be checked to avoid fire risk – at the firm’s Doncaster yard has soared, from perhaps a dozen every three days to up to 20 per day. 

“We’ve seen a really big shift and it’s across all manufacturers,” Hill said. 

The UK currently has no EV battery recycling facilities, so Synetiq has to remove the batteries from written-off cars and store them in containers. Hill estimated at least 95% of the cells in the hundreds of EV battery packs – and thousands of hybrid battery packs – Synetiq has stored at Doncaster are undamaged and should be reused. 

It already costs more to insure most EVs than traditional cars. 

According to online brokerage Policygenius, the average U.S. monthly EV insurance payment in 2023 is $206, 27% more than for a combustion-engine model. 

According to Bankrate, an online publisher of financial content, U.S. insurers know that “if even a minor accident results in damage to the battery pack … the cost to replace this key component may exceed $15,000.” 

A replacement battery for a Tesla Model 3 can cost up to $20,000, for a vehicle that retails at around $43,000 but depreciates quickly over time. 

Andy Keane, UK commercial motor product manager at French insurer AXA, said expensive replacement batteries “may sometimes make replacing a battery unfeasible.” 

There are a growing number of repair shops specializing in repairing EVs and replacing batteries. In Phoenix, Arizona, Gruber Motor Co has mostly focused on replacing batteries in older Tesla models. 

But insurers cannot access Tesla’s battery data, so they have taken a cautious approach, owner Peter Gruber said. 

“An insurance company is not going to take that risk because they’re facing a lawsuit later on if something happens with that vehicle and they did not total it,” he said. 

‘Pain points’ 

The British government is funding research into EV insurance “pain points” led by Thatcham, Synetiq and insurer LV=. 

Recently adopted EU battery regulations do not specifically address battery repairs, but they did ask the European Commission to encourage standards to “facilitate maintenance, repair and repurposing,” a commission source said. 

Insurers said they know how to fix the problem — make batteries in smaller sections, or modules, that are simpler to fix, and open diagnostics data to third parties to determine battery cell health. 

Individual U.S. insurers declined to comment. 

But Tony Cotto, director of auto and underwriting policy at the National Association of Mutual Insurance Companies, said “consumer access to vehicle-generated data will further enhance driver safety and policyholders’ satisfaction … by facilitating the entire repair process.” 

Lack of access to critical diagnostic data was raised in mid-March in a class action filed against Tesla in U.S. District Court in California. 

Insurers said failure to act will cost consumers. 

EV battery damage makes up just a few percent of Allianz’s motor insurance claims, but 8% of claims costs in Germany, Lauterwasser said. Germany’s insurers pool vehicle claims data and adjust premium rates annually. 

“If the cost for a certain model gets higher it will raise premium levels because the rating goes up,” Lauterwasser said. 

(Reporting by Nick Carey and Sarah McFarlane in London, Paul Lienert in Detroit, Gilles Guillaume in Paris and Giulio Piovaccari in Milan. Additional reporting by Victoria Waldersee in Berlin. Editing by Ben Klayman and Matthew Lewis) 

via Autoblog https://ift.tt/J9WV1t3

March 20, 2023 at 11:34AM

ChatGPT Can Turn Pokemon Emerald Into A Text-Based Adventure Game

https://www.gamespot.com/articles/chatgpt-can-turn-pokemon-emerald-into-a-text-based-adventure-game/1100-6512442/


If you ever thought Pokemon would be fun as a text adventure game, you’re in luck: someone has used GPT-4 to turn Pokemon Emerald into one.

As spotted by Polygon, Twitter user Dan Dangond has recently been using the newest version of OpenAI’s language model to turn Pokemon Emerald into a text-based adventure game. Dangond shared the discovery on Twitter, noting that you can simply ask the software to play the game, and in a thread observed how well it worked, and sometimes how well it didn’t.

The first tweet starting the Pokemon journey is almost like a speedrun, putting you right into the action where Emerald’s Professor Birch is being chased by a Poochyena and prompting you to choose from the three starters. There was no wrong choice of starter, but Dangond opted for Mudkip, and battle options were presented as numbers to choose from.

It all looks like it works pretty smoothly. Dangond even tested out using Water Gun, but GPT-4 knew Mudkip can’t learn that move until level 10, so it didn’t use it. Interestingly, for those who hate the grind, you can just ask it to run a training montage, boosting Mudkip to level 8.

Later on, Dangond asked the software to head to a particular route and simply simulate what would happen there, which brought his Mudkip to level 10 and automatically caught a Ralts. That does make catching ’em all a lot quicker, but it removes a lot of the challenge. There were some other problems along the way, like the software not knowing the positions of certain routes and towns. At one point it also didn’t know that Nincada is a Bug/Ground type and didn’t account for the extra damage when Mudkip used Water Gun, but it did take these things into account when corrected.

ChatGPT is being used in a variety of ways at a time when AI-driven software is becoming more popular and more powerful. Recently it was used to solve a Fortnite mystery, but ultimately its use in games won’t produce anything like a GTA-killer any time soon.

via GameSpot’s PC Reviews https://ift.tt/nmqCz4U

March 17, 2023 at 10:18AM