Is ChatGPT Closer to a Human Librarian Than It Is to Google?

https://gizmodo.com/chatgpt-ai-openai-like-a-librarian-search-google-1850238908



The prominent model of information access and retrieval before search engines became the norm – librarians and subject or search experts providing relevant information – was interactive, personalized, transparent and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal.

A new generation of artificial intelligence-based information access systems, which includes Microsoft’s Bing/ChatGPT, Google/Bard and Meta/LLaMA, is upending the traditional search engine mode of search input and output. These systems are able to take full sentences and even paragraphs as input and generate personalized natural language responses.

At first glance, this might seem like the best of both worlds: personable and custom answers combined with the breadth and depth of knowledge on the internet. But as a researcher who studies search and recommendation systems, I believe the picture is mixed at best.

AI systems like ChatGPT and Bard are built on large language models. A language model is a machine-learning technique that uses a large body of available texts, such as Wikipedia and PubMed articles, to learn patterns. In simple terms, these models figure out what word is likely to come next, given a set of words or a phrase. In doing so, they are able to generate sentences, paragraphs and even pages that correspond to a query from a user. On March 14, 2023, OpenAI announced the next generation of the technology, GPT-4, which works with both text and image input, and Microsoft announced that its conversational Bing is based on GPT-4.
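To make the “what word comes next” idea concrete, here is a minimal sketch of that mechanism using simple word-pair counts over a tiny invented corpus. Real LLMs use neural networks trained on billions of subword tokens rather than bigram counts, but the underlying question – which token most plausibly comes next? – is the same.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the huge text collections real models use.
corpus = "the cat sat on the mat . the cat chased a dog . the cat slept .".split()

# Count how often each word follows each other word (a bigram model).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it follows 'the' three times; 'mat' only once)
```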


Thanks to training on large bodies of text, fine-tuning and other machine learning-based methods, this type of information retrieval technique works quite effectively. The large language model-based systems generate personalized responses to fulfill information queries. People have found the results so impressive that ChatGPT reached 100 million users in one-third of the time it took TikTok to get to that milestone. People have used it not only to find answers but also to generate diagnoses, create dieting plans and make investment recommendations.

ChatGPT’s opacity and AI ‘hallucinations’

However, there are plenty of downsides. First, consider what is at the heart of a large language model – a mechanism through which it connects words and, presumably, their meanings. This produces output that often reads like an intelligent response, but large language model systems are known to parrot statements without real understanding. So while the generated output from such systems might seem smart, it is merely a reflection of underlying patterns of words the AI has found in an appropriate context.

This limitation makes large language model systems susceptible to making up or “hallucinating” answers. The systems are also not smart enough to recognize when a question rests on a false premise, and will answer such faulty questions anyway. For example, when asked which U.S. president’s face is on the $100 bill, ChatGPT answers Benjamin Franklin without realizing that Franklin was never president – and that the premise that the $100 bill carries a picture of a U.S. president is itself incorrect.

The problem is that even when these systems are wrong only 10% of the time, you don’t know which 10%. People also don’t have the ability to quickly validate the systems’ responses. That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they have used to come up with answers or how those responses are generated.

For example, you could ask ChatGPT to write a technical report with citations. But often it makes up these citations – “hallucinating” the titles of scholarly papers as well as the authors. The systems also don’t validate the accuracy of their responses. This leaves the validation up to the user, and users may not have the motivation or skills to do so or even recognize the need to check an AI’s responses. ChatGPT doesn’t know when a question doesn’t make sense, because it doesn’t know any facts.
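For users who do want to check, one lightweight option is to look a claimed citation up in a bibliographic database. The sketch below queries Crossref’s public REST API (a real service); the paper title is deliberately invented, and the crude substring match is only meant to illustrate the idea, not to serve as a robust verifier.

```python
import requests

def citation_seems_real(claimed_title: str) -> bool:
    """Ask Crossref whether any indexed work closely matches the claimed title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": claimed_title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Crude check: does any returned title contain the claimed one?
    return any(
        claimed_title.lower() in title.lower()
        for item in items
        for title in item.get("title", [])
    )

# An invented, hallucination-style citation should come back False.
print(citation_seems_real("A Survey of Imaginary Results in Neural Search"))
```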

AI stealing content – and traffic

While lack of transparency can be harmful to the users, it is also unfair to the authors, artists and creators of the original content from whom the systems have learned, because the systems do not reveal their sources or provide sufficient attribution. In most cases, creators are not compensated or credited or given the opportunity to give their consent.

There is an economic angle to this as well. In a typical search engine environment, the results are shown with links to the sources. This not only allows the user to verify the answers and provides attribution to those sources, it also generates traffic for those sites. Many of these sources rely on this traffic for their revenue. Because large language model systems produce direct answers without citing the sources they drew from, I believe those sites are likely to see their revenue streams diminish.

Large language models can take away learning and serendipity

Finally, this new way of accessing information can also disempower people and take away their chance to learn. A typical search process allows users to explore the range of possibilities for their information needs, often prompting them to adjust what they’re looking for. It also affords them an opportunity to learn what is out there and how various pieces of information connect to accomplish their tasks. And it allows for accidental encounters, or serendipity.

These are very important aspects of search, but when a system produces the results without showing its sources or guiding the user through a process, it robs them of these possibilities.

Large language models are a great leap forward for information access, providing people with a way to have natural language-based interactions, produce personalized responses and discover answers and patterns that are often difficult for an average user to come up with. But they have severe limitations due to the way they learn and construct responses. Their answers may be wrong, toxic or biased.

While other information access systems can suffer from these issues, too, large language model AI systems also lack transparency. Worse, their natural language responses can help fuel a false sense of trust and authoritativeness that can be dangerous for uninformed users.



Chirag Shah, Professor of Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

via Gizmodo https://gizmodo.com

March 19, 2023 at 07:07AM

OpenAI Says Stonemasons Should Be Fine in the Brave New World

https://gizmodo.com/openai-ai-chatbot-coding-writers-jobs-penn-1850244785


A new paper released Monday says that 80% of the U.S. workforce will see large language models affect at least some of their work. While some will only experience a moderate impact on their day-to-day workload, close to 20% of those working today will likely find about half of their tasks automated to some extent by AI.

The paper, posted to the arXiv preprint server hosted by Cornell University, was led by several OpenAI researchers working alongside a researcher at nonprofit lab OpenResearch, which is chaired by OpenAI CEO Sam Altman, and a professor at the University of Pennsylvania. With the release of GPT-4, the latest version of OpenAI’s LLM, the company is already promoting how well it can score on tests like the Biology Olympiad, but this report instead analyzes the applications that are feasible with current LLMs. AI already has text and code-generation capabilities (even if AI developers are still trying to convince people not to trust the content their AI creates), as well as the routinely discussed implications for art, speech, and video.

The paper steers clear of making any declarative statements about job impacts. It instead analyzes which jobs are likely to have some “exposure” to AI generation, meaning access to an LLM could cut the time needed to complete a job’s common tasks by at least half. By that measure, most high-paying white collar work will find AI pushing into its field. Those in science or “critical thinking” fields, as the paper calls them, will have less exposure, pointing to modern AI’s limitations in creating novel content. Programmers and writers, on the other hand, are likely to see quite a lot of exposure.
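To see how such an exposure score might be computed, here is a toy illustration. The jobs, tasks and yes/no judgments below are invented for the example; they are not the paper’s actual rubric or data.

```python
# Mark each task True if (by assumption) an LLM could cut its completion
# time by at least half; a job's exposure is the share of such tasks.
jobs = {
    "technical writer": {
        "draft documentation": True,
        "interview engineers": False,
        "edit for style": True,
        "publish to site": True,
    },
    "stonemason": {
        "cut stone": False,
        "lay courses": False,
        "estimate materials": True,
        "repair joints": False,
    },
}

for job, tasks in jobs.items():
    exposure = sum(tasks.values()) / len(tasks)
    print(f"{job}: {exposure:.0%} of tasks exposed")
# -> technical writer: 75% of tasks exposed
# -> stonemason: 25% of tasks exposed
```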

Though the paper describes this “exposure” to AI without determining whether it has any real labor-displacing effects, it’s not hard to identify companies that are already looking at AI as a way to reduce labor costs. CNET’s parent company Red Ventures has been under the microscope for its use of AI-written articles after it was revealed how inaccurate the articles often were. Earlier this month, CNET laid off around a dozen staffers. According to The Verge, editor-in-chief Connie Guglielmo became the company’s senior VP of AI content strategy.

Of course, there’s a difference between white collar workers playing around with ChatGPT and the workplace demanding workers use a chatbot to automate their work. An IBM and Morning Consult survey from last year said 66% of companies worldwide are either using or exploring AI, and that was before the AI hype train took on more coal at the end of last year with ChatGPT. The survey said 42% of that adoption was driven by the need to reduce costs and “automate key processes.” You can very well argue that programmers and writers engage in that previously mentioned “critical thinking” as much as anyone else, but will some managers and company owners think the same way if they’re told AI can help them reduce head counts?

And of course, your average blue collar job won’t see any real impact. The paper singles out a few of these jobs, with oddly specific mentions of derrick operators, cutlers, pile driver operators, and stonemasons. Those without higher education degrees will experience less impact from AI, and some of these unaffected jobs, like short order cooks or dishwashers, already sit far down the pay scale. It’s still unlikely that these jobs will be re-evaluated with an eye toward finally offering these workers a living wage, even if other jobs suffer.

And as these models improve over time, the effect will only grow. The U.S. Chamber of Commerce has called for some light-touch regulation, but the Biden administration’s own Blueprint for an AI Bill of Rights says people should be able to “opt out” of AI systems in favor of human alternatives. The blueprint does not really mention any way to mitigate the impacts AI could have on the labor market. The paper notes the “difficulty” of regulating AI because of its constant shifts, but it shies away from offering general principles lawmakers could follow.

Sure, the paper doesn’t analyze the likelihood of a job being substituted by AI, but it doesn’t take much to get there. The paper describes most jobs by their simple “tasks” rather than their application, which is a problem when you’re trying to discuss whether an AI can perform at the same level as a human. Simply put, at this point, AI-generated content is nowhere near surpassing the quality or originality of human-created content.

A report from the US-EU Trade and Technology Council published by the White House last December mentions AI can potentially “expos[e] large new swaths of the workforce to potential disruption.” The report noted that while past automation impacted “routine” tasks, AI can impact “nonroutine” tasks. Now it’s up to employers just how many of those “nonroutine” tasks they think they still need actual humans to perform.

via Gizmodo https://gizmodo.com

March 20, 2023 at 03:29PM

Scratched EV battery? Your insurer may have to junk the whole car

https://www.autoblog.com/2023/03/20/ev-battery-packs-insurers-junk-entire-car/


Damaged electric vehicles that have been written off by insurers are lined up at UK salvage company Synetiq’s yard in Doncaster, Britain. (Reuters)

 

LONDON/DETROIT — For many electric vehicles, there is no way to repair or assess even slightly damaged battery packs after accidents, forcing insurance companies to write off cars with few miles — leading to higher premiums and undercutting gains from going electric. 

And now those battery packs are piling up in scrapyards in some countries, a previously unreported and expensive gap in what was supposed to be a “circular economy.” 

“We’re buying electric cars for sustainability reasons,” said Matthew Avery, research director at automotive risk intelligence company Thatcham Research. “But an EV isn’t very sustainable if you’ve got to throw the battery away after a minor collision.” 

Battery packs can cost tens of thousands of dollars and represent up to 50% of an EV’s price tag, often making it uneconomical to replace them. 

While some automakers such as Ford and General Motors said they have made battery packs easier to repair, Tesla has taken the opposite tack with its Texas-built Model Y, whose new structural battery pack has been described by experts as having “zero repairability.” 

Tesla did not respond to a request for comment. 

A Reuters search of EV salvage sales in the U.S. and Europe turned up a large portion of low-mileage Teslas, but also models from Nissan, Hyundai, Stellantis, BMW, Renault and others. 

 


 

EVs constitute only a fraction of vehicles on the road, making industry-wide data hard to come by, but the trend of low-mileage zero-emission cars being written off with minor damage is growing. Tesla’s decision to make battery packs “structural” – part of the car’s body – has allowed it to cut production costs but risks pushing those costs back to consumers and insurers. 

Tesla has not referred to any problems with insurers writing off its vehicles. But in January CEO Elon Musk said premiums from third-party insurance companies “in some cases were unreasonably high.” 

Unless Tesla and other carmakers produce more easily repairable battery packs and provide third-party access to battery cell data, already-high insurance premiums will keep rising as EV sales grow and more low-mileage cars get scrapped after collisions, insurers and industry experts said. 

“The number of cases is going to increase, so the handling of batteries is a crucial point,” said Christoph Lauterwasser, managing director of the Allianz Center for Technology, a research institute owned by Allianz. 

Lauterwasser noted that producing an EV battery emits far more CO2 than building a fossil-fuel model, meaning EVs must be driven for thousands of miles before they offset those extra emissions. 

“If you throw away the vehicle at an early stage, you’ve lost pretty much all advantage in terms of CO2 emissions,” he said. 
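Lauterwasser’s point boils down to a break-even calculation. The sketch below uses assumed round numbers – illustrative stand-ins, not Allianz figures – to show why scrapping an EV early erases its CO2 advantage.

```python
# All values are assumptions for illustration, not measured data.
extra_production_co2_kg = 7_000  # extra CO2 to build the EV (mostly the battery)
ice_kg_per_mile = 0.35           # combustion car emissions per mile
ev_kg_per_mile = 0.15            # EV emissions per mile on an average grid mix

savings_per_mile = ice_kg_per_mile - ev_kg_per_mile
breakeven_miles = extra_production_co2_kg / savings_per_mile
print(f"CO2 break-even after ~{breakeven_miles:,.0f} miles")  # ~35,000 miles

# A car scrapped at 5,000 miles has repaid only a sliver of its carbon debt.
print(f"Offset so far: {5_000 * savings_per_mile:,.0f} kg of {extra_production_co2_kg:,} kg")
```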

Most carmakers said their battery packs are repairable, though few seem willing to share access to battery data. Insurers, leasing companies and car repair shops are already fighting with carmakers in the EU over access to lucrative connected-car data. 

Lauterwasser said access to EV battery data is part of that fight. Allianz has seen scratched battery packs where the cells inside are likely undamaged, but without diagnostic data it has to write off those vehicles. 

Ford and GM tout their newer, more repairable packs. But the new, large 4680 cells in the Model Y made at Tesla’s Austin, Texas, plant, are glued into a pack that forms part of the car’s structure and cannot be easily removed or replaced, experts said. 

In January, Tesla’s Musk said the carmaker has been making design and software changes to its vehicles to lower repair costs and insurance premiums. 

The company also offers its own insurance product in a dozen U.S. states to Tesla owners at lower rates. 

Insurers and industry experts also note that EVs, because they are loaded with all the latest safety features, so far have had fewer accidents than traditional cars. 

‘Straight to the grinder’ 

Sandy Munro, head of Michigan-based Munro & Associates, which tears down vehicles and advises automakers on how to improve them, said the Model Y battery pack has “zero repairability.” 

“A Tesla structural battery pack is going straight to the grinder,” Munro said. 

EV battery problems also expose a hole in the green “circular economy” touted by carmakers. 

At Synetiq, the UK’s largest salvage company, head of operations Michael Hill said over the last 12 months the number of EVs in the isolation bay – where they must be checked to avoid fire risk – at the firm’s Doncaster yard has soared, from perhaps a dozen every three days to up to 20 per day. 

“We’ve seen a really big shift and it’s across all manufacturers,” Hill said. 

The UK currently has no EV battery recycling facilities, so Synetiq has to remove the batteries from written-off cars and store them in containers. Hill estimated at least 95% of the cells in the hundreds of EV battery packs – and thousands of hybrid battery packs – Synetiq has stored at Doncaster are undamaged and should be reused. 

It already costs more to insure most EVs than traditional cars. 

According to online brokerage Policygenius, the average U.S. monthly EV insurance payment in 2023 is $206, 27% more than for a combustion-engine model. 

According to Bankrate, an online publisher of financial content, U.S. insurers know that “if even a minor accident results in damage to the battery pack … the cost to replace this key component may exceed $15,000.” 

A replacement battery for a Tesla Model 3 can cost up to $20,000, for a vehicle that retails at around $43,000 but depreciates quickly over time. 
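The write-off arithmetic the article implies is simple. In the sketch below, the 70% total-loss threshold is a commonly cited rule of thumb rather than any insurer’s quoted figure, and the depreciated values are assumed.

```python
BATTERY_REPLACEMENT = 20_000  # replacement pack cost cited for a Model 3

def written_off(repair_cost: float, car_value: float, threshold: float = 0.70) -> bool:
    """Insurers typically total a car when repairs exceed a set share of its value."""
    return repair_cost >= threshold * car_value

for value in (43_000, 35_000, 28_000):  # new price, then assumed depreciation steps
    print(f"${value:,} car: total loss? {written_off(BATTERY_REPLACEMENT, value)}")
# As the car depreciates, the same battery damage tips it into a total loss:
# $43,000 -> False, $35,000 -> False, $28,000 -> True
```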

Andy Keane, UK commercial motor product manager at French insurer AXA, said expensive replacement batteries “may sometimes make replacing a battery unfeasible.” 

A growing number of repair shops specialize in EVs and battery replacement. In Phoenix, Arizona, Gruber Motor Co has mostly focused on replacing batteries in older Tesla models. 

But insurers cannot access Tesla’s battery data, so they have taken a cautious approach, owner Peter Gruber said. 

“An insurance company is not going to take that risk because they’re facing a lawsuit later on if something happens with that vehicle and they did not total it,” he said. 

‘Pain points’ 

The British government is funding research into EV insurance “pain points” led by Thatcham, Synetiq and insurer LV=. 

Recently adopted EU battery regulations do not specifically address battery repairs, but they did ask the European Commission to encourage standards to “facilitate maintenance, repair and repurposing,” a commission source said. 

Insurers said they know how to fix the problem — make batteries in smaller sections, or modules, that are simpler to fix, and open diagnostics data to third parties to determine battery cell health. 

Individual U.S. insurers declined to comment. 

But Tony Cotto, director of auto and underwriting policy at the National Association of Mutual Insurance Companies, said “consumer access to vehicle-generated data will further enhance driver safety and policyholders’ satisfaction … by facilitating the entire repair process.” 

Lack of access to critical diagnostic data was raised in mid-March in a class action filed against Tesla in U.S. District Court in California. 

Insurers said failure to act will cost consumers. 

EV battery damage makes up just a few percent of Allianz’s motor insurance claims, but 8% of claims costs in Germany, Lauterwasser said. Germany’s insurers pool vehicle claims data and adjust premium rates annually. 

“If the cost for a certain model gets higher it will raise premium levels because the rating goes up,” Lauterwasser said. 

(Reporting by Nick Carey and Sarah McFarlane in London, Paul Lienert in Detroit, Gilles Guillaume in Paris and Giulio Piovaccari in Milan. Additional reporting by Victoria Waldersee in Berlin. Editing by Ben Klayman and Matthew Lewis) 

via Autoblog https://ift.tt/J9WV1t3

March 20, 2023 at 11:34AM