OpenAI Says Stonemasons Should Be Fine in the Brave New World

https://gizmodo.com/openai-ai-chatbot-coding-writers-jobs-penn-1850244785


A new paper released Monday says that around 80% of the U.S. workforce will see the impact of large language models on their work. While some will only experience a moderate impact on their day-to-day workload, close to 20% of those working today will likely find about half of their tasks automated to some extent by AI.

The paper, posted to Cornell University’s arXiv preprint server, was led by several OpenAI researchers working alongside a researcher at the nonprofit lab OpenResearch, which is chaired by OpenAI CEO Sam Altman, and a professor at the University of Pennsylvania. With the release of GPT-4, the latest version of OpenAI’s LLM, the company is already promoting how well it can score on tests like the Biology Olympiad, but this report focuses on applications that are already feasible with current LLMs. AI already has text- and code-generation capabilities (even if AI developers are still trying to convince people not to trust the content their AI creates), and its implications for art, speech, and video are routinely discussed.

All in all, the paper steers clear of making any declarative statements about job impacts. It instead analyzes which jobs are more likely to have some “exposure” to AI generation, meaning the AI could cut the time needed to complete a job’s common tasks by at least 50%. Most high-paying white-collar fields will find AI pushing into their work. Those in science or “critical thinking” fields, as the paper calls them, will have less exposure, pointing to modern AI’s limitations in creating novel content. Programmers and writers, on the other hand, are likely to see quite a lot of exposure.

Though the paper describes this “exposure” to AI without identifying whether it has any real labor-displacing effects, it’s not hard to identify companies that are already looking at AI as a way to reduce labor costs. CNET’s parent company Red Ventures has been put under the microscope for its use of AI-written articles, after those articles were discovered to be riddled with inaccuracies. Earlier this month, CNET laid off around a dozen staffers. According to The Verge, editor-in-chief Connie Guglielmo became the company’s senior VP of AI content strategy.

Of course, there’s a difference between white-collar workers playing around with ChatGPT and the workplace demanding workers use a chatbot to automate their work. An IBM and Morning Consult survey from last year said 66% of companies worldwide are either using or exploring AI, and that was before the AI hype train took on more coal at the end of last year with ChatGPT. The survey said 42% of that adoption was driven by the need to reduce costs and “automate key processes.” You can very well argue that programmers and writers engage in that previously mentioned “critical thinking” as much as anyone in any other field, but will some managers and company owners see it the same way if they’re told AI can help them reduce head counts?

And of course, your average blue-collar job won’t see any real impact. The paper singles out a few of these jobs, making oddly specific mention of derrick operators, cutlers, pile driver operators, and stonemasons. Those without higher-education degrees will experience less impact from AI, and some of these unaffected jobs, like short order cooks or dishwashers, are already far on the lower end of the pay scale. It’s still unlikely that these jobs will be re-evaluated in a way that finally offers those workers a living wage, even as other jobs suffer.

And as these models improve over time, the effect will only grow. The U.S. Chamber of Commerce has called for relatively light-touch regulation, but the Biden administration’s own Blueprint for an AI Bill of Rights says people should be able to “opt out” of AI systems in favor of human alternatives. The blueprint does not really mention any ways to mitigate the impacts AI could have on the labor market. The report notes the “difficulty” of regulating AI because the technology is constantly shifting, but it shies away from offering any general principles lawmakers could follow.

Sure, the paper doesn’t analyze the likelihood of a job being substituted by AI, but it doesn’t take much to get there. The paper describes most jobs by their simple “tasks” rather than their application, which is a problem when you’re trying to discuss whether an AI can perform at the same level as a human. Simply put, at this point, AI-generated content is nowhere near surpassing the quality or originality of human-created content.

A report from the US-EU Trade and Technology Council published by the White House last December notes that AI could “expos[e] large new swaths of the workforce to potential disruption.” That report said that while past automation affected “routine” tasks, AI can impact “nonroutine” tasks. Now it’s up to employers to decide how many of those “nonroutine” tasks they think they still need actual humans to perform.

via Gizmodo https://gizmodo.com

March 20, 2023 at 03:29PM
