10 reasons why AI may be overrated

The logo of the ChatGPT application developed by U.S. artificial intelligence research organization OpenAI on a smartphone screen and the letters "AI" on a laptop screen. (Kirill Kudryavtsev/AFP via Getty Images)

Is artificial intelligence overrated? Ever since ChatGPT heralded the explosion of generative AI in late 2022, the technology has seen incredible hype in the industry and media. And countless investors have poured billions and billions of dollars into it and related companies.

But a growing chorus of naysayers is expressing doubts about how game-changing generative AI will actually be for the economy.

The discord over AI recently inspired a two-part series on our daily podcast, The Indicator from Planet Money. Co-host Darian Woods and I decided to debate the question: Is AI overrated or underrated?

Because there is quite a bit of uncertainty over how much AI will ultimately affect the economy — and because neither of us really wanted to regret making dumb prognostications — we chose to obscure our personal opinions on the matter. We flipped an AI-generated coin to determine which side of this debate each of us would take. I got "AI is overrated."

I spoke to Massachusetts Institute of Technology economist Daron Acemoglu, who has emerged as one of AI's leading skeptics. I asked Acemoglu whether he thought generative AI would usher in revolutionary changes to the economy within the next decade.

"No. No. Definitely not," Acemoglu said. "I mean, unless you count a lot of companies over-investing in generative AI and then regretting it, a revolutionary change."

Ouch. That implies we've seen a massive financial bubble inflate before our very eyes (note that this interview was conducted before the recent stock market plunge, which may or may not have something to do with expectations about AI).

So why might AI be overrated? To make my polemical case, I ended up assembling a pretty long list of reasons. We couldn't fit it all in a short episode. So we decided to provide here a fuller list of reasons that AI may be overrated (complete with strongly worded arguments). Here you go:

Reason 1: The artificial intelligence we have now isn't actually that intelligent.

When you first use something like ChatGPT, it might seem like magic. Like, "Wow, a real thinking machine able to answer questions about anything."

But when you look behind the curtain, it's more like a magic trick. These chatbots are a fancy way of aggregating the internet and then spitting out a mishmash of what they find. Put simply, they're copycats or, at least, fundamentally dependent on mimicking past human work and not capable of generating great new ideas.

And perhaps the worst part is that much of the stuff that AI is copying is copyrighted. AI companies took people's work and fed it into their machines, often without authorization. You could argue it's like systematic plagiarism.

That's why there are at least 15 high-profile lawsuits against AI companies asserting copyright infringement. In one case, The New York Times v. OpenAI, the evidence suggests that, in some instances, ChatGPT literally spit out passages of news articles verbatim without attribution.

Fearing that this really is a violation of copyright law, AI companies have begun paying media companies for their content. At the same time, many other companies have been taking actions to prevent AI companies from harvesting their data. This could pose a big problem for these AI models, which rely on human-generated data to cosplay as thinking machines.

The reality is that generative AI is nowhere near the holy grail of AI researchers — what's known as artificial general intelligence (AGI). What we have now, well, is way more lame. As the technologist Dirk Hohndel has said, these models are just "autocorrect on steroids." They are statistical models for prediction based on patterns found in data. Sure, that can have some cool and impressive applications. But "artificial pattern spotter" — or the more traditional "machine learning" moniker — seems like a better description than "artificial intelligence."
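
To make the "autocorrect on steroids" point concrete, here's a toy sketch of next-word prediction. This is purely an illustration (not any real chatbot's code, and the tiny "training text" is made up); real models are vastly larger neural networks, but the core task is the same: predict the next word from statistical patterns found in training data.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which,
# then predict the next word as the most common follower.
# The training text here is invented for illustration.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the next word: just the most frequent follower in the data."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat", the most common pattern after "the"
print(predict_next("sat"))  # -> "on"
```

Notice there's no notion of truth or meaning anywhere in there, just counting and pattern-matching. Scale that idea up by many orders of magnitude and you get something closer to today's chatbots: fluent-sounding, with no built-in sense of true or false.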

These systems don't have judgment or reasoning. They have a hard time doing basic things like math. They don't know right from wrong. They don't know true from false.

Which brings us to …

Reason 2: AI lies.

The AI industry and the media have come to call AI-generated falsehoods and errors "hallucinations." But like the term "artificial intelligence," that might be a misnomer, because it makes it sound like the technology, you know, works well almost always — and then every once in a while, it likes to drink some ayahuasca or eat some mushrooms, and then it says some trippy, made-up stuff.

But AI hallucinations seem to be more common than that (and, to be fair, a growing number of folks have begun calling them "confabulations"). One study suggests that AI chatbots hallucinate — or confabulate — somewhere between 3% and 27% of the time. Whoa, looks like AI should lay off the ayahuasca.

AI hallucinations have been creating embarrassments for companies. For example, Google recently had to revamp its "AI Overviews" feature after it started making ridiculous errors, like telling users to put glue in pizza sauce and that it's healthy to eat rocks. Why did it recommend that people eat rocks? Probably because an article from the satirical website The Onion was in its training data. Because these systems aren't actually intelligent, the satire tripped them up.

Hallucinations make these systems unreliable. The industry is taking this seriously and working to reduce errors, and there may be some progress on that front. But — because these models don't know true from false and just mindlessly spit out words based on patterns in data — many AI researchers and technologists believe we won't be able to fix the problem of hallucinations anytime soon, if ever, with these models.

Reason 3: Because AI isn't very intelligent and hallucinations make it unreliable, it's proving incapable of doing most — if not all — human jobs.

I recently reported a story that asked, “If AI is so good, why are there still so many jobs for translators?” Language translation has been at the vanguard of AI research and development for a decade or more. And some have predicted that translator jobs would be among the first to be automated away.

But despite advances in AI, the data suggests that jobs for human translators and interpreters are actually growing. Sure, translators are increasingly using AI as a tool at their jobs. But my reporting revealed that AI is just not smart enough, not socially aware enough and not reliable enough to replace humans most of the time.

And this seems to be true for a whole host of other jobs.

For example, drive-through attendants. For close to three years, McDonald's piloted a program using AI at some of its drive-throughs. It became a bit of an embarrassment. A bunch of viral videos showed the AI making bizarre errors, like trying to add $222 worth of chicken nuggets to someone's order and putting bacon on someone's ice cream.

I like how New York Times journalist Julia Angwin put it. Generative AI, she says, "could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests. Companies that can get by with Roomba-quality work will, of course, still try to replace workers. But in workplaces where quality matters … A.I. may not make significant inroads."

Reason 4: AI's capabilities have been exaggerated.

You may remember news stories from last year proclaiming that AI did really well on the Uniform Bar Exam for lawyers. OpenAI, the company behind ChatGPT, claimed that GPT-4 scored in the 90th percentile. But researcher Eric Martinez, then at MIT, dug deeper. He found that it scored only in the 48th percentile. Is that actually impressive when these systems, with their ample training data, have the equivalent of a Google search at their fingertips? Heck, maybe even I could score that well if I had access to previous bar exams and other ways to cheat.

Google, meanwhile, claimed that its AI was able to unearth more than 2 million chemical compounds previously unknown to science. But researchers at the University of California, Santa Barbara found that this claim was mostly bogus. Maybe that study is wrong. Or, more likely, maybe the AI industry is overhyping its products' capabilities.

Even more alarming, AI has been touted as incredible at writing computer code. Like jobs for translators, jobs for computer coders were supposedly in jeopardy because AI was so good at coding. But researchers have found that much of the code AI generates is not very good. Sure, AI is making coders more productive, but the quality seems to be going down. One study from researchers at Stanford University found that coders who used AI assistants "wrote significantly less secure code." Researchers at Bilkent University found that more than 30% of AI-generated code was incorrect and another 23% was partially incorrect.

A recent poll of developers found that roughly half of them had concerns about the quality and security of AI-generated code.

Reason 5: Despite all the media and investor mania about AI over the last few years, AI use remains surprisingly limited.

In a recent study, the U.S. Census Bureau found that only around 5% of businesses had used AI in the previous couple of weeks. Which relates to ...

Reason 6: We have yet to find AI's killer app.

The relatively small percentage of companies that are actually using AI don't seem to be using it in a way that will have profound benefits for our economy. Some are experimenting with it. But those that have incorporated it into their day-to-day business are mostly using it for things like personalized marketing and automated customer service. Not very exciting.

In fact, I don't know about you, but I'd rather talk to a human customer service agent than a chatbot. Acemoglu has called this sort of automation "so-so automation," where companies replace humans with machines not because they're better or more productive but because it saves them money. Like self-checkout kiosks at grocery stores, AI chatbots in customer service often just shift more work to customers. It can be frustrating.

So, yeah, we're not seeing a killer app for AI yet. Actually, it's plausible that the most impactful real-world applications of AI will be scams, misinformation and threats to democracy. Overrated!

Reason 7: Productivity growth remains super disappointing. And generative AI may not help it get much better anytime soon.

If AI were really revolutionizing the economy, we'd likely see a surge in productivity growth and an increase in unemployment. But the surge in productivity growth is nowhere to be seen. And unemployment is at near-record lows. Even for the white-collar jobs that AI is most likely to affect, we're not seeing evidence that AI is killing them off.

While generative AI may be incapable of replacing humans in most or virtually all jobs, it clearly can help humans in some professions as an information tool. And, you might say, its productivity benefits could take time to filter throughout the economy.

But there are good reasons to believe that generative AI won't revolutionize our economy anytime soon.

In a recent paper, Acemoglu estimated generative AI's potential effects on the economy over the next decade. "The paper was written out of a belief that some of the effects of AI are being exaggerated," Acemoglu says.

First off, Acemoglu says, there are just humongous chunks of the economy that generative AI will barely touch. Construction, food and accommodations, factories and so on. Generative AI, in Acemoglu's view, will be unable to do most tasks outside of an office within the next decade. (Note that generative AI is distinct from the technology behind self-driving cars, what's known as "reinforcement learning." Acemoglu says he has little doubt that self-driving cars are coming, but he's unsure about the timeline. His focus in this recent paper is new AI advances that have captured our collective imagination over the last couple of years.)

Then Acemoglu zeroes in on office work and finds that there are just a whole bunch of tasks that current AI models are incapable of doing. They're just too dumb and unreliable. At best, they're proving to be just a tool that office workers can use to — maybe — become slightly better at their jobs. Acemoglu finds that AI will affect less than 5% of human tasks in the economy. Less than 5%! And, here, there will be only some mild cost savings.

In the end, Acemoglu predicts that generative AI won't boost productivity or economic growth much within the next decade. He estimates that, at best, it could increase gross domestic product by around 1.5% over 10 years. That's "nothing to be sneered at," Acemoglu says. "But it's not revolutionary in any shape or form."

Reason 8: AI may not be improving as fast as many people claim it is. In fact, AI may be running out of juice.

Whenever we talk about AI, the conversation always seems to turn to the future.

Like, sure, it’s not that good yet. But in a few years, we're all gonna be out of work and bowing down to our robot overlords or whatever. But where is the evidence that points to that? Is this just our collective conditioning by science fiction movies?

There has been a lot of talk about AI improving really fast. Some claim it's getting exponentially better. Others even claim these models — highfalutin autocomplete — are the road to AGI, or even to artificial superintelligence.

But there are serious questions about all of this. In fact, evidence suggests that the rate of progress in AI may be slowing down.

First, progress in making these models better has depended, in large part, on throwing lots and lots of data at them. One big problem: They've already basically consumed the entire internet.

And, as already stated, that included consuming a bunch of copyrighted works. What happens if the courts say, “No way, you can't just use copyrighted data without authorization”?

Meanwhile, companies, annoyed by AI's penchant for expropriating their data, have started restricting use of their data. One group of researchers recently called it an "emerging crisis in consent."

What's more, there are questions about the quality of the data in these systems. Sites like The Onion and 4chan may help these systems mimic online humans, but they may not help them deliver real, beneficial applications in the economy.

But even if AI companies get over these humps, there's the reality that there's only so much data out there. Researchers are scrambling to figure out ways to get more data. They're talking about things like creating "synthetic data" and so on. But progress on this front is a big question mark.

Second, there's a scarcity of the special microchips needed to power AI. That's another huge cost and headache for AI companies. Sam Altman, the CEO of OpenAI, has been trying to convince investors to fork over trillions of dollars — trillions! — to revamp the global semiconductor industry and make other investments to improve ChatGPT. Is it worth it? Will investors actually get their money back? I dunno.

Third, the data centers that power AI require an ungodly amount of electricity. This is a huge cost for these companies. Are they going to be able to recoup the money it takes to build and power all these data centers? Will consumers be willing to pay the high cost of running AI? It's a fundamental problem with these companies' business model. But it's also a fundamental problem for America's electricity grid and the environment.

Reason 9: AI could be really bad for the environment.

AI already consumes enough energy to power a small country. Researchers at Goldman Sachs found that "the proliferation of generative AI technology — and the data centers needed to feed it — is set to drive an increase in US power demand not seen in a generation."

"One of the silliest things around a couple of years ago was this idea that AI would help solve the climate change problem," Acemoglu says. "I never understood exactly how. But, you know, it's clear it's gonna do something to climate change, but it's not on the positive side."

Reason 10: AI is overrated because humans are underrated.

When I asked Acemoglu for his top reasons why AI was overrated, he told me something that warmed my heart — a feeling that dumb "artificial intelligence" could never experience.

Acemoglu told me he believed AI is overrated because humans are underrated. "So a lot of people in the industry don't recognize how versatile, talented, multifaceted human skills and capabilities are," Acemoglu says. "And once you do that, you tend to overrate machines ahead of humans and underrate the humans."

Go, Team Human!

***

Major caveat to all of the above: I've made the strongest case against generative AI that I could make because that was my assignment (thanks to an AI-generated coin flip).

There are countless investors and technologists and economists out there who are bullish on this technology (for some of those arguments, listen to my colleague Darian Woods' episode on why AI is underrated, or read some of my previous newsletters that probe potential upsides and benefits of AI technology). 

Going forward, I will go back to being less derisive and more open-minded to the pros and cons of this technology — at least until our AI robot overlords take over the Planet Money newsletter and destroy my livelihood.

Copyright 2024 NPR

Since 2018, Greg Rosalsky has been a writer and reporter at NPR's Planet Money.