
Is ChatGPT a ‘virus that has been released into the wild’?



More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco soon after he’d left his role as the president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so great that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Musk has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points of the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.

Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he believes OpenAI is driving the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s not only what we can expect but exactly what we should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’ [Editor’s note: OpenAI says it’s working on a way to “watermark” AI-generated content, along with other “provenance techniques.”]

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and hold a more orthodox view: anytime we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m much less strong a proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.
