
From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft (with Copilot), Google (with Gemini), and OpenAI (with GPT-4o), are taking AI chatbot technology that was previously restricted to test labs and making it accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
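That "vast autocomplete" framing can be made concrete with a toy sketch. The bigram counter below is illustrative only — real LLMs use neural networks trained over tokens, not word-pair counts — but it shows the same core idea: predicting the next word purely from the statistical properties of text seen during training.

```python
from collections import Counter, defaultdict

# A toy bigram model: nothing like a real LLM in scale or method, but it
# captures the basic idea of next-word prediction from text statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the "training" text.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": the most common word after "the"
```

Scale the corpus up to trillions of tokens and swap the counter for a transformer, and you have the rough shape of an LLM: no hard-coded database of facts, just learned statistics over word sequences.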

There are many more pieces of the AI landscape coming into play (and plenty of name changes — remember when we were talking about Bing and Bard, before those tools were rebranded?), and you can be sure to see it all unfold here on The Verge.

  • AI gets notes from a songwriter.

    Responding to the RIAA’s copyright lawsuit, AI songmaker sites defended their models as being like kids learning rock and roll or tools enabling creativity. Country artist Tift Merritt had a different take after being shown a song AI music generator Udio spat out when prompted to mimic her style:

    ... the “imitation” Udio created “doesn’t make the cut for any album of mine.”

    “This is a great demonstration of the extent to which this technology is not transformative at all ... It’s stealing.”

    I had similar thoughts back in March.


  • Meta courts celebs like Awkwafina to voice AI assistants ahead of Meta Connect

    Illustration by Nick Barclay / The Verge

    Judi Dench, Keegan-Michael Key, and Awkwafina are among multiple “actors and influencers” whose voices could become part of Meta’s AI offering, Bloomberg reported on Friday. The company is apparently working to wrap up deals quickly so it can develop and show off the new voices at its Meta Connect conference in September.

    Specifically, at least one tool will be “a digital assistant product called MetaAI,” according to multiple unnamed sources in a New York Times report. Meta is negotiating with all of the top talent agencies in Hollywood to secure the voices, the Times writes, and it may pay the actors who sign on “millions of dollars.” Meta doled out similarly fat stacks to the celebrities behind the recently discontinued Meta AI chatbots from last year’s Connect.

    Read Article >
  • Reddit CEO says Microsoft needs to pay to search the site

    Illustration by Alex Castro / The Verge

    After striking deals with Google and OpenAI, Reddit CEO Steve Huffman is calling on Microsoft and others to pay if they want to continue scraping the site’s data.

    “Without these agreements, we don’t have any say or knowledge of how our data is displayed and what it’s used for, which has put us in a position now of blocking folks who haven’t been willing to come to terms with how we’d like our data to be used or not used,” Huffman said in an interview this week. He specifically named Microsoft, Anthropic, and Perplexity for refusing to negotiate, saying it has been “a real pain in the ass to block these companies.”

    Read Article >
  • Meta blames hallucinations after its AI said Trump rally shooting didn’t happen

    Former President Donald Trump.
    Image: Laura Normand / The Verge

    Meta’s AI assistant incorrectly said that the recent attempted assassination of former President Donald Trump didn’t happen, an error a company executive is now attributing to the technology powering its chatbot and others.

    In a company blog post published on Tuesday, Joel Kaplan, Meta’s global head of policy, calls the responses of its AI to questions about the shooting “unfortunate.” He says Meta AI was first programmed to not respond to questions about the attempted assassination but the company removed that restriction after people started noticing. He also acknowledges that “in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen – which we are quickly working to address.”

    Read Article >
  • Instagram starts letting people create AI versions of themselves

    AI characters in Instagram.
    Image: Meta

    Meta is opening up the ability for anyone in the US to create AI versions of themselves in Instagram or on the web with a new tool called AI Studio.

    The pitch is that creators and business owners will use these AI profiles to talk to their followers on their behalf. They’ll be able to talk directly with humans in chat threads and respond to comments on their author’s account. Meta says Instagram users in the US can get started with AI Studio via either its website or by starting a new “AI chat” directly in Instagram.

    Read Article >
  • The AI race’s biggest shift yet

    Meta CEO Mark Zuckerberg.
    Illustration by Cath Virginia / The Verge

    The AI race is quickly changing. Focus is shifting away from the models themselves to the products they power, as evidenced by the events of this week.

    First, there was Mark Zuckerberg continuing his scorched-earth campaign to drive down the cost of accessing foundational AI models. Llama 3.1 narrows the gap between the performance of open- and closed-source AI, and Meta claims it costs roughly half as much as OpenAI’s GPT-4o to run. In a video announcing the news, Zuckerberg wore a custom shirt with a quote from Emperor Augustus emblazoned in Latin: “At the age of nineteen, on my own initiative and at my own expense, I raised an army.”

    Read Article >
  • OpenAI’s SearchGPT demo results aren’t actually that helpful.

    The trend of hallucinations showing up in public AI demos continues. As noted by a couple of reporters already, OpenAI’s demo of its new SearchGPT engine shows results that are mostly either wrong or not helpful.

    From The Atlantic’s Matteo Wong:

    In a prerecorded demonstration video accompanying the announcement, a mock user types music festivals in boone north carolina in august into the SearchGPT interface. The tool then pulls up a list of festivals that it states are taking place in Boone this August, the first being An Appalachian Summer Festival, which according to the tool is hosting a series of arts events from July 29 to August 16 of this year. Someone in Boone hoping to buy tickets to one of those concerts, however, would run into trouble. In fact, the festival started on June 29 and will have its final concert on July 27. Instead, July 29–August 16 are the dates for which the festival’s box office will be officially closed. (I confirmed these dates with the festival’s box office.)


  • OpenAI is releasing a prototype of its search engine to rival Google, Perplexity

    Image: OpenAI

    OpenAI is announcing its much-anticipated entry into the search market, SearchGPT, an AI-powered search engine with real-time access to information across the internet.

    The search engine starts with a large textbox that asks the user “What are you looking for?” But rather than returning a plain list of links, SearchGPT tries to organize and make sense of them. In one example from OpenAI, the search engine summarizes its findings on music festivals and then presents short descriptions of the events followed by an attribution link.

    Read Article >
  • Emma Roth

    Jul 24

    Bing’s AI redesign shoves the usual list of search results to the side

    Illustration: The Verge

    Bing’s new search experience puts AI-generated answers front and center while pushing traditional search results to the side. The new layout, which is rolling out for a small number of queries, fills your search results page with AI-generated summaries addressing various aspects of your question.

    Microsoft has shared an early look at what this search experience will look like... and it’s a lot. For the query “What is a spaghetti western?” Bing displays a summary explaining that it’s a “subgenre of western films produced by Italian filmmakers,” along with a series of bullet points of the genre’s key characteristics.

    Read Article >
  • Emma Roth

    Jul 24

    Reddit is now blocking major search engines and AI bots — except the ones that pay

    Illustration: The Verge

    Reddit is ramping up its crackdown on web crawlers. Over the past few weeks, Reddit has started blocking search engines from surfacing recent posts and comments unless the search engine pays up, according to a report from 404 Media.

    Right now, Google is the only mainstream search engine that shows recent results when you search for posts on Reddit using the “site:reddit.com” trick, 404 Media reports. This leaves out Bing, DuckDuckGo, and other alternatives — likely because Google has struck a $60 million deal that lets the company train its AI models on content from Reddit.
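    This kind of blocking is typically done in a site’s robots.txt file. The fragment below is a hypothetical sketch of the pattern — deny all crawlers by default, then allow a specific partner — and is not Reddit’s actual file; whether Reddit allowlists Googlebot this way or enforces its deals by other means isn’t specified in the report.

    ```
    # Hypothetical robots.txt sketch (not Reddit's actual file):
    # block every crawler by default, then allow one specific bot.
    User-agent: *
    Disallow: /

    User-agent: Googlebot
    Allow: /
    ```

    Note that robots.txt is only a request — crawlers that ignore it have to be blocked at the network level instead.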

    Read Article >
  • Emma Roth

    Jul 23

    AI is catching the attention of antitrust watchdogs around the globe.

    Alongside the FTC and the DOJ, the UK and EU’s antitrust authorities have issued a joint statement saying they will work to ensure fair competition in the AI industry.

    One potential issue highlighted by the enforcers is the possibility that AI chipmakers could “exploit existing or emerging bottlenecks,” giving them “outsized influence over the future development” of AI tools.


  • Wes Davis

    Jul 23

    A look at Meta AI running on a Quest 3 headset.

    Demos on this Meta blog show how the company will implement its promise to bring AI to its VR headsets. As with the company’s Ray-Ban smart glasses, you can ask it questions about things you see (in passthrough), and it will answer.

    The experimental feature rolls out in English next month, in the US and Canada (excluding the Quest 2).


  • Meta releases the biggest and best open-source AI model yet

    Image: Cath Virginia / The Verge

    Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

    Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.

    Read Article >
  • AI is confusing — here’s your cheat sheet

    Image: Hugo J. Herrera for The Verge

    Artificial intelligence is the hot new thing in tech — it feels like every company is talking about how it’s making strides by using or developing AI. But the field of AI is also so filled with jargon that it can be remarkably difficult to understand what’s actually happening with each new development.

    To help you better understand what’s going on, we’ve put together a list of some of the most common AI terms. We’ll do our best to explain what they mean and why they’re important.

    Read Article >
  • Figma explains how its AI tool ripped off Apple’s design

    Image: Cath Virginia / The Verge

    Figma recently pulled its “Make Designs” generative AI tool after a user discovered that asking it to design a weather app would spit out something suspiciously similar to Apple’s weather app — a result that could, among other things, land a user in legal trouble. It also suggested that Figma may have trained the feature on Apple’s designs. CEO Dylan Field was quick to say that the company didn’t train the tool on Figma content or app designs, and the company has now published a fuller explanation in a blog post.

    The statement says that Figma “carefully reviewed” Make Designs’ underlying design systems during development and as part of a private beta. “But in the week leading up to Config, new components and example screens were added that we simply didn’t vet carefully enough,” writes Noah Levin, Figma VP of product design. “A few of those assets were similar to aspects of real world applications, and appeared in the output of the feature with certain prompts.”

    Read Article >
  • Emma Roth

    Jul 18

    The biggest names in AI have teamed up to promote AI security

    Illustration: Alex Castro / The Verge

    Google, OpenAI, Microsoft, Amazon, Nvidia, Intel, and other big names in AI are coming together to form the Coalition for Secure AI (CoSAI), according to an announcement on Thursday. The initiative aims to address a “fragmented landscape of AI security” by providing access to open-source methodologies, frameworks, and tools.

    It’s unclear how much impact CoSAI will have on the AI industry, but there’s no shortage of open questions about the security, privacy, and safety of generative AI — from leaked confidential information to automated discrimination.

    Read Article >
  • Anthropic launched an Android app for its Claude AI chatbot.

    You can grab the app from Google Play right now. It’s free and “accessible with all plans, including Pro and Team,” the company says in a blog post.

    Anthropic released an iOS app in May.


  • The pizza part sounds pretty cool.

    I wasn’t expecting to read a dystopian fic about not-so-distant future office culture in our comments, but what other response could you have to a story about an HR company that wanted to treat AI bots like humans?


  • Mia Sato

    Jul 16

    Apple, Anthropic, and other companies used YouTube videos to train AI

    Illustration by Alex Castro / The Verge

    More than 170,000 YouTube videos are part of a massive dataset that was used to train AI systems for some of the biggest technology companies, according to an investigation by Proof News and copublished with Wired. Apple, Anthropic, Nvidia, and Salesforce are among the tech firms that used the “YouTube Subtitles” data that was ripped from the video platform without permission. The training dataset is a collection of subtitles taken from YouTube videos belonging to more than 48,000 channels — it does not include imagery from the videos.

    Videos from popular creators like MrBeast and Marques Brownlee appear in the dataset, as do clips from news outlets like ABC News, the BBC, and The New York Times. More than 100 videos from The Verge appear in the dataset, along with many other videos from Vox.

    Read Article >
  • Google tests out Gemini AI-created video presentations

    Image: Google

    Google is launching its new Vids productivity app in Workspace Labs with the idea that “if you can make a slide, you can make a video in Vids.” Announced in April, Vids allows users to drop docs, slides, voiceovers, and video recordings into a timeline to create a presentation video to share with coworkers. Making it available in the Workspace Labs preview allows Workspace admins to opt in users to try out the AI-powered video maker.

    While you can generate video in Vids, it’s not to be confused with AI tools like OpenAI’s Sora, which can create lifelike footage from a prompt. Instead, Vids is about generating a presentation by describing what you want Gemini to create and then letting you alter the video afterward.

    Read Article >
  • Emma Roth

    Jul 12

    Amazon’s AI shopping assistant rolls out to all users in the US

    Image: Amazon

    Amazon’s AI shopping assistant, Rufus, is rolling out to all users in the US on Amazon’s mobile app. You can pull up the shopping assistant by tapping the orange and blue icon in the right corner of the app’s navigation bar, where Rufus can answer questions, draw comparisons between items, and give you updates on your order.

    Amazon first introduced Rufus in February but only made it available to a small group of users. Rufus uses Amazon’s product listing details, reviews, and community Q&As, along with some information from the web, to inform its answers.

    Read Article >
  • Early Apple tech bloggers are shocked to find their name and work have been AI-zombified

    Christina Warren hasn’t worked at this website since 2009, and that’s not her face.
    Screenshot by Christina Warren

    An old Apple blog and the blog’s former authors have become the latest victims of AI-written sludge. TUAW (“The Unofficial Apple Weblog”) was shut down by AOL in 2015, but this past year, a new owner scooped up the domain and began posting articles under the bylines of former writers who haven’t worked there for over a decade. And that new owner, which also appears to run other AI sludge websites, seems to be trying to hide.

    Christina Warren, who left a long career in tech journalism to join Microsoft and later GitHub as a developer advocate, shared screenshots of what was happening on Tuesday. In the images, you can see that Warren has apparently been writing new posts as of this July — even though she hasn’t worked at TUAW since 2009, she confirms to The Verge.

    Read Article >
  • OpenAI partners with Los Alamos National Laboratory

    OpenAI announced that it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research. I’m a bit disappointed because this was the plot of the science fiction horror book I always wanted to write.

    The goal is to test how GPT-4o can help scientists perform tasks in a lab using vision and voice modalities.


  • The Washington Post made an AI chatbot for questions about climate

    Image: The Washington Post

    The Washington Post is sticking a new climate-focused AI chatbot inside its homepage, app, and articles. The experimental tool, called Climate Answers, will use the outlet’s breadth of reporting to answer questions about climate change, the environment, sustainable energy, and more.

    Some of the questions you can ask the chatbot include things like, “Should I get solar panels for my home?” or “Where in the US are sea levels rising the fastest?” Much like the other AI chatbots we’ve seen, it will then serve up a summary using the information it’s been trained on. In this case, Climate Answers uses the articles within The Washington Post’s climate section — as far back as the section’s launch in 2016 — to answer questions.

    Read Article >
  • When AI models are past their prime.

    A recent study found that when a coding problem posed to ChatGPT (using GPT-3.5) existed on the coding practice site LeetCode before the model’s 2021 training data cutoff, it did a very good job of generating functional solutions, IEEE Spectrum writes.

    But for problems added after 2021, it sometimes didn’t even understand the questions, and its success rate fell off a cliff — underscoring how limited these models are without enough relevant training data.