Perplexity AI Results Include Plagiarism and Made-Up Content, Reports Say

Investigations by Wired and Forbes say the AI-powered search engine is stealing publications' work. The company behind the tool has pushed back.

Ian Sherr, Contributor and Former Editor at Large / News

Publications have raised questions about the artificial intelligence startup Perplexity.

James Martin/CNET

The artificial intelligence startup Perplexity AI has raised $165 million in an effort to upend the online search industry led by Google. Now investigations by Wired and Forbes have found that Perplexity is stealing content from them, and likely from other publications, to feed its "answer engine."

In a report published Wednesday, Wired said "it's likely" Perplexity visited its site and others owned by its parent company, Condé Nast, "thousands of times without permission." Wired then asked Perplexity's chatbot about unique stories the AI shouldn't have access to, and the publication received summaries regardless, some of which were inaccurate.

The story comes after Forbes last week accused Perplexity of ripping off its original reporting without appropriate attribution, a move Perplexity's CEO initially chalked up to "rough edges" in his company's technology. Axios reported late Tuesday that Forbes has now threatened Perplexity with legal action.


Perplexity's CEO told the Associated Press that the company "never ripped off content from anybody" and that instead his company is an "aggregator of information." A company representative declined to comment further to CNET, other than to say Perplexity is "committed to finding ways to partner with media companies to create aligned incentives for all stakeholders." The representative also said Perplexity is developing a revenue-sharing program for media companies, along with free access to its tools.

The reporting on Perplexity is a flashpoint in a growing debate about AI and how it's being used around the web. Industry analysts and longtime tech watchers have raised concerns that tech companies large and small are adding AI tools to their products at too rapid a pace. (See CNET's AI Atlas for comprehensive coverage of what's happening in artificial intelligence, including reviews of Perplexity, ChatGPT, Gemini and other chatbots.)

Others, now including Wired and Forbes, have raised questions about how AI companies are building their technologies, and whether they've taken copyrighted works without permission. 

The New York Times sued OpenAI and its partner, Microsoft, late last year, saying the companies used its articles to "train chatbots that now compete with it." OpenAI responded at the time that the suit is "without merit" and that the company believes its systems are protected under "fair use" provisions in copyright law. OpenAI told UK lawmakers earlier this year that because copyright law covers blog posts, photographs, forum posts, code and documents, "it would be impossible to train today's leading AI models without using copyrighted materials."

Getty Images similarly filed a suit against Stability AI last year, alleging that the image-generation startup had copied more than 12 million photographs from its collection without permission, "as part of its efforts to build a competing business." Stability AI reportedly acknowledged that Getty's images were used to train its AI, a process involving "temporary copying" to create its service, but it said the service generates "new and original synthetic images."

The tech industry's aggressive approach to AI has already undercut core services across the internet. A dramatic example is Google's new AI Overviews feature, meant to summarize the search results that billions of people rely on to find facts around the web. Within days of its launch, Google's service was found to be spreading racist conspiracy theories and dangerous health recommendations, including suggestions to add glue to pizza and to eat rocks as part of a healthy diet. (Google has since pumped the brakes on its AI summaries, though some users are still reporting egregious errors.)

Adobe, meanwhile, faced backlash from users who worried that a new version of its terms of service entitled the company to use their work to help train its AI engines without explicit permission. Adobe on Tuesday released new terms of service agreements promising it will not use customer work unless it's submitted to the Adobe Stock marketplace.

The accusations from Wired and Forbes aren't the only concerns raised about Perplexity. The AP separately reported that another Perplexity product invented a fake quote from a real person. Perplexity's CEO told the news outlet that this involved "a minor use case" and that this product is "more prone to hallucinations" because it's not tied to the web search capabilities of the company's main product.

Wired also said Perplexity falsely claimed that Wired published a story saying a specific California police officer had stolen bicycles from a community member's garage. (Wired also said the police department thanked the publication for "correcting the record" about Perplexity's false claim.) Wired said Perplexity's CEO told the publication in a statement that Wired had misunderstood how its product and the internet work but that the statement "did not dispute the specifics of Wired's reporting."