Phil Lee’s Post


Managing Director, Digiphile - Data advice that is Simple. Strategic. Actionable.

Under Article 50(2) of the #AIAct, providers of generative AI systems must ensure that any audio, image, video or text content they generate is "marked in a machine-readable format and detectable as artificially generated or manipulated". We can expect regulatory guidance in the form of Codes of Practice produced by the AI Office, and it will still be a couple of years before this provision comes into effect. But just yesterday I saw an early indication of how it is likely to work, following a post I had shared (https://lnkd.in/emWzK27d) in which I used a cartoon graphic of a robot standing next to an explosion, generated using ChatGPT.

When I looked at the post later in the day, I noticed it sported a little "CR" icon in its top left corner. Curious as to what this was, I clicked on it, and it opened a "Content credentials" dialogue box indicating that the image had been generated by OpenAI.

What's most remarkable is that I didn't download the image from ChatGPT (the download was in .webp format, which LinkedIn doesn't recognise); I had instead screenshotted it and saved it as a .png file. Despite this, LinkedIn still recognised it as AI-generated.

Following the links in the content credentials dialogue box took me to a page explaining that LinkedIn has adopted the Coalition for Content Provenance and Authenticity (C2PA) standard to help identify AI-generated content, where it has been "cryptographically signed using C2PA Content Credentials". The goal of C2PA "is to enable consumers to trace the source and authenticity of media content, including when generative AI use is detected". If you're interested to read more, see here: https://lnkd.in/euXPjUvm

So this is what the future of AI-generated content labelling will likely look like, and you should expect to see this type of label cropping up more often across digital media. In the meantime, good on OpenAI and LinkedIn for proactively adopting these measures now, rather than waiting until they become a legal obligation.
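For the technically curious: C2PA Content Credentials travel inside the media file itself. Per my reading of the C2PA specification, in a PNG the signed manifest store is embedded as a dedicated "caBX" chunk, so a quick way to check whether a file carries credentials is to walk the PNG chunk list. Here is a minimal Python sketch (the function name `has_c2pa_manifest` is my own; this only detects the manifest's presence, it does not verify the cryptographic signature — for that you'd use a full C2PA library):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def has_c2pa_manifest(png_bytes: bytes) -> bool:
    """Walk the PNG chunk list and report whether a C2PA manifest
    (embedded as a 'caBX' chunk per the C2PA spec) is present."""
    if not png_bytes.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    offset = len(PNG_SIGNATURE)
    while offset + 8 <= len(png_bytes):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", png_bytes[offset:offset + 8])
        if ctype == b"caBX":   # JUMBF box holding the C2PA manifest store
            return True
        if ctype == b"IEND":   # end-of-image chunk: nothing follows
            break
        offset += 8 + length + 4
    return False
```

Note that this mechanism also explains why metadata-stripping operations (such as re-encoding or screenshotting) can remove the embedded credentials, which is why detection in practice may combine embedded manifests with other signals.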

Mateusz Łabuz

Lecturer at the University of the National Education Commission and Pontifical University in Cracow; Career Diplomat in the Ministry of Foreign Affairs of Poland; PhD Candidate at the Chemnitz University of Technology

1w

You might find it useful while analysing provisions of the AI Act on deep fakes and synthetic media: https://onlinelibrary.wiley.com/doi/10.1002/poi3.406 ("Deep fakes and the Artificial Intelligence Act—An important signal or a missed opportunity?" published recently in "Policy & Internet").

Christos Makris, LL.M. Eur

Legal-Tech || AI Governance || Cybersecurity || Data Protection || Privacy Matters || Digital Rights

4w

Thanks for sharing, Phil Lee 👏🏼🔝 I have to add that the chain here is OpenAI ➡️ Microsoft ➡️ LinkedIn

Gabriel Bangura

MA Candidate in Applied Human Rights | Human Rights Advocate

1mo

This is remarkable! Platforms like LinkedIn are already using visual identifiers suggesting that a post was likely made with generative AI, even when the post's metadata is unavailable. I am curious to see how they implement this for text-based posts.

Kathleen Aguilar, FIP, CIPP/US/E, CIPT

Privacy Law | Data Innovation and Strategy | AI | Technology and Product Counsel

1mo

Thank you! I was very much wondering in the last couple of days how this would work, and then you must have read our collective minds (I'm not the only one wondering) and magically posted this. You are a godsend.

Jax Harrison

Where technology meets the human need. Founder and CEO - The Future Found. Board Advisor, Director, Speaker, HumanRights.

1mo

It's an important step toward improving #onlinesafety

Ryan Gutierrez

Community Programs Specialist I Chief People Officer

1mo

Angeline Corvaglia have you seen this label process yet?
