AI + Automation Lab

Working with Artificial Intelligence: AI Guidelines at Bavarian Broadcasting

Technology is never an end in itself at Bavarian Broadcasting. Instead, it must help us deliver on a higher purpose: to make good journalism. This purpose-driven use of technology guides our use of artificial intelligence and all other forms of automation. These guidelines inform our choices about whether and how we want to use AI and automation.

By: Katharina Brunner, Rebecca Ciesielski, Philipp Gawlik, Uli Köppen, Steffen Kühne, Jörg Pfeiffer, Cécile Schneider

Last updated: 16.07.2024

A woman's face with light projection | Picture: BR

At Bavarian Broadcasting, we aim to foster cooperation between humans and Artificial Intelligence, enriching our journalism with new technical capabilities. Therefore, we ask ourselves before employing new technology: Does this offer a tangible benefit to our users and employees at BR? 

Our employees at BR are irreplaceable. Technology will augment our work and shape new roles and responsibilities in all departments. To answer the question of benefit time and again in the face of rapidly evolving technology, we have given ourselves these core guidelines for our day-to-day use of AI and automation.

How we understand AI 

The term AI ("Artificial Intelligence") can have different meanings depending on the context. AI can be understood as a set of computer features mimicking intelligent behavior. Yet it defines neither the meaning of "intelligence" nor the technical methods in concrete terms. For practical reasons, we limit our definition to computer systems that are trained to fulfill a specific task. This area of computer science and software development is called "machine learning".
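To make this definition concrete, here is a minimal, purely illustrative sketch of machine learning in the sense described above: a system is trained on examples to fulfill exactly one task, in this case sorting short texts by topic. It uses the open-source scikit-learn library and invented example data; it does not describe any system in use at BR.

```python
# Minimal illustration of "machine learning": a system is trained on
# examples to fulfill one specific task (here: topic classification).
# Illustrative only; the training data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: text snippets with known topics.
texts = [
    "The city council approved the new tram line.",
    "The striker scored twice in the final minutes.",
    "Parliament debated the proposed budget cuts.",
    "The home team lost the derby after extra time.",
]
topics = ["politics", "sports", "politics", "sports"]

# Train a classifier for exactly one task: predicting the topic.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, topics)

print(model.predict(["The mayor announced new housing policies."]))
# -> ['politics']
```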

We extend these guidelines to algorithmic decision-making systems in general, as their technical foundations and social impact are similar.

Our Guidelines for the Use of AI and Automation:

1. User Benefit  

We demand proven benefits for our users and workflows when using AI systems. We deploy AI to help us use the resources that our contributors entrust us with more responsibly by making our work more efficient. We also use AI to create new content, develop new methods for investigative journalism, support our workflows and improve our products.

Our critical reporting on Artificial Intelligence (Algorithmic Accountability Reporting) is backed by our team's learnings from developing and using AI. We participate in the debate on the societal impact of algorithms by providing information on emerging trends, investigating algorithms, explaining how technologies work and strengthening an open debate on the future role of public service media in our society.

2. Accurate Representation of AI 

We describe AI as technical systems and avoid misleading anthropomorphic wording. Drawing analogies between AI functions and human intelligence and skills like reading, writing, or thinking is likely inaccurate and presents the technology as overly powerful. Metaphors and imagery can reinforce the deceptive impression of artificial beings. Therefore, we avoid humanized images and descriptions in our publications.

3. Editorial Control & Transparency 

The principle of editorial control remains mandatory with automated content. This means that only human individuals and editorial teams can be responsible for content, never the systems helping to create it. We verify data sources and thoroughly check models and software for reliability. We set up customized human workflows and technical controls for the technology we use. The output of generative AI systems must be checked editorially before publication.
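As a schematic illustration of this principle, the following sketch shows a publishing gate in which generative output can only be proposed, while a human editor must approve it and thereby take responsibility before anything goes out. All names and types are hypothetical; this is not BR's actual workflow code.

```python
# Hypothetical sketch of human editorial control over generated content.
# Names and types are illustrative; this is not BR's actual tooling.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    generated_by: str                  # tool that produced the draft
    approved_by: Optional[str] = None  # editor who takes responsibility

def approve(draft: Draft, editor: str) -> Draft:
    """Record the human editor who checked the content and now owns it."""
    draft.approved_by = editor
    return draft

def publish(draft: Draft) -> None:
    """Refuse to publish anything without a responsible human editor."""
    if draft.approved_by is None:
        raise PermissionError("Generative output requires editorial approval.")
    print(f"Published (responsible editor: {draft.approved_by}): {draft.text}")

draft = Draft(text="Election results summary ...", generated_by="summarizer-v1")
publish(approve(draft, editor="Jane Doe"))  # OK: a human has signed off
# publish(Draft("...", "summarizer-v1"))    # would raise PermissionError
```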

When using extensively generated content, we explain to our users what technologies we use, which risks and limits we may encounter, how data is used, and which editorial teams or partners are responsible for the content. We clearly label automatically generated content and publish our technical approach.
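Such labeling could, for example, take the form of machine-readable metadata attached to each automated piece. The fields below are hypothetical and merely illustrate the kinds of information this guideline calls for; they are not an actual BR schema.

```python
# Hypothetical, machine-readable label for an automatically generated
# article. Field names are illustrative, not an actual BR schema.
import json

label = {
    "automated": True,
    "tool": "text-generator-v2",                   # which technology was used
    "known_limits": "may misstate figures; checked by an editor",
    "data_sources": ["official election results feed"],
    "responsible_team": "BR AI + Automation Lab",
}
print(json.dumps(label, indent=2, ensure_ascii=False))
```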

4. Impact Assessment 

Before we deploy AI, we analyze possible outcomes and side effects, including compliance with editorial standards, current law, and foreseeable consequences for our staff. Training and running large AI models is highly energy-intensive, so we factor this challenge into our choices.

Where possible, we make our work accessible and use open-source software. When using third-party AI systems, we audit their performance and limitations. We also include ethical considerations and impact assessment in our decision-making process for third-party software.  

5. Diversity & Regional Focus 

We embark on new projects conscious of societal diversity, striving to set up our teams as diversely as possible. AI gives us leverage to create more inclusive and accessible content. Furthermore, we strive for awareness and transparency in handling potentially discriminatory stereotypes in training data. For instance, we are working on language models that can process regional dialects.

6. Conscious Data Culture 

Which data set was used to train a particular AI model is a crucial question for us. If vendors cannot supply solid information regarding their data sources, we consider using such technology very carefully. We always prefer transparent systems. Correspondingly, we strive for integrity and quality of training data in all in-house developments. We continually raise awareness amongst our employees that only reliable data, handled within the framework of our data protection regulations, can produce reliable AI applications.

When using voice, facial expression, body language, and similar forms of human communication to train AI models, we acknowledge our special responsibility in this field. 

7. Responsible Personalization 

Personalization can strengthen the information and entertainment value of our media services, so long as it neither undermines communication and solidarity in our society nor creates unintended filter bubble effects.

Hence, we actively collaborate to develop public-service recommendation algorithms. 
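One simplified idea behind such public-service recommenders is to re-rank personalized suggestions so that no single topic crowds out all others. The sketch below, with invented data and a deliberately naive per-topic cap, illustrates that idea; real recommendation systems are considerably more sophisticated.

```python
# Simplified sketch of diversity-aware re-ranking: keep the personalized
# relevance order, but cap how many items one topic may contribute.
# Data model and cap are hypothetical illustrations only.
def rerank_with_diversity(items, max_per_topic=2):
    """Greedy re-ranking over relevance-sorted items with a topic cap."""
    counts, result = {}, []
    for item in items:  # items are assumed sorted by personalized relevance
        topic = item["topic"]
        if counts.get(topic, 0) < max_per_topic:
            result.append(item)
            counts[topic] = counts.get(topic, 0) + 1
    return result

recommendations = [
    {"title": "Transfer rumors", "topic": "sports"},
    {"title": "Derby preview", "topic": "sports"},
    {"title": "Season stats", "topic": "sports"},
    {"title": "State election explained", "topic": "politics"},
    {"title": "New climate report", "topic": "science"},
]
for item in rerank_with_diversity(recommendations):
    print(item["title"], "-", item["topic"])
# Sports is capped at two items, letting politics and science through.
```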

8. Culture of Learning 

We need experience and learning from pilot projects and prototypes to continuously improve our products and guidelines. We evaluate new technology in secured sandbox environments. This helps us learn while minimizing risks. That way, we ensure that our final product offering fulfills our standards, while still encouraging a culture of learning and experimentation in our day-to-day work.  

We regularly reflect on our work and ethical edge cases of AI technologies in interdisciplinary teams consisting of journalists, developers, AI experts, and management. 

9. Network & Partnerships 

We provide students and faculty at universities access to the day-to-day work of a large media house for their research and collaborate with academia and industry to run experiments, for example, with machine learning models and text generation. We exchange ideas with research institutions and ethics experts. We work with established tech companies and regional start-ups to use existing AI expertise and foster hands-on projects in the media sphere. 

 

The first version of the BR AI guidelines was published on 30.11.2020 (PDF, 733.6 KB).