Opinion – Urgent need to integrate AI in newsrooms

The latest technology to disrupt newsroom operations is arguably artificial intelligence. Taking the form of generative AI, this technology allows machines to create content, act on prompts and process information much as human beings do.

Because of its ability to perform tasks autonomously, there is a deep-seated fear that this technology will replace human workers and, by extension, journalists.

While this fear cannot be dismissed out of hand, it is important to note that AI is here to stay, and newsrooms need to be more proactive and strategic in harnessing its opportunities while putting guardrails in place to mitigate its threats.

Newsrooms cannot afford to bury their heads in the sand. This is the time for learning and unlearning as the latest wave of digital transformation permeates the news-making ecosystem.

A recent report by the Namibia Media Trust (NMT) reveals that the use of artificial intelligence (AI) in Namibian newsrooms has become increasingly widespread, albeit in an individualised and non-strategic manner.

With its main focus on Namibia, the study shows that a striking 73% of journalists in the country are using AI chatbots for various aspects of their work, including text editing, transcription, news gathering, and content generation.

This trend underscores the urgent need for media organisations not just in Namibia but across the southern African region to strategically adopt AI in their newsroom workflows and operations. This also means adopting responsible and ethical guidelines on how to integrate AI into their news production, distribution, monetisation, consumption and measurement processes.

Recently, we have seen how the ad hoc adoption of generative AI tools such as DALL-E and Midjourney, and of Large Language Models (LLMs) like ChatGPT and Bard, has reshaped how news is created, packaged, consumed and monetised. In May 2023, WAN-IFRA conducted a survey which found that 49% of respondents had already used generative AI tools. Salesforce, which owns the popular workplace tool Slack, also surveyed 14 000 workers in 14 countries and found that 28% of these employees use generative AI in their workplaces, with more than half doing so without official endorsement from their employers.

For media practitioners, AI, like other technologies before it, is double-edged: its impact depends largely on how it is adopted and adapted to the newsroom context. In an age where information travels at lightning speed and the demand for timely, accurate reporting has never been greater, the integration of AI into journalistic practices is no longer a futuristic concept but a pressing necessity.

Southern Africa’s media industry faces numerous challenges, including dwindling resources, the proliferation of misinformation, and a rapidly changing digital environment. Traditional methods of news gathering and reporting are increasingly inadequate for meeting the needs of a tech-savvy audience that expects real-time updates and personalised content.

Newsrooms must therefore adapt or risk obsolescence. AI technologies can significantly enhance the efficiency and productivity of media organisations. By automating routine tasks such as transcription, translation, and text editing, AI frees journalists and editors to devote more time to in-depth reporting and investigative journalism. This not only improves the quality of content but also allows for a more thorough exploration of pressing issues facing society.

While the potential benefits of AI are many and varied, the unskilled use of these technologies poses significant risks to journalistic integrity. Media organisations need to strategically adopt AI, training their staff on the ethical implications and potential pitfalls of its use. This includes understanding how to discern credible information from disinformation and misinformation, maintaining transparency in AI-generated content, and safeguarding against biases that may arise from relying too heavily on AI tools and systems.

By imparting relevant skills through on- and off-the-job training programmes, news organisations can empower journalists to use AI responsibly, securely and safely, upholding the standards of accuracy and fairness that are foundational to the profession. Some of this work has already begun. For instance, last year, Namibia’s Ministry of Higher Education, Training and Innovation, in collaboration with UNESCO, gathered more than 80 professionals from government, academia, civil society, and the private sector in Windhoek for a stakeholder engagement on the implementation of UNESCO’s Recommendation on the Ethics of AI.

The recommendation emphasises the need for transparency, accountability, and inclusivity in the development and use of AI. It also calls for the protection of human rights and the promotion of gender equality and diversity in AI development.

The engagement followed the September hosting of the Southern Africa Regional Forum on Artificial Intelligence (SARFAI) in Namibia. The forum served as a unique opportunity for deliberation among Southern African countries on how to leverage synergies and shape a shared agenda for the development and use of AI for the common good, with strong and clear ethical and human-rights-based foundations in line with the 2021 UNESCO Recommendation on Ethics of AI.

In order to implement these recommendations, a readiness assessment methodology (RAM) needs to be created to help member states and their stakeholders determine which frameworks, capacities and competencies are in place at the country level to support the responsible use and growth of AI.

Beyond these efforts, strategic adoption of AI also involves fostering a culture of continuous learning within media organisations. As AI technologies evolve, so too must the skill set of the journalists who use them. Investing in training programmes that focus on both the technical aspects of AI and its ethical considerations will prepare journalists to navigate the complexities of digital and AI-enhanced reporting.

This proactive approach not only enhances individual capabilities but also strengthens the overall integrity and credibility of the media landscape in Namibia. The integration of AI into southern Africa’s media organisations is not just a trend; it is a necessity for media sustainability and survival in the digital age. By adopting AI strategically, media outlets can enhance their efficiency, support robust news gathering, and uphold journalistic integrity. 

As the landscape continues to evolve, it is imperative that news organisations invest in the hard and soft skills necessary to responsibly harness AI’s potential. 

This also means news organisations should invest in locally designed AI tools and systems that can support their digital public infrastructures. Doing so will improve their workflows and operations in an environment where journalism is increasingly practised online. Embracing AI thoughtfully will ensure that the region’s media remains a trusted source of information in the years to come.

The time to do the right thing is now. Rome was not built in a day.

*Hilary Mare is a PhD candidate with the Department of Communication and Media at the University of Johannesburg. He has been in media management for over a decade.

*Professor Admire Mare is an associate professor and Head of the Department of Communication and Media at the University of Johannesburg. He is a thought leader on the ethical and responsible use of artificial intelligence in various domains, including media, elections, higher education and workplaces.