AI tools in news development are here to stay. They are powerful, but they also need robust policies to guide their ethical use.

If there is one technology that became hugely popular in 2023, it is generative artificial intelligence (AI) and the chatbots built on it. Breakthroughs in AI have led to its use in many fields, including education and journalism. In newsrooms, these tools have been pivotal in translating articles into different languages, proofreading, and crafting headlines, among other tasks.

However, the use of AI in reporting, whether in print or online, has not been smooth. There have been instances where writers have published AI-assisted articles containing factual inaccuracies.

In other cases, reporters, forgetting that their craft is built on intellectual honesty, have tried to pass off AI-generated content as their own. To the keen eye, such articles are quite easy to spot.

Other concerns stem from professional anxiety. Is AI cheaper than hiring human reporters? Does it make sense for newsrooms to cut costs by using the technology in place of seasoned reporters?

These are serious questions, but they do not have easy answers. 

Media companies acknowledge using AI tools widely

According to a report that interviewed 105 media companies across 45 countries, over 75% of participants use AI in news gathering, production, or distribution. About a third have, or are developing, an institutional AI strategy.

That’s not all; newsrooms vary in their AI approaches based on size, mission, and resources. Some focus on interoperability, others take a case-by-case approach, and certain organisations aim to build AI capacity in regions with low AI literacy.

Approximately a third feel their companies are prepared for the challenges of AI adoption.

The report adds, “There are concerns that AI will exacerbate sustainability challenges facing less-resourced newsrooms which are still finding their feet, in a highly digitised world and an increasingly AI-powered industry.”

It is an approach that will be adopted widely in the coming days, considering that the media business is trying to save costs while keeping productivity high. Eric Asuma, CEO of Kenya’s business publication Kenyan Wallstreet, told TechCabal, “Looking ahead to 2024, my prediction centres on the role of artificial intelligence in the evolution of new media. I foresee a shift in which innovative use of AI will become instrumental in enhancing newsrooms, particularly in discerning and interpreting trends, especially within the financial media space. We will be unveiling an exciting initiative in Q1 2024 along these lines.”

All serious media companies need to develop an AI policy

Reporters will continue using AI in newsrooms, but its use must be managed well. Following the launch of ChatGPT and other chatbots, more newsrooms, such as the Financial Times (FT), The Atlantic, and USA Today, have developed guidelines on how AI can be used in the news business. These policies have been put in place because media companies understand the importance of AI tools and want to preserve journalistic ethics and values.

In its AI policy, broadcaster Bayerischer Rundfunk (BR) says it uses AI to improve the user experience by responsibly managing resources, improving efficiency, and generating new content. The company also contributes to discussions about the societal impact of algorithms while fostering open discourse on the role of public service media in a data society.

The BBC says it is dedicated to responsible advancements in AI and machine learning (ML) technology. “We believe that these technologies are going to transform the way we work and interact with the BBC’s audiences—whether it is revolutionising production tools, revitalising our archive, or helping audiences find relevant and fresh content via ML recommendation engines,” BBC clarified in its AI policy.

Others, such as the FT, are pushing for honesty. “We will be transparent, within the FT and with our readers. All newsroom experimentation will be recorded in an internal register, including, to the extent possible, the use of third-party providers who may be using the tool,” FT says.

And why is this important? Setting up policies for how AI is used in news and story creation is key to maintaining transparency, upholding journalistic standards, and addressing potential biases. It ensures the responsible and ethical deployment of AI technologies in the media industry.
