The use of artificial intelligence continues to grow across multiple sectors, including within the world of journalism. Some are excited about this new technology. Others still have many questions about the ethics of using AI and its place in the workforce.

But AI is not likely to disappear anytime soon, leading several news organizations to come together to determine the best ways to navigate the ethical considerations and dilemmas of using AI in their routine work.

AI Use in the Newsroom

According to data collected between April and July 2023 by JournalismAI, more than 75% of media outlets use AI in some capacity for news gathering, production, or distribution. The survey included 105 news organizations from 46 different countries. About 73% of those surveyed said they believed AI applications present new opportunities.

One Radio Television Digital News Association article explains, “AI intersects with core journalism principles like accuracy, context, trust, and transparency. Carefully weigh all issues before integrating into your news organization.”

The organization notes that AI has been used in newsrooms in many ways, including to modify audio, video, photos, and text. While these tools can be helpful, they can also cause problems: content produced by AI may be confusing or nonfactual, and a lack of transparency about its use raises ethical issues that can damage an outlet's credibility.

AI has been used not only to create specific content but also to enhance features on journalism websites. AI algorithms, for example, have had a large impact on news sites. Some AI tools let journalists review the analytics gathered on their stories, such as how many times a story has been read and how long readers stayed on its page, and can even run tests to see which headlines garner more clicks. Data on how audiences interact with a reporter's stories can then inform future content, suggesting which stories and headlines are likely to be more successful.
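
As a loose illustration, here is a minimal Python sketch of how such a headline test might be scored, assuming two headline variants with simple click and view counts (the function name and all numbers are hypothetical, not any outlet's actual tooling):

```python
import math

def headline_ab_test(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of two headline variants
    using a two-proportion z-test, a common A/B-testing approach."""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    # Pooled click-through rate under the null hypothesis (no real difference)
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, z

# Hypothetical numbers: headline B draws more clicks per view than A
rate_a, rate_b, z = headline_ab_test(clicks_a=120, views_a=4000,
                                     clicks_b=180, views_b=4000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}")
# A |z| above roughly 1.96 suggests the difference is significant
# at the 5% level, so an editor might favor headline B.
```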

The Risks of Using AI

However, there are legal and ethical implications to using AI in newsrooms. A recent Center for News, Technology and Innovation article showed that although generative AI tools can boost productivity and growth, they can also increase the risk of inaccurate information, copyright abuse, and ethical dilemmas, and decrease public trust.

There are also other precautions to take with this assistive technology. When using chatbots, organizations will have to make sure the information appearing in automated responses is accurate and current, and that they are transparent about their use of the technology. For recommended stories based on user-specific algorithms, news outlets will need to make sure AI isn't filtering out certain content and unintentionally creating user biases.
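
To make that filtering concern concrete, here is a minimal Python sketch of how an outlet might audit a user's recommendation feed for topics the algorithm never surfaces, assuming each recommended story carries a topic label (the function and data are hypothetical):

```python
from collections import Counter

def topic_coverage(recommended_stories, all_topics):
    """Report which topics a user's recommendation feed covers,
    flagging topics the algorithm never shows them."""
    seen = Counter(story["topic"] for story in recommended_stories)
    missing = [t for t in all_topics if t not in seen]
    return seen, missing

# Hypothetical feed for one user
feed = [{"topic": "politics"}, {"topic": "politics"}, {"topic": "sports"}]
topics = ["politics", "sports", "world", "science"]

seen, missing = topic_coverage(feed, topics)
print("Shown:", dict(seen))       # {'politics': 2, 'sports': 1}
print("Never shown:", missing)    # ['world', 'science'] -> possible filter bubble
```

Routinely running a check like this across users would let an outlet spot whole categories of coverage that its recommendations quietly exclude.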

In recent years, the use of deepfakes has also become cause for concern. Past cases of shocking clips that were later found to be deepfakes have left some media consumers skeptical about whether video and audio evidence is legitimate.

When AI Gets It Wrong

It's important to note that AI is not infallible, and there have already been numerous instances in which AI has wrecked reputations, created widespread skepticism and eroded public trust in the media. Just this year, U.K. Labour Party leader Sir Keir Starmer made global headlines when a clip surfaced of him verbally abusing staff. It was later found to be a deepfake, but not before angering millions of people on X, formerly known as Twitter.

In another instance, AI wrote a story for MSN.com about a member of the girl group Little Mix but mistakenly posted a photo of a different band member, one of a different race. The mistake shone a spotlight on whether AI can accurately detect faces with darker complexions. Because AI is human-made technology, the incident also raised questions about how likely it is to reflect its makers' biases.

The Future of AI in the Newsroom

AI is becoming more common in newsrooms, and some organizations are trying to find more ethical ways of using the technology. Media outlets like the New York Times and NBC News are already in talks with other news outlets to come up with rules for using these tools, such as the “Principles for Development and Governance of Generative AI.” The principles address intellectual property, transparency, accountability, fairness, and safety, but they are likely just one set of many more to come as news organizations continue to look for ethical ways to incorporate AI into their daily work.

The General Consensus 

The Society of Professional Journalists' official code of ethics has four rules: “Seek Truth and Report It,” “Minimize Harm,” “Act Independently” and “Be Accountable and Transparent.” These rules apply to the use of AI in journalism, as well.

The overall theme across these articles is that AI use in journalism carries a wide range of pros and cons. It can be a helpful tool for content creation, improving algorithms, reducing workloads, finding data and coming up with topic ideas. But it can also breach journalistic integrity, trust and ethical standards. The general consensus, for now, appears to be that AI is acceptable to use in journalism as long as it is used in moderation and its output is fact-checked by real humans. Additionally, journalists using this technology are urged to be transparent about the role AI plays in their work.

Some news organizations have already begun discussions about setting guidelines for AI usage in their everyday routines. AI may be a concern to a lot of people, but it's also unlikely to go away anytime soon.

Written by Emily M.
