AI Ethics in the Newsroom: Building for the World We Want
In the last session of the MBA course I teach at the University of Virginia, “Technology and Ethics,” I ask my students to imagine what they want the world to look like in twenty years – for themselves, their families, and their communities. Then I ask them to imagine what technology will look like in twenty years.
For many students, there is a disconnect. The systems and incentives they imagine technology will create are in tension with the world they want for their children. One parent from this year’s course observed that if tomorrow’s social media business models continue to rely on advertising, today’s problems with attention and well-being will become even more pervasive as our lives move further online. Within that misalignment lies a call to action – for students to go out into the world and build products that serve human flourishing rather than simply chasing the coolest technology or what will make them rich.
As Bill Gates once said, “Most people overestimate what they can do in one year and underestimate what they can do in ten years.” Given the accelerating pace of generative AI, Gates’s words ring truer now than ever. The decisions we make now will shape the world our children and grandchildren inhabit.
Media companies, in particular, have a vital role to play. As they confront a future driven by generative AI – new competitors, writing companions, news delivery formats, or perhaps all the above – they need to ask themselves a version of the questions I ask my students: what do they want newsgathering, news distribution, and news consumption to look like in twenty years?
And, is their industry headed in a direction to support the world they want?
Judging by their recent statements, many news executives would likely answer the first question with some combination of the following: they would want as many people as possible to be informed by fact-based journalism. They would want news to be a trusted institution at the center of global democracy, holding power to account. And they would want a thriving local news ecosystem so people know what’s going on around them.
However, these executives would also acknowledge that our industry is headed in the wrong direction to support their desired future. Today’s media environment features the “choose your own adventure” world of social media, declining trust in societal institutions, and mis- and disinformation. Media companies are incentivized to compete in a race to the bottom for attention, which forces them to make choices that further entrench the industry’s dysfunction. If those incentives continue, the AI-driven future will be a worse version of today.
However, newsrooms are far from powerless as they confront the future. Leaders at media companies have the agency to shape the industry in a way that closes the gap between their ideals and where our industry might be headed.
With regard to AI, the first step media companies should take to build a better future for the industry is to develop a set of ethical principles. Building these AI principles is a three-step process.
First, media organizations should undertake the thought exercise my students went through: imagining what the future of the media ecosystem should look like.
Second, they should brainstorm the worst possible outcomes – what tomorrow’s media ecosystem will look like if generative AI exacerbates the issues of the current one.
Finally, leaders can set goals oriented toward the world they want in Step #1, guided by ethical principles designed to mitigate the negative outcomes they identified in Step #2. Based on those principles, organizations can decide which applications of generative AI are in bounds and which are out.
Some real-world examples from my media industry colleagues of this three-step process in action:
There will be times when companies will want to supplement human reporting with AI summaries. However, there is a risk that blending these two types of content will erode readers’ trust in reporting and obscure where source material comes from. Two ethical principles would mitigate this harm: mandating disclaimers, so readers know what content comes from AI; and citing sources in AI summaries, so readers can fact-check on their own. Not only will readers understand where reporting comes from, but publishers will also gain credit – and monetary compensation – for their work.
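To make that concrete, here is a minimal sketch in Python of what such a gate might look like in a publishing pipeline. Everything in it – the Summary type, the disclaimer text, the render_for_publication function – is hypothetical, not a description of any real system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a publication gate that enforces disclosure
# and citation for AI-generated summaries.

@dataclass
class Summary:
    body: str
    ai_generated: bool
    sources: list = field(default_factory=list)  # URLs of the human reporting summarized

AI_DISCLAIMER = "This summary was generated with AI assistance."

def render_for_publication(summary: Summary) -> str:
    """Refuse to publish AI-generated text without disclosure and citations."""
    if not summary.ai_generated:
        return summary.body
    if not summary.sources:
        raise ValueError("AI summaries must cite the reporting they draw on.")
    citations = "\n".join(f"Source: {url}" for url in summary.sources)
    return f"{AI_DISCLAIMER}\n\n{summary.body}\n\n{citations}"
```

The point is the shape of the rule: an AI summary that lacks a disclaimer or citations simply cannot reach the reader.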
Another: AI eventually will be able to write certain types of stories. However, if newsrooms lean on it too heavily, they will neglect to surface new information or ask the tough questions only shoe-leather reporters can. A potential ethical principle: articulating the degree to which humans need to be in the loop for different types of stories. A high school football recap might be written predominantly by AI; the latest political corruption scandal should be covered by a human.
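One way to operationalize that principle – purely illustrative; the story types and policy table below are assumptions, not any newsroom’s actual standards – is as an explicit policy map that defaults to the strictest level of human involvement:

```python
from enum import Enum

# Hypothetical sketch: encoding a human-in-the-loop principle as an
# explicit, reviewable policy rather than an ad hoc judgment call.

class HumanRole(Enum):
    REVIEW_ONLY = "AI drafts; an editor reviews before publication"
    CO_WRITE = "AI assists; a reporter writes and owns the story"
    HUMAN_ONLY = "no AI drafting; reported and written by humans"

STORY_POLICY = {
    "high_school_sports_recap": HumanRole.REVIEW_ONLY,
    "earnings_summary": HumanRole.REVIEW_ONLY,
    "local_feature": HumanRole.CO_WRITE,
    "political_corruption": HumanRole.HUMAN_ONLY,
    "investigation": HumanRole.HUMAN_ONLY,
}

def required_role(story_type: str) -> HumanRole:
    # Anything unclassified defaults to the strictest standard.
    return STORY_POLICY.get(story_type, HumanRole.HUMAN_ONLY)
```

Writing the policy down this way forces a newsroom to debate the boundaries explicitly, and the safe default catches anything the policy’s authors did not anticipate.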
A final one: generative AI can help fact-check stories because its models are trained on vast swaths of what has been published on the internet. However, models might hallucinate or, perhaps worse, insert their own inherent biases into their interpretation of an event. Some ethical principles here could involve ensuring that all editors understand the different biases of the AI models they use for fact-checking and that every story is reviewed by a human before it goes out the door.
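A sketch of how those two principles could be enforced in a workflow – again entirely hypothetical; the FactCheck record and publish gate below are assumptions, not a real newsroom tool – treats every AI verdict as an advisory signal attributed to a named model, with human sign-off as the only hard gate:

```python
from dataclasses import dataclass

# Hypothetical sketch: AI fact-checks are advisory and attributed to a
# named model; human approval is the only hard requirement to publish.

@dataclass
class FactCheck:
    claim: str
    model_name: str  # recorded so editors can weigh that model's known biases
    verdict: str     # e.g. "supported", "contradicted", "uncertain"

def publish(story: str, checks: list, human_approved: bool) -> str:
    if not human_approved:
        raise PermissionError("Every story requires human review before it goes out.")
    for check in checks:
        # Surfaced to the editor as context, never acted on automatically.
        print(f"[advisory from {check.model_name}] {check.claim!r}: {check.verdict}")
    return story
```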
Will newsrooms’ ethical principles be universal? No, both because different organizations have different goals (the New York Times is not Google is not The Baltimore Sun) and because there are no moral absolutes here. Nevertheless, if organizations start from a place of moral clarity, they will be able to identify what applications of generative AI are fair game.
Amid the catastrophizing about generative AI’s potential risks, the technology also has tremendous potential for newsrooms. If media organizations adopt ethical principles to guide their work, publishers, readers, and democracies around the world stand to benefit.