The Brave New World of AI in Journalism
An email from NPR this week announced that the organization is “actively engaged in developing a framework and set of principles to guide its decision-making on all aspects of AI (Artificial Intelligence) investment and usage.” The email went on to say that NPR would be consulting with experts across a wide range of areas, including editorial, legal, security and data governance, to evaluate how AI might be used at NPR and across the NPR Network.
Recently, the Radio Television Digital News Association (RTDNA) also issued guidelines for the use of AI in journalism. According to RTDNA, these guidelines are among the first regarding AI from a national journalism organization. The RTDNA guidelines read more like a warning label than a set of standards or a policy statement. In a section labeled “Accuracy, Context and Clarity,” the RTDNA guidelines say that newsrooms should consider the following questions:
- Can you fully understand the capabilities and source material for the AI program before implementation?
- What are your safeguards to protect against inadvertent plagiarism?
- Can you independently verify the AI tool’s accuracy?
- Are there opportunities to test the AI tool prior to publication?
- What is your newsroom system and set of expectations for human review before publication?
AI has been used to produce works of journalism for some time. The Associated Press reports using AI since 2014 to generate automated data-driven text stories, covering topics such as financial reports and sports results. The AP says that thanks to AI it increased its output by a factor of 10 on corporate earnings stories for all publicly traded companies in the United States.
But today’s AI is not the AI of yesterday. The current iteration of AI, called generative AI, is capable of scouring large volumes of data at high speed and synthesizing the information contained in that data into narrative text that mimics natural human language remarkably well. This new capability, powered by sophisticated language models, now enables an AI chatbot to generate a relatively balanced article on a topic based on a simple prompt or a set of supplied parameters.
The implications of generative AI (and what comes after it) for journalism are far-reaching. On the bright side, journalists could use AI to mine data that was previously unmanageable and develop deeper, fact-based stories. On the dark side, tech companies (many of which have now become media companies — think Google and Apple) can use AI to generate content cheaper and faster as a way to maximize profits, without needing to compensate the journalists who create the original source material from which AI stories are derived.
A recent article by Maggie Harrison on the website Futurism describes the danger of AI to the fragile journalism economy. The article’s leave-nothing-to-the-imagination title says it all, “Google Unveils Plan to Demolish the Journalism Industry Using AI.” Harrison describes a demo of a new AI-powered search interface recently unveiled by Google that puts at the top of every Google search an AI generated summary called an “AI snapshot.” Harrison maintains that because research shows that information consumers hardly ever make it to the second page of search results, or even to the bottom of the first page, this AI snapshot will be a game-changer that will crowd out original journalism and further destabilize the revenue streams that support it. Harrison writes: “… the demo raises an extremely important question for the future of the already-ravaged journalism industry: if Google's AI is going to mulch up original work and provide a distilled version of it to users at scale, without ever connecting them to the original work, how will publishers continue to monetize their work?”
It's a good question that deserves an answer. Our democracy may very well depend on it.