Artificial intelligence (AI) is the way of the world now, gaining ground in just about every aspect of our lives. From medical advancements to entertainment, it is reshaping society even as it battles controversy and skepticism. For one thing, this emerging technology does not have enough oversight, and no one knows for sure what the long-term repercussions might be. For another, AI makes it easy to spread disinformation, further eroding the trust people have in the news. So, why are so many news outlets turning to artificial intelligence to deliver their information?
News Outlets Turning to Artificial Intelligence?
News isn’t what it used to be. No longer are we experiencing great reporting by the likes of Walter Cronkite. On the other hand, we don’t have to wait for the paper to be delivered — on the porch, it is hoped, instead of in the middle of the sprinkler — or to watch the nightly broadcast to get our fix. Technology can be a beautiful thing … sometimes.
This new age, though, comes with serious challenges for the media. Since most information today is available digitally for free, advertising revenues have declined. People aren't buying subscriptions as much, and why would they when they can get the information for free elsewhere? The trend is to get the news out there as fast as possible, even if it's incomplete or contains ambiguous or false information. The idea is that mistakes can be fixed later; being the first to report on a breaking story is more important.
The United Nations suggested that “these pressures can lead to editorial compromises, such as prioritising trending content over public-interest reporting or aligning coverage with political interests.” And the inclusion of artificial intelligence can make matters worse. “Automation could potentially replace not only reporters, but also designers, editors, and distribution staff,” the organization warned. “With fewer journalists on the ground, we risk losing investigative reporting, local news coverage, and the rich storytelling that defines journalism.”
AI can be beneficial in news production, such as text-to-audio players that “read” articles aloud for audiences. Artificial intelligence programs can also be used to generate leads or ideas, but everything should be overseen and checked by humans because the algorithms and programs can get things wrong. Still, media companies are integrating more and more AI.
According to an internal memo shared with Semafor, Fortune is ramping up its artificial intelligence usage. The company said it is bringing back a former editor, Nick Lichtenberg, to “test ways to use AI to deliver breaking news faster.” Semafor reported that the company has a new section called Fortune Intelligence, which is, “essentially, stories co-written with chatbots.” The internal memo repeatedly stressed that “human oversight is required at every stage before publication.” Fortune’s goal, according to its memo, is to become a “site of record” and a daily habit for readers. AI will also be used to generate graphics and even turn one of the company’s newsletters into a podcast. “We intend to surf this wave, not get pummeled by it,” Editor-in-Chief Alyson Shontell wrote about the growing use of AI in media.
Axios used to have a policy that required all content to “be written or produced by a real person with a real identity,” but the outlet is now relaxing that requirement. “That was written years ago and was unnecessarily limiting as we’ve learned more about what AI tools can and can’t do,” the new guidance explains.
The Associated Press is also incorporating technology tools to cover its stories. According to its website, “The Associated Press is at the forefront of leveraging artificial intelligence to shape the future of news. At AP, we’re exploring with AI to see how the technology might streamline news production and enhance editorial efficiency.” In addition, “AP looks for ways to carefully deploy artificial intelligence in areas where we can be more efficient and effective, including news gathering, the production process and how we distribute news to our customers.”
However, as Digital Content Next pointed out, “Instead of relying on human judgment, experience, or what’s best for the public, news decisions are now more focused on numbers, data and technology.” Furthermore:
“Editors are using tools like algorithms and performance stats to decide what news to publish rather than just their instincts or values. This raises concerns that news is becoming too uniform, that editors have less freedom, and that journalism may not serve democracy as well as it used to.”
Consider the recent Coldplay blowup. A man was caught stepping out on his wife, and social media propelled the incident to a top story. Can you see Cronkite or other reputable news anchors making this their top investigative piece? But the viral view counts caught the attention of media everywhere, and seemingly every outlet wrote a story on it.
Choosing what is and isn’t news is called gatekeeping, which views “news production as a series of choices, like gates, where people decide what becomes news,” Digital Content Next explained. “Digital tools and social media platforms now play a significant role in shaping those decisions.”
Let’s talk for a moment about trust. An October 2024 Gallup Poll found, “For the third consecutive year, more U.S. adults have no trust at all in the media (36%) than trust it a great deal or fair amount. Another 33% of Americans express ‘not very much’ confidence.” This doesn’t mean people trust artificial intelligence more, though, especially with the deepfake news reports and scams that flood the internet and social media.
When artificial intelligence can be used to impersonate reporters or show videos and images of events that didn’t really happen, it’s hard to know what’s real and what’s fake. In 2023, social media went into panic mode when a fabricated image appeared to show an explosion near the Pentagon. This caused a dip in the stock market until the fake was debunked. In 2022, a video circulated around the internet showing Ukrainian President Volodymyr Zelensky telling his soldiers to lay down their arms and surrender to Russia. Just before the New Hampshire presidential primary in 2024, some voters received robocalls with then-President Joe Biden’s voice saying, “Stay home and save your vote for the November election.”
Deepfakes like these, and many others, should serve as a cautionary tale for viewers and for media outlets anxious to incorporate artificial intelligence into their productions. While AI may streamline some work, nothing can replace solid investigative reporting and the thoughtful human judgment required of a reporter.