AI poses new threats to newsrooms, and they’re taking action


  • The New York Times and NBC News are among the media companies that have started preliminary discussions about potential protections against generative AI systems.
  • Digital Content Next, the digital media trade organization, this week released seven principles for generative AI development and governance to help guide the discussion.
  • “It’s the beginning of what’s going to be hellfire,” Axios CEO Jim VandeHei said in an interview.

People walk past the New York Times building in New York City.

Andrew Burton | Getty Images

Newsroom leaders are bracing for chaos as they consider guardrails to protect their content from AI-driven aggregation and disinformation.

The New York Times and NBC News are among the organizations holding preliminary talks with other media companies, major tech platforms and Digital Content Next, the industry’s digital news trade organization, to develop rules governing how their content can be used by natural language AI tools, according to people familiar with the matter.

The latest generative AI tools can create seemingly new blocks of text or images in response to complex prompts, such as “Write an earnings report in the style of the poet Robert Frost” or “Draw a picture of the iPhone as rendered by Vincent van Gogh.”

Some of these generative AI programs, such as OpenAI’s ChatGPT and Google’s Bard, are trained on vast amounts of publicly available information from the internet, including journalism and copyrighted art. In some cases, the generated material is taken almost verbatim from these sources.

Publishers fear these programs could undermine their business models by republishing their content without credit and by creating an explosion of inaccurate or misleading material that diminishes trust in online news.

Digital Content Next, which represents more than 50 of the largest U.S. media organizations, including The Washington Post and The Wall Street Journal parent News Corp., this week released seven principles for “Generative AI Development and Governance.” They address issues of security, compensation for intellectual property, transparency, accountability, and fairness.

The principles are meant to be an avenue for future discussion rather than rules that bind the industry. They include: “Publishers have the right to negotiate and receive fair compensation for the use of their intellectual property” and “Deployers of GAI systems should be held accountable for the outputs of the system.” Digital Content Next shared the principles with its board of directors and relevant committees on Monday.

News outlets grapple with artificial intelligence

Digital Content Next’s “Principles for Generative AI Development and Governance”:

  1. GAI developers and deployers must respect creators’ rights to their content.
  2. Publishers have the right to negotiate and receive fair compensation for the use of their intellectual property.
  3. Copyright laws protect content creators from the unlicensed use of their content.
  4. GAI systems should be transparent to publishers and users.
  5. Deployers of GAI systems should be held accountable for the outputs of the system.
  6. GAI systems should not create, or risk creating, unfair market or competitive outcomes.
  7. GAI systems should be secure and address privacy risks.

The urgency behind creating a system of rules and standards for generative AI is intense, said Jason Kint, CEO of Digital Content Next.

“I’ve never seen anything go from an emerging issue to dominating so many workflows in my time as CEO,” said Kint, who has led Digital Content Next since 2014. “We’ve had 15 meetings since February.”

How generative AI will develop in the coming months and years is dominating the conversation in the media, said Jim VandeHei, CEO of Axios.

“Four months ago, I wasn’t thinking or talking about AI. Now, that’s all we talk about,” VandeHei said. “If you own a business and AI isn’t something you’re obsessed with, you’re crazy.”

Generative AI presents both potential efficiencies and threats to the news business. The technology can create new content, such as games, travel lists and recipes, that benefits consumers and helps reduce costs.

But the media industry is equally concerned about the threats AI poses. Digital media companies have seen their business models falter in recent years as social media and search companies, primarily Google and Facebook, have reaped the benefits of digital advertising. Vice filed for bankruptcy last month, and shares of news site BuzzFeed traded below $1 for more than 30 days, prompting a delisting notice from the Nasdaq stock market.

Against this backdrop, media leaders like IAC President Barry Diller and News Corp. CEO Robert Thomson are pushing Big Tech companies to pay for any content they use to train AI models.

“I am still amazed that so many media companies, some of them now fatally holed below the waterline, have been reluctant to advocate for their journalism or for reform of an obviously dysfunctional digital advertising market,” Thomson said during his opening remarks at the International News Media Association’s World Congress of News Media in New York on May 25.

During an April Semafor conference in New York, Diller said the news industry must unite to demand payment or threaten to sue under copyright law, sooner rather than later.

“What you have to do is get the industry to say you can’t scrape our content until you come up with systems where the publisher gets an avenue for payment,” Diller said. “If you really take those [AI] systems and you don’t connect them to a process where there is a way to be compensated, all will be lost.”

Beyond the business concerns, the most pressing AI issue for news organizations is alerting users to what’s real and what isn’t.

“Broadly speaking, we’re optimistic about this as a technology for us, with the big caveat that the technology poses huge risks to journalism when it comes to verifying the authenticity of content,” said Chris Berend, head of digital at NBC News Group, who added that he expects AI to work alongside humans in the newsroom rather than replace them.

There are already signs of AI’s potential to spread disinformation. Last month, a verified Twitter account called “Bloomberg Feed” tweeted a fake photograph of an explosion at the Pentagon, outside Washington, D.C. Although the photo was quickly debunked as fake, it led to a brief dip in stock prices. More advanced fakes could create even more confusion and cause unnecessary panic. They could also damage brands. “Bloomberg Feed” had nothing to do with the media company Bloomberg LP.

“It’s the beginning of what’s going to be hellfire,” VandeHei said. “This country is going to see mass proliferation of mass garbage. Is it real or isn’t it real? Add that to a society that is already thinking about what is real or not real.”

The U.S. government could regulate Big Tech’s development of artificial intelligence, but the pace of regulation will likely lag behind how quickly the technology is adopted, VandeHei said.


Tech companies and newsrooms are working to combat potentially destructive uses of AI, such as a recent fabricated photo of Pope Francis wearing a large puffer jacket. Google said last month that it will encode information into images so users can tell whether they were made with artificial intelligence.

Disney’s ABC News “already has a team that works around the clock, checking online videos for veracity,” said Chris Looft, lead producer, visual verification, at ABC News.

“Even with AI tools or AI models that work in text, like ChatGPT, it doesn’t change the fact that we’re already doing this work,” Looft said. “The process remains the same: combining reporting with visual techniques to confirm the veracity of the video. That means picking up the phone and talking to eyewitnesses or analyzing metadata.”

Ironically, one of the first uses of AI to take over human work in the newsroom may be the fight against AI itself. NBC News’ Berend predicts an “AI that controls artificial intelligence” arms race in the coming years, as both media and tech companies invest in software that can correctly sort and label what is true and what is false.

“Combating disinformation is a matter of computing power,” Berend said. “One of the central challenges when it comes to content verification is technological. It’s such a huge challenge that it needs to be addressed through partnership.”

The confluence of powerful, rapidly evolving technology, input from dozens of major companies and U.S. government regulation has led some media executives to privately acknowledge that the coming months could be very messy. The hope is that the industry’s digital maturity today will help it find solutions faster than in the early days of the internet.

Disclosure: NBCUniversal is the parent company of NBC News Group, which includes both NBC News and CNBC.

WATCH: We need to regulate generative AI
