
Skepticism, legislation concerns are blocking AI adoption by news companies

Media companies are in a rush to embrace AI, but are often paralysed with indecision about how to adopt it. Here are two key questions to answer when contemplating how to use AI: Will your readership recoil if they learn you use AI? And will you risk the jobs, or worse, of your company's leadership?

by Rich Fairbairn
Published: 13:49, 02 December 2024

Last updated: 13:51, 02 December 2024

This article first appeared on the International News Media Association's (INMA) Content Solutions blog.

Pumping out AI content crammed with SEO keywords and burgled ideas is depressingly easy. And using it to game algorithms and make programmatic cash looks equally simple. So why aren't more publishers doing it?

I had this very conversation with a few AI start-ups and technologists recently, who are all aware of the capability to use AI to conjure content from nothing and use it to farm for clicks. But why, they wondered, are publishers so against using it? Isn't instant free content just what publishers want?

Image credit: Glenn Gabe

Setting aside the distaste for their idea of what AI content could be, I felt it was worth walking them through the challenges publishers face which are not shared by, say, social media firms. More than one insisted to me that the part of U.S. Section 230 absolving social media firms from liability for content published on their platforms was relevant to publishers in Europe, for example.

The things blocking publishers from using AI have nothing to do with lack of capability or understanding of the tech.

It's because of things at the core of what being a publisher is. And, arguably, what marks the line between being a serious media organisation and an opportunistic site letting AI loose on its audiences without being aware of the risks.

Reader reaction still the main barrier

In our meetings, publishers and media owners say a top concern preventing the use of AI for anything more than workflow efficiencies is readership perception: right now, AI content is seen as low quality and untrustworthy. And the first people to spot its weaknesses are those whose job it is to write content in the first place.

While the public may be optimistic about AI in areas such as healthcare and sciences, it has very low opinions of AI-generated content in journalism.

There are now a raft of studies on AI in journalism, the largest being Richard Fletcher's and Rasmus Kleis Nielsen's work for Reuters Institute/Oxford University, which surveyed more than 12,000 people in six countries on the question of AI in news. I thoroughly recommend studying it; the relevant takeaway for publishers is that AI in news is not trusted by readers - especially not to generate news. That opinion cuts right to the heart of audience trust.

Even if AI content were not prone to fiction, as long as readers think it is ick, why would any publisher unilaterally take up the mantle of challenging that view?

The tide has turned in search

Skepticism around AI content has risen further since publisher data specialists studied the turbulent journey of start-up sites using machine-generated content to game the search systems and gain meteoric visibility. 

SEO expert Glenn Gabe of G-Squared Interactive, presenting at the recent NESS News and SEO Summit for news media, explained his study of opportunistic sites which leaned on AI content: "Publishing tons of AI content without human involvement will work - until it doesn't," Gabe said.

Now Google has started enforcing its scaled content abuse policy and penalising sites stuffing pages with AI content. Sending AI content to the front-end is therefore seen as a major risk by publishers that have spent years building visibility and authority.

Ridicule vs. credibility

Media loves to read about itself, and media scandals outpace most others for being whispered about and shared by members of the sector. 

There have been multiple embarrassing moments for outlets appearing to lean on AI to write terrible articles or using fake headshots of non-existent authors. The public and the industry leapt on each one.

Titles like CNET and Sports Illustrated found out their AI content isn't a good look, and Google drew scorn and laughter for wildly inaccurate facts generated by its AI tools. It's one of the main reasons our own AI tools do not work without human oversight.

Personal and corporate liability

In a presentation I recently made to publishers in Spain, the opening slide was titled "AIs don't go to jail... humans do." This is a concept that had barely figured in the thinking of the AI firms we spoke with.

Tech platforms pay little attention to content liability. Rightly or wrongly, most believe they work under the umbrella of Section 230 of the U.S. Communications Decency Act and thus don't need to worry too much about what their machines generate - and they tend not to consider your liability for content at all.

This may change under proposed amendments to U.S. law, both to Section 230 with regard to "publishing" content and to the liability of AI firms for outcomes their systems enable. In the European Union it comes in the form of the EU AI Act, the first serious legislative package to bring AI decisions on content into play, and it is on the way in forthcoming U.K. legislation too.

For now, your off-the-shelf AI tools will happily generate libelous content, or content in contempt of court, in breach of laws in jurisdictions worldwide. And it is you or your colleagues who will be on the hook for it.

Funnily enough, publishers aren't too comfortable with that.

Intellectual property (IP) ownership and data transfer

Publishers quickly realised their content is an asset that AI tools should not use without reward. The nature of the reward is for publishers to decide by legal challenge or negotiation, but turning up with an AI toolset and telling publishers their content will be sent to random AI tools is already a hard-no scenario.

Aside from the transfer of their own assets, there are often vast troves of private and proprietary data within workflows which publishers may not have the right to send to an AI: customer-related content, supplied content like images or data, or other material provided under separately negotiated rights.

Any AI tool should default to being a closed model that does not pass any data back to the "mothership" for training. That's how we deploy it within our tools for publishers - with data being escrowed. Additionally, it can be turned off fully without hindering workflows.
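For illustration, here is a minimal Python sketch of what "closed by default, with a full off switch" can look like at configuration level. The names (AIConfig, summarise_with_ai) are hypothetical, not Glide's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIConfig:
    enabled: bool = True               # full kill switch: off means the AI step simply doesn't run
    share_for_training: bool = False   # closed by default: nothing goes back to the "mothership"

def summarise_with_ai(text: str, config: AIConfig) -> str | None:
    """Run the AI step only if enabled; return None so the workflow continues without it."""
    if not config.enabled:
        return None  # editorial workflow proceeds, minus the AI assist
    if config.share_for_training:
        raise ValueError("Deployment policy: content must stay escrowed")
    # ... call the escrowed, closed model here (placeholder) ...
    return f"Summary of {len(text)} characters of copy"

# Switching AI off entirely does not break the pipeline:
print(summarise_with_ai("Article body...", AIConfig(enabled=False)))  # -> None
```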

Private large language models (LLMs) are being explored by a rapidly growing segment of media and publishing companies: in the short term as a solution to IP leakage and transfer concerns, and in the longer term to prepare for a time when ownership of their own AI becomes more common and a potential revenue stream in its own right.

Fear of being trapped by one provider

The dominant AI supplier behind many off-the-shelf connections is OpenAI, which for now is far ahead in adoption compared to any other single LLM provider.

While this can create its own problems (OpenAI is hardly the most popular figure in the world of publishing), even setting the brand aside, the scenario is ringing alarm bells among technologists who do not want to repeat the "Google experience" - one tech giant holding all the reins. This is at the core of why we offer LLM choice: 21 at present.

Not all AIs are going to be here in a year or two, so building in flexibility is required.
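For illustration, a rough sketch of what that flexibility can look like in code, assuming a simple adapter pattern in Python. The class and function names are hypothetical, not how Glide or any particular CMS actually implements model choice.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """One interface for every model, so no single supplier holds the reins."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the OpenAI API here")

class PrivateLLMProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call a self-hosted model here")

def write_standfirst(article: str, provider: LLMProvider) -> str:
    # Editorial tooling depends on the interface, never on a vendor,
    # so swapping providers is a configuration change, not a rewrite.
    return provider.complete(f"Write a one-sentence standfirst for:\n{article}")
```

If a provider disappears in a year or two, only its adapter is thrown away; everything built on the interface keeps working.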

Legislation

Up until now, the barrier talked about the least - but racing up the priority list, and perhaps destined to become the biggest barrier of all - is the requirement for AI companies, and those using the tech, to adhere to new and fast-evolving legislation.

Our advice is to prepare in a manner that treats AI regulation as being as impactful to your business as the General Data Protection Regulation (GDPR). You need to understand what your obligations and exposure are, and be prepared to amend your workflows and technology usage. You will need to look at everything from your content deals to employment contracts.

The primary rule set you can refer to already is the EU AI Act, which comes into force over the next 21 months and will heap new requirements on firms using AI. The United Kingdom is also considering new AI rules, as are U.S. federal legislators - and there may be state-specific U.S. regulations too.

The top concern for publishers is the suggestion they will need to signal to readers any instance of AI content, or any use of AI in content creation. That could have a huge technology impact if it becomes the standard.

The key word is "could." Why don't we know definitively? Because the text of the Act still has a way to go in defining what is classified as a qualifying AI system, where the lines will be drawn, and what will be amended between now and enforcement of the Act in mid-2026.

One crucial paragraph, concerning "transparency obligations for providers and deployers of certain AI systems," says: "Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video, or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

Also noteworthy in the text: "Deployers of an AI system that generates or manipulates image, audio, or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated." 
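What "marked in a machine-readable format" will mean in practice is not yet settled, but for illustration, here is a minimal Python sketch of attaching a provenance record to generated content. The field names are hypothetical; real implementations may well converge on provenance standards such as C2PA.

```python
import json
from datetime import datetime, timezone

def label_ai_output(body: str, model_id: str) -> str:
    """Attach a machine-readable provenance record to generated content."""
    record = {
        "body": body,
        "provenance": {
            "ai_generated": True,  # detectable as artificially generated
            "model": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)

print(label_ai_output("(generated standfirst)", "private-llm-v1"))
```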

There are carve-outs for AI tools used in the editing of content versus its creation, but much of the Act's text is still quite woolly and presumes industry-specific codes of conduct will do the heavy lifting of detailing the exact interpretation of the Act at implementation level.

However you look at it, and going by how GDPR panned out, one should assume "protection of the individual" will trump all. That would mean an obligation to be able to tell readers whether or not they are reading AI content.