Far from breaking new ground with innovative AI tools, the mighty New York Times is showing that caution still wins the day when it comes to AI-generated content.
Eyebrows shot to chandeliers recently after it was revealed that the mighty New York Times is going “all-in on AI” for its newsroom and journalists.
With the NYT being arguably the most prominent litigant against AI firms for copyright theft, commenters highlighted the dilemma it faces in embracing AI tools.
Really though, regardless of the court action, the challenge it faces in slipping AI into workflows is much the same as that faced by any media outlet grappling with the question of how to use AI safely. Industry standards and reader perceptions still take precedence in most cases, and the NYT example is no different.
As the most-read and (according to some surveys) most trusted newspaper in the world, the NYT might have been expected to launch into AI with a grand flourish of ground-breaking innovations.
In fact, quite the opposite: caution wins the day, and according to reports, the acceptable use of AI in the newsroom is remarkably low-stakes.
Humans in control
Semafor reported that guidelines for NYT editorial staff limit experimentation with approved AI tools to tasks such as headline options, article summaries, research, suggested interview questions, compiling quizzes, writing social copy, and FAQs, and include example prompts for those uses.
Reporters of course still bear the burden of responsibility for anything published, and are told not to use AI “to draft or significantly revise an article, input third party copyrighted materials (particularly confidential source information), to circumvent a paywall, or publish machine-generated images or videos, except to demonstrate the technology and with proper labeling”.
Like their counterparts at many other titles, including many we work with, NYT chiefs have looked at the assortment of available AI tools and alighted on a similar level of use, with caution winning over boldness during content production.
In the NYT newsroom, guardrails are especially highlighted for the content creation and information assimilation phases, with a ban on the use of some AI tools for certain tasks - such as the upload of confidential documents supplied by a whistleblower - on the basis that the newspaper might no longer be able to guarantee the protection of a confidential source, reports say.
As with countless other publications, sending AI content direct to readers without human oversight is an absolute no for the NYT for the time being, clearly influenced by the high trust readers place in its content and the belief that AI content is still not as trustworthy as that created by people.
Perfectly timed to shed light on this was a new survey by Australian policy and research group APO into public perceptions of generative AI in media and journalism. It’s a good read, and it echoes similar surveys in showing that public mood towards AI news isn't changing much: audiences remain very circumspect about AI-generated journalism and the use of AI in news.
A theme has emerged
Equally well-timed, just a few days before news of the NYT’s AI adoption, an AWS event for publishers in New York, attended by Glide and a host of other media and publishing luminaries, heard senior industry figures cite trust in AI as still the prime concern behind their cautious uptake of the technology.
An entire panel was dedicated to the question of maintaining quality, trust, and institutional authenticity amid the growth of AI content, and it reinforced the view that publishers are now focused much more on predicting and managing the negative or positive domino effects the technology can bring than on simply using it for workflow tasks.
Figures from organisations as varied as specialist research database publishers, publishing standards authorities, news and book publishers, and specialist publishing law all weighed in on the issue of trust in media in an AI age.
Todd Carpenter, Executive Director of standards authority NISO (National Information Standards Organization), explained that attribution and sourcing are critical to user belief that systems and content can be trusted, while Manu Singh from News Corp described how journalists using the technology are among the first to spot where it goes awry, and end up becoming the firebreak which prevents AI errors from going any further.
Andrew Jones of education and research publisher Wiley described how tracking and audit trails for AI content have become super important - not just for facts to remain trustworthy, but for legal protection in the future - while publishing law specialist Ed Klaris spoke about the perils of trying to assert copyright on AI-produced work, as well as suggesting the use of anti-piracy software to double-check that your AI-generated work is not simply an inadvertent copy of someone else’s originals.
While the New York Times is a benchmark, it is telling that the title is not heading into uncharted waters for its AI play. I don’t think this has anything at all to do with its court cases against OpenAI and Microsoft, or that its approach will markedly change based on any judgement in those cases.
Reader trust still plays a huge part in the company’s DNA, and for now it is evident that whatever ambition there is within the title to leverage AI, it will not do so at the risk of its audience’s regard for its content.
This article was written for INMA, and can be read at its site here.
READ MORE ABOUT AI AND GLIDE PUBLISHING PLATFORM
To learn more about Glide Publishing Platform and GAIA, request to speak to a Glide product specialist.
Read more about Amazon Bedrock here.