If there is rising panic in the offices of AI companies, it's not surprising. The lawmakers are coming for them, and the people who make the content they need are closing their doors.
Settle down... it's a long one. My editor is away. But it's been an interesting week for anyone in content and AI, so think of it as an Olympian double issue for your holiday flights!
It's hard to believe that Daniel Day-Lewis's bravura "I drink your milk shake!" speech in There Will Be Blood has not been parodied by smug AI techbros in recent years.
In it, Day-Lewis's slithering oil baron Daniel Plainview harangues a landowner, convinced there is a fortune in oil under his homestead, for having the temerity to offer Plainview a deal to share the drilling rights.
Increasingly rageful, Plainview berates the landowner with a metaphor: long connected straws slurping a milkshake from across a table, just as his rigs had already sent their drills sideways from adjacent land to suck up the oil from under the landowner's feet. The oil was already taken. No deal was necessary.
"I drink your milk shake!" could be the t-shirt slogan for any number of AI firms whose models have been built on the back of stolen content.
Well, they might just need a stomach pump and a fresh straw.
Lawmakers get their teeth back
While AI bosses have been building their overnight empires and essentially claiming there's no such thing as copyright, lawmakers have been crafting responses, to the dismay of anyone in AI.
Yesterday, July 31, saw two notable events in the US of which AI makers and content makers alike should be aware, for multiple reasons.
First up was the publication by the US Copyright Office of the report 'Copyright and Artificial Intelligence, Part 1: Digital Replicas' - the first in a series of planned reports into AI and copyright.
Part 1 focuses on AI which can copy human voices or appearances, and shows the USCO is not messing around. It urgently recommends laws to protect individuals (if not consumers) against AI-generated fakery, demanding that people retain a lifetime, unassignable right to takedowns, removals, compensation, and damages against unauthorised replicas of their voice or person, plus licensing rights.
Subsequent USCO reports will address copyrightability of AI works, the use of copyrighted work to train AI, content licensing, and the liability trail of works created by AI - all compelling reads for anyone in the business of content.
On the same day - surely no coincidence - Congress rolled out the NO FAKES Act (yes really... the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024) which explicitly targets deepfakes and voice clones and has left some AI firms all shook up.
It would hold AI firms and companies "liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording" which never actually happened, including anything created by generative AI.
NO FAKES builds on the Tennessee ELVIS Act protecting musicians, passed in March. It's starting to look like it's now or never for AI firms who want to play nice with content creators. My advice to them? Don't be cruel.

Caveats apply to all of the above, of course, for satire and comment: the First Amendment hasn't gone away.
The EU weighs in
More laws: today, August 1, is day one of the new EU AI Act, the first AI law with global impact, which seeks to bring transparency and control to the risks an AI might present to EU citizens, regardless of where the AI is deployed or created.
It forces AI firms to start being open about things like what the AI could be used for, where and when it has been used, and - most relevant for content creators - how it has been trained.
It's the first law that tells AI companies to open the books on their training data, and - MASSIVELY OF NOTE TO PUBLISHERS AND CREATORS - is also the first to demand that any AI-created/manipulated content shown to audiences is highlighted as such, so it's a law that applies to media firms too.
(There is an exclusion for standard assistive editing tools, such as what Glide's GAIA text tools do, but essentially start prepping to tell audiences what is AI-made and to flag it in the content. You have between one and two years to become compliant, depending on how beefy your AI tools are.)
Combined, the new and proposed rules on both sides of the Atlantic start to put penalties and obligations in place for anyone creating and using AIs, and are certainly the first in a steady stream of regs due over the next few years.
Content owners get their teeth back
Meanwhile, AI badboys Perplexity.ai - robots.txt ignorers and content scrapers - have tried to quell the ire of publishers enraged at having their content pinched with the most soothing balm of all: money.
They have made a brassy offer to share money with anyone whose content they monetise, whether there is a formal content-sharing deal in place or not, with other sweeteners such as access to analytics data and APIs, and baubles such as free use of Perplexity for their staff and even reader offers.
Of course, when I say they are badboys for ignoring robots.txt and scraping content, all AIs seem to have done that. So, in that light, Perplexity are now the very good boys. No idea if they will be blown out of the water by lawsuits but specifics aside, cash for content, yes please, that's how you win publishers over.
And what prompted this largesse? It might have been legal threats from the likes of Forbes, but it's probably the growing panic that AI models are being progressively closed off from quality data.
A great study by the Data Provenance Initiative, "Consent in Crisis: The Rapid Decline of the AI Data Commons", has tracked the meteoric rise of site owners blocking AI scrapers, showing that in the last year something like 25% of the web's top-quality data has been walled off from AI bots - the stuff critical to stop models going stale, or even worse, collapsing entirely.
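Much of the blocking the study measures is done in a site's robots.txt file. As a minimal sketch, a publisher's rules might look like the following - the user-agent strings are the publicly documented ones for OpenAI, Common Crawl, Perplexity, and Google's AI training crawler, but check each vendor's current documentation before relying on them, and remember robots.txt is a polite request, not an enforcement mechanism:

```text
# Illustrative robots.txt blocking common AI training/scraping crawlers
# (not exhaustive; user-agent names change, so verify against vendor docs)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everything else, including ordinary search crawlers, stays allowed
User-agent: *
Allow: /
```

Note that Google-Extended is a control token rather than a separate crawler: it doesn't change how Googlebot indexes your pages for search, only whether your content feeds Google's AI training.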
And, with perfect timing, a new study into model collapse has landed too.
How can AI bounce back from such varied threats?
Well, at the end of the milk shake scene, Day-Lewis's Plainview, err... beats the land owner to death with a bowling pin. I am sure AI bosses do not see themselves that way, but they are not without weapons.
For example, the (very creepily named) Chamber of Progress, "technology's progressive future", which sounds instead like an LLM-generated Harry Potter knock-off.
The Chamber claims to be a body for nice tech people to do good things, and I am sure it is generally trying to achieve exactly that. However, it is hard not to regard its lobbying to essentially remove any copyright protection as the bowling pin AI firms have been looking for.
AI firms know that good content is their business advantage, and will do anything to keep getting it. While their days of naked theft are numbered, they are looking to legislators, lobbyists, or licensing to keep the advantages they already have.
Grab yours while you can.
How does Glide Publishing Platform work for you?
No matter where you are on your CMS journey, we're here to help. Want more info or to see Glide Publishing Platform in action? We got you.
Book a demo