Screenwriters, authors and journalists are responding to the threat of AI

The rapid development of artificial intelligence poses new challenges for writers of all kinds, prompting screenwriters, authors and journalists to push back against the threat it poses.

Although generative AI tools are not yet advanced enough to fully replace humans, their unregulated use can negatively impact writers’ livelihoods.

Hollywood writers are on strike over AI-generated content

The Writers Guild of America (WGA) is engaged in a heated battle with Hollywood studios over the use of AI to generate content.

The writers felt they were not sharing in the studios’ profits, and AI directly threatened their pay and working conditions.

The WGA argued that AI should not be used to rewrite or generate literary material without compensating human authors. They wanted regulations to protect writers’ livelihoods as AI grows more advanced.

However, the studios resisted these demands, only offering to discuss technology changes at annual meetings.

Executives seemed content to let the strike, which began in May, drag on until union members could no longer pay their bills.

The WGGB drafts recommendations for AI developers

The Writers Guild of Great Britain (WGGB) conducted a poll in which 65% of respondents said increased use of AI would reduce their income, and 61% feared being replaced by it.

In response, the union released recommendations for AI developers. In addition to using an author’s work only with explicit permission, AI companies should maintain transparent training records, label AI content, give credit to authors, and establish independent AI regulation.

The guild argued that while AI is not yet mature enough to match human creativity, it still poses risks — such as fewer opportunities and reduced pay — that new regulations need to address.

The WGGB also pointed to an OpenAI study suggesting that, compared with other career paths, authors face the greatest risk from advancing AI technology.

Best-selling authors send open letters to AI companies

The Authors Guild has also raised concerns about AI systems like ChatGPT being trained on books without permission or compensation.

Over 9,000 authors — including best-sellers such as Dan Brown, James Patterson, Margaret Atwood, Suzanne Collins, and Michael Chabon — have signed an open letter to AI companies calling for authors to be fairly compensated for their contributions to AI training data.

In the letter, the guild argued that AI output trained on copyrighted works is derivative, and that royalties should be paid to the authors.

We know that many of the books used to develop AI systems come from notorious piracy websites. The recent Supreme Court decision in Warhol v. Goldsmith makes it clear not only that the highly commercial nature of your use militates against fair use, but also that no court would condone copying illegally obtained works as fair use.

The overall fear is that cheap AI-generated books could flood the market, making it harder for human authors to make a living.

The arrival of AI threatens to tip the scales and make it even more difficult, if not impossible, for writers – particularly young writers and voices from underrepresented communities – to make a living from their profession.

Journalists worried about AI ‘assistants’

Meanwhile, Google has offered AI tools to news organizations such as the New York Times and the Washington Post to help create draft news stories.

While this could save journalists time, it raises fears that it will undermine the craft of quality journalism.

There are also concerns that AI could spread misinformation if not carefully handled. News organizations want to explore these tools responsibly, but tensions remain over protecting the integrity of reporting.

Google issued a formal statement to address journalists’ concerns about its proposed AI technology.

The statement was not met with enthusiasm.

The disruptive potential of AI

The advent of AI raises complex questions about how to balance innovation and ethics.

While advances promise to transform the industry, concerns remain about fair compensation and training practices.

With their livelihoods at stake, writers across industries could struggle to adapt — even those whose job is to document AI’s rapid advances.

It is up to stakeholders on all sides to work together to find a way forward. There are opportunities to improve society if the human values of justice and personal responsibility remain integral to the development of AI.

By joining forces, understanding differing views, and mitigating risk, the author community hopes to maintain its integral role while shaping the future responsibly.

Featured image: KieferPix/Shutterstock