AI in Publishing: Trust Issues Writers Need to Be Aware Of

The publishing industry is experiencing another technological disruption with the rise of generative AI (GenAI). While GenAI tools can help authors in several ways, they also introduce new legal and ethical concerns. 

Authors are deeply divided over whether AI should be used in writing. According to a recent BookBub survey of more than 1,200 authors, 48% do not use AI, nor do they plan to.

If writers (and editors!) are going to use GenAI, what do we need to know about it, what should we be concerned about, and how can we use it ethically? For answers, I attended the recent webinar Legal and Ethical Implications of AI in Editing, hosted by ACES and featuring experts in publishing law, privacy, and copyright. They shared what writers and editors need to know about AI.

Patricia Loo, rights and permissions officer at the International Monetary Fund, kicked off the webinar by reminding us that publishing is built on trust between authors, publishers, and readers. AI can create hallucinations, “tortured phrases,” and false information, all of which undermine that trust, she said. Authors using AI in their work need human oversight to catch these errors.

When AI Breaks Publishing’s Trust

We can see how breaking that trust harms everyone in the recent story about an AI-generated summer reading list. The newspapers that purchased the Heat Index section assumed that the section provider, King Features, was providing quality content: content that had been professionally written and professionally edited. King Features put all the responsibility on overworked, underpaid writer Marco Buscaglia, who, yes, broke the company’s AI policy.

Buscaglia admitted his error and lost his contract with King Features. But what about King Features? It has yet to address the fact that it performs no quality control on the content it hires freelancers to produce. Both parties broke trust with the newspaper publishers and their readers, not just Buscaglia.

We need to distinguish between using AI as a writing assistant and using it to generate complete content, said Loo, and I completely agree. As writers, we need to think carefully about our process and the tasks we use AI for. 

According to the BookBub survey, 81% of authors who use AI employ it for conducting research, making this the most common application.

But we can’t take humans out of the process. We need to check AI’s work at every step.

And we need our clients and employers to support us in this adaptation. They can’t be surprised when they ask for the impossible and someone needing a paycheck finds a way to provide it by bending or breaking the rules. 

Wouldn’t the more ethical and trust-building approach be for organizations to admit that what they want isn’t doable by one person, and then to provide that person with the right tools, and the skills to use them, to get the work done?

The Real Cost of AI Gone Wrong

The breakdown in trust goes even further. By now it’s common knowledge that AI training datasets often use copyrighted content without explicit permission. Companies are scraping public records, digital communications, and website content, noted Jasmine McNealy, professor in the Department of Media Production, Management, and Technology at the University of Florida, who explained how little legal protection currently exists, especially in the United States.

Publishers and authors are challenging the unauthorized use of their copyrighted materials in courts. The list of lawsuits OpenAI is defending against is long and still growing. The Thomson Reuters v. Ross Intelligence decision offers a bit of hope. In February 2025, a judge ruled that Ross Intelligence had infringed on the copyright of Thomson Reuters, owner of Westlaw.

Yet even successful challenges may only stop future use of such materials. It’s not clear that content can be removed from existing training datasets or that an AI can be made to “forget” what it learned from its training.

In the BookBub survey, 84% of the authors who avoid AI said they do so for ethical reasons, particularly concern about copyrighted materials being used to train the AI.

Loo recommended reviewing the usage terms of any AI tool you use. Can the tool use your inputs for training? 

In addition, check any contracts or guidelines your writing falls under. Does your usage of AI follow these documents? 

Also consider whether your use of AI-generated text falls under fair use. Right now, in the United States at least, AI-generated works can’t be copyrighted. McNealy noted that while some state regulations help protect copyright holders, a bill currently making its way through the House and Senate may prevent those laws from being enforced and future laws from being created. There are currently no protections in federal law.

Keep in mind that the law is always slow to follow technological advances, with both proponents and opponents trying to influence new laws that benefit their position. Keep an eye on your governing jurisdictions and keep good records of your AI use. 
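
What should those records look like? There’s no standard format, and the fields below are my own suggestion rather than the panelists’, but a simple log entry might capture something like:

  Date: 2025-06-12
  Tool and version: (name the tool and model)
  Task: brainstorming alternate chapter titles
  Inputs: my own chapter outline (no third-party text)
  Output used: two of ten suggested titles, both reworded
  Disclosure: noted in my delivery memo to the publisher

A record like this gives you evidence of how, and how carefully, you used AI if a contract question ever comes up.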

The Disclosure Gap: Most AI Users Stay Silent

Using AI in your writing is a controversial move, and if the BookBub survey is anything to go by, about as many of us are using it as are avoiding it. Of those who do use it, 74% don’t tell their readers. As we saw with the Heat Index content, when readers find out, they feel betrayed and angry.

Rebuilding Trust: Expert Recommendations

The ACES webinar panelists collectively offered these tips to increase trust with both publishers and readers and use AI more ethically:

  • Be transparent about AI use in your writing process.
  • Always review, fact-check, and edit AI-generated content.
  • Understand the terms of service for AI tools you use.
  • Include AI-related clauses in your publishing contracts.
  • Keep up with evolving laws and industry standards.
  • Look for closed-loop AI systems that may offer better privacy protection.
  • Realize that even with disclaimers or notices prohibiting AI training, your content may still be scraped (see the robots.txt sketch below).
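
On that last point: for authors who publish on their own websites, the most common machine-readable notice is a robots.txt file asking AI crawlers to stay away. As a minimal sketch (GPTBot and CCBot are the publicly documented user agents of OpenAI’s and Common Crawl’s crawlers; a real file would list every crawler you want to block), it might look like this:

  User-agent: GPTBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

Compliance is voluntary, though, which is exactly the panelists’ warning: a crawler that ignores robots.txt can scrape your content anyway.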

Publishing is built on trust, and as we’ve seen, breaking that trust helps no one. You can choose not to use GenAI anywhere in your writing process and avoid breaking trust with your publishers and readers. It’s a popular choice, and one that’s ethically sound. Writing Resource writer Sean Brenner has chosen that route.

But you may be interested in finding ethical ways to use it. Although it’s very much in flux, AI is not going away. And BookBub’s survey suggests that you’re not alone in wanting to explore how GenAI can help you keep pace.

The key is thinking through the issues that come with using it and how you’ll address them. Review your contracts for any AI-related provisions and consider adding an AI clause to the contracts you issue yourself. Establish your own ethical guidelines for AI use before you need them, and stay current with how GenAI is affecting publishing.

Good luck!

I used GenAI to help me organize my research notes, identify further resources, and develop my argument.
