
Using Chatbots in Your Writing? Don’t Skip Human Editing!

Many small and midsize businesses face a familiar dilemma: a small staff juggles multiple tasks, including writing. With so many responsibilities, staff members find it nearly impossible to carve out the time needed to write professional-level copy. And there’s no budget to hire a dedicated writer.

Chatbots are giving them a new way to solve the problem.

Chatbots are a type of generative AI trained on enormous amounts of data to give them advanced natural language capabilities. Combined with their low cost (they’re certainly cheaper than an employee), chatbots give resource-strapped companies access to desperately needed writing support. An overwhelmed staffer can use a chatbot to draft marketing copy, reports, customer service scripts—just about anything on their writing task list.

Chatbots also save a significant amount of time, doing in just seconds what a human would need hours for. Chatbots like Anthropic’s Claude and OpenAI’s ChatGPT can generate outlines and rough drafts at the click of a button. 

But can they produce professional-level writing? Can you just give a chatbot a few simple directions and get copy that’s ready to go?

It’s not quite that easy. And you still may need to hire a professional to help. Let’s take a look.

Chatbot Writing Isn’t Polished

Chatbot writing is far from polished. Without guidance, it can sound awkward or overly formal. That formality might work fine for impersonal text, though. For example, I asked ChatGPT the following:

Write a paragraph for a policy that describes how to create a workplace free of bullying, harassment, victimisation and discrimination. Include staff responsibilities and manager respsonibilities.

The misspelling of responsibilities was an accident, but it didn’t bother ChatGPT. I don’t remember why I used British English spellings, but note that it retained those spellings. Here’s what it gave me:

Creating a workplace that’s safe, respectful, and free from bullying, harassment, victimisation, and discrimination is a top priority for us. As a staff member, your responsibility is to treat your colleagues with respect, kindness, and consideration, regardless of their background, identity, or role. This means refraining from any form of disrespectful or offensive behavior and reporting any concerns promptly. Our managers, on the other hand, play a vital role in upholding this commitment. They’re responsible for setting a positive example, promptly addressing any reported incidents, and ensuring that appropriate actions are taken to prevent further occurrences. We believe that by working together, we can create an inclusive and supportive environment where every team member feels valued and empowered to contribute their best.

Overall, it’s not bad. If this were a real project, I would tweak it by putting the first sentence into active voice (We prioritise creating a workplace that’s safe…) and fixing the modifier problem in the second sentence (your responsibility is not a staff member; you are). But for the most part, it’s usable.

When the writing style needs to be less formal, the chatbots struggle a bit. I asked Claude:

Provide a real-life case study of a midsize company working with an AI solution provider.

It provided me with a case study of about 400 words. Here’s a sample of its response: [the AI provider] developed a suite of custom AI algorithms and models tailored to Vantage Apparel’s supply chain needs. These solutions included:

  1. Demand forecasting models that analyzed historical sales data, market trends, and customer behavior to generate accurate demand predictions.
  2. Inventory optimization models that determined optimal stock levels based on predicted demand, lead times, and service level targets.
  3. Logistics optimization models that identified the most efficient routes and shipping methods to ensure timely delivery to customers.

Again, it’s not terrible. But it’s not very engaging either. Giving Claude a more descriptive prompt or giving it subsequent prompts to revise the text could help. I followed up with this:

Revise the following text to be more engaging for a blog aimed at supply chain managers at midsize companies. The blog is published by a supply chain consulting firm whose primary clients are midsize companies. The audience is hesitant about introducing AI into its workflow.

It gave me 330 words that start like this:

First up, they’ve got demand forecasting models that are like having a crystal ball for your sales. These models analyze your historical sales data, keep a pulse on market trends, and even take into account customer behavior to generate scarily accurate demand predictions. No more guessing games – you’ll know exactly what’s coming your way.

I asked for another revision of “no more than 100 words.” It gave me:

Attention supply chain managers: drowning in data? [The AI provider]’s custom AI solutions are your lifeline. Their demand forecasting models predict sales with eerie accuracy, inventory optimization models keep stock levels in check, and logistics optimization models ensure on-time deliveries. AI may seem intimidating, but [the AI provider] makes integration seamless. Embrace the power of AI and watch your supply chain soar. Improved forecasting, reduced costs, and happy customers await. Don’t let fear hold you back – let [the AI provider] guide you to success. Your bottom line will thank you. Take the plunge and revolutionize your supply chain today.

It sounds like marketing-ese to my ear: loaded with clichés and jargon, with a staccato rhythm too heavy to sound natural and engaging. By feeding it more information, particularly about brand voice, I might get closer to something that fits what I’m looking for. I could ask it to vary sentence structure, avoid certain words or clichés, or use a specific rhythm. Or I could do the editing myself.

The work could be worth doing if I thought the general ideas or structure were worth building on. Or it might inspire me to go in a different direction. Either way, it’s quickly given me a starting point. And it’s perhaps on par with work an intern or very junior writer might hand in as a draft.

These examples present another problem, though: hallucinations and incomplete information.

Chatbots Sometimes Make Facts Up

In the Claude exercise, I asked for “a real-life case study.” Were [the AI provider] and Vantage Apparel real? I asked Claude if Vantage Apparel was real, and it told me that it wasn’t and that it had made up a case study for the prompt. Except that, yes, Vantage Apparel is real, though I didn’t find anything online suggesting it worked with [the AI provider].

Could Claude get its facts straight, please? Making a broad request for a specific answer leaves the chatbot flailing to deliver. I could feed the data needed for a case study and ask a chatbot to write up the case study to get a factual draft I can work with. I’d still need to be sure it hasn’t improvised, though. 

Look back at the ChatGPT example. I didn’t tell it what the policy was, only what the intended goal of the policy was. ChatGPT responded with its best guess of how to get there. I hope we can all agree that treating colleagues with respect, kindness, and consideration will help create a healthy workplace. Reporting concerns to stop disrespectful or offensive behavior also seems standard. But these are generalizations and the policy needs a lot more detail: How should managers deal with any reports? Is the manager the right person to report issues to? Maybe the company already has a reporting process through HR. What actions should someone take to prevent further occurrences? 

I had asked a general question and limited the length of the response (“a paragraph”). I received an answer appropriate to the question I asked: general and short. If I wanted ChatGPT to write a real policy, I would need to add more information to get a more detailed response. And I would still have to read that response carefully to identify what might not work for the company.

Chatbot as Assistant, Not Writer

The results from generative AI chatbots are often rough around the edges. The generated text may contain factual errors, stray from the original prompt, or lack cohesion and flow. Chatbots may make up information in order to fulfill the request or fail to capture the intended tone and style. While chatbots can mimic human writing, it takes a human being to refine the text into a professional finished piece.

On the other hand, chatbots offer the potential to expedite a writing project by doing some of the work for us. Our prompts have to be detailed, and we need to be willing to revise both the prompts and the results.

Chatbots are best used in the early phases of writing. Consider using them for brainstorming topics, creating outlines, or writing rough drafts. The goal should be to give yourself a head start while still planning to invest time reworking the AI’s suggestions.

Always review AI-generated text carefully before declaring a draft complete. Ask yourself:

  • Do the tone and style match my brand?
  • Did the AI follow my original directions? 
  • Are there any factual inaccuracies or false claims?
  • Is the writing smooth and cohesive from start to finish?

You may not need a professional writer to get you a rough draft, but you’ll need to think like a writer to get the results you want. And if hiring a professional writer is out of the question, consider hiring an editor to work more deeply on the chatbot’s draft. An editor can review AI drafts with a critical eye and identify areas that need improvement, a skill that’s especially valuable when AI has been involved in the process. This way, you benefit from the computer’s raw productivity and an editor’s critical judgment and corrections.

Interested in optimizing AI writing tools for your business content needs? Let’s talk! Book a consultation today to explore how human–AI collaboration can boost your content production. With the right strategy, AI can enhance your writing rather than replace it. Let’s find the best approach for your company.

