
Humanizing AI-Generated Content: A Guide for Content Teams

March 1, 2023
4 min read

Artificial intelligence (AI) has changed how we create and consume content.

While most people equate generative AI with ChatGPT, the technology can create everything from text to images and even music. Exciting as that may be, there’s some anxiety that using generative AI without human intervention could lead to unintended consequences.

To explore this concern, let’s look at how to make the output from generative AI more human.

How Generative AI Works

The best way to use any tool is to first understand how it works. We’ll use ChatGPT as our example, since the same concepts extend to any generative AI application, whether it’s text, image, or audio.

Stephen Wolfram wrote an excellent article on how ChatGPT and other large language models (LLMs) work. In the simplest of terms, he explains, it’s “just adding one word at a time.” It can do that convincingly, having “learned” patterns and relationships between words and phrases through training on massive amounts of text data.

Obviously there’s far more to the story, which you can read for yourself. But it’ll suffice for our purpose. After all, you don’t have to know how to build a car to drive one.

So, analyzing those billions of pages helps the model determine what’s likely to come next, given the text it already has. It’s quick and efficient. But it’s not perfect.
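To make “adding one word at a time” concrete, here’s a toy sketch in Python. It’s a simple bigram model, nothing like the scale or architecture behind ChatGPT, but the generation loop illustrates the same idea: given the text so far, pick a statistically likely next word and repeat.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for the "massive amounts of text data"
# an LLM trains on.
corpus = (
    "content builds trust . great content builds authority . "
    "authority builds trust over time ."
).split()

# "Learn" which words tend to follow which word: a bigram frequency table.
follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Produce text one word at a time, weighted by observed frequency."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation, so stop
        choices = list(candidates.keys())
        weights = list(candidates.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("content"))
```

A real LLM replaces the frequency table with a neural network that scores every word in its vocabulary against the full context, but the output is still assembled the same way: one likely token at a time.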

Why Humanizing the Output From Generative AI Is Necessary

There’s no doubt that generative AI can be incredibly useful. But using it without any human supervision can lead to unintended consequences, none of them good. Here are some reasons humanizing the output from generative AI is necessary:

  1. Lack of creativity: while generative AI can string words together in different combinations, it can’t experience things in the way a human can. By itself, it may produce unoriginal and uninspiring content.
  2. Bias: there’s the potential for bias in what generative AI produces. Since it learns from existing data, any bias contained in that dataset could be reflected in the output. As a result, the generated content may be considered unfair or discriminatory.
  3. Inappropriate content: likewise, a generative AI system may create violent or offensive content if trained on violent or offensive data.
  4. Factual inaccuracy: generative AI has, in effect, been trained to “make stuff up” and can do so very convincingly. Being a machine, it doesn’t know that this is wrong, but those mistakes can lead to serious consequences. The risk is even more critical with your-money-or-your-life (YMYL) topics, or when the generated text contains many statements of fact, numbers, dates, proper nouns, and the like.
  5. Brand image: if content produced by an AI system is low quality, biased, inappropriate, or inaccurate, it can damage a brand’s reputation or worse.

The Role of Generative AI on Your Content Team

At best, generative AI can enable you to become “ignorable at scale,” as Jay Acunzo puts it. At worst, it could cause harm. But introducing humans into the process changes the dynamics.

Here are some considerations when implementing generative AI:

  1. Evaluate the training data, if possible: verify that the model was trained on data carefully curated to avoid bias and inappropriate content. If the software vendor can’t confirm this, assume the worst. It’s not a deal-breaker, but you’ll have to be extra vigilant when editing the output.
  2. Create and use guidelines: all content, whether generated in whole or in part, needs to meet certain standards. You don’t necessarily have to govern how the tool is used, but rather its output. Those guidelines typically cover aspects such as appropriateness, bias, quality, and creativity. You may already have these in place; make sure to apply them, especially to content generated by AI.
  3. Re-evaluate the way you create content: don’t substitute generative AI for human writers. Instead, use it to augment their output. Generative AI may call for more contributing editors, who function as part writer, part editor.
  4. Refine the editing process: use an AI text classifier to check for AI-generated text, much as you would a plagiarism checker (see the sketch after this list). These classifiers aren’t perfect, and they won’t give you a binary answer, but they may alert you to instances where an editor needs to be more vigilant. The old motto of “trust but verify” only applies to humans. When generative AI is involved, it’s just “verify.” Thanks to generative AI, we may see the rise of the subject matter expert editor (SMEE) to fill this gap.
  5. Verify: when checking for appropriateness, look for proper nouns, numbers, dates, declarative statements, and strategic insights that could be debatable or could conflict with your brand voice. Also, avoid generating content that absolutely must be correct if you don’t plan on fact-checking rigorously.
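To illustrate item 4, here’s a minimal sketch of folding an AI-text classifier into the editing queue. It assumes the Hugging Face transformers library and names one publicly available detector model as an example; the model choice, the “Real”/“Fake” labels, and the threshold are assumptions, and any classifier that returns a label with a confidence score slots in the same way.

```python
# A minimal sketch, not a production workflow. Assumes the Hugging Face
# transformers library; the detector model and its "Real"/"Fake" labels
# are one public example -- check the model card of whatever detector
# you actually use.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def flag_for_review(draft: str, threshold: float = 0.8) -> bool:
    """Return True when the detector is fairly confident the text is AI-generated.

    The score is a probability, not a verdict: treat a flag as a cue for
    closer editing and fact-checking, never as proof.
    """
    result = detector(draft[:2000])[0]  # keep input short; detectors cap length
    return result["label"] == "Fake" and result["score"] >= threshold

draft = "Generative AI has changed how teams produce content at scale."
if flag_for_review(draft):
    print("Route this draft to an editor for an extra-vigilant pass.")
```

The threshold is deliberately a tuning knob: set it too low and editors drown in false alarms; set it too high and AI-heavy drafts slip through without the extra scrutiny.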

How to Humanize Generative AI Output

While content production is undergoing disruption, the need for humans remains. No surprise here, but the only way to humanize generative AI output is to include humans in the process. So, if you’re thinking ChatGPT is your content production easy button, look elsewhere.

Here are some ways to make generative AI output more human:

  1. Insert humans into the loop. I’ve said it already, but it bears repeating. Humans need to be part of the decision-making process — both in writing and editing.
  2. Motivate writers to contribute their expertise. Large language models can, at best, mimic competence. But there’s no substitute for the real thing.
  3. Encourage writers to elaborate on points generated by AI, sharing a story that reflects their unique experience.
  4. Make sure that every piece, where possible, establishes a clear point of view that reflects the writer’s experience. Generative AI can fake this to a certain extent, but there’s nothing like the real thing.
  5. Use idioms, phrases whose figurative meaning differs from their literal one. The AI can’t wrap its head around those. See how I used an idiom there?
  6. Use your thoughts and feelings to bring depth to the generated output. Generative AI has no feelings. The best it can do is imitate them.
  7. Offer unique insight by connecting concepts that others may not have considered. Recall how generative AI works: ChatGPT can’t do this because it doesn’t “know” anything.

Takeaways

Generative AI can be remarkably advantageous. But using it without any human supervision invites unintended consequences, and not the happy kind. You can avoid them by carefully considering how to responsibly employ AI in the content production process.

Some content is purely functional, and the expectation for creative flair is low. Take corporate earnings reports, for example, where factual accuracy is the prime requirement.

But when the expectations are higher, pure generative AI is bound to disappoint. The only way to avoid that situation is by keeping humans in the loop.

Use AI Responsibly

AI can positively impact content teams when used conscientiously. If you’re concerned about the caliber of your content, but don’t know where to start, MarketMuse’s AI-powered platform can objectively evaluate your content quality. Try MarketMuse today.


Jeff Coyle

Jeff Coyle is the Co-founder and Chief Strategy Officer of MarketMuse. Jeff is a data-driven search engine marketing executive with more than 21 years of experience in the search industry. He is focused on helping content marketers, search engine marketers, agencies, and e-commerce managers build topical authority, improve content quality, and turn semantic research into actionable insights. His company has won multiple Red Herring North America awards, has been a multiple-time US Search Awards finalist, a Global Search Awards finalist, and an Interactive Marketing Awards shortlist selection, and has earned several user-driven awards on G2, including High Performer, Momentum Leader, and Best Meets Requirements. Prior to starting MarketMuse in 2015, Jeff was a marketing consultant in Atlanta and led the Traffic, Search and Engagement team for seven years at TechTarget, a leader in B2B technology publishing and lead generation. He earned a Bachelor’s in Computer Science from Georgia Institute of Technology. Jeff frequently speaks at content marketing conferences, including ContentTECH, Marketing AI Conference, Content Marketing World, LavaCon, Content Marketing Conference, and more. He has been featured by Search Engine Journal, Marketing AI Institute, State of Digital Publishing, SimilarWeb, Chartbeat, Content Science, Forbes, and more. For additional information, visit MarketMuse.com.