Ethics in AI-Driven Content: Finding the Right Balance
Laura Hunter
Let’s be real: we’ve been questioning the ethics of AI since its earliest stirrings in the 1920s, but it’s only since ChatGPT’s massive debut in late 2022 that we’ve really started digging into it.
We’ve come a long way since Japanese professor Makoto Nishimura built Japan’s first robot, Gakutensoku, or computer scientist Edmund Callis Berkeley published ‘Giant Brains, or Machines That Think’, a groundbreaking book comparing early computers to human brains.
Today, who doesn’t log in to ChatGPT and ask it to write a fun Instagram caption, create a budget, or put together a weekly shopping list? Even Google’s Gemini chatbot makes it a breeze to find answers to our questions without even opening a web page.
Drafting an article or social post has also never been easier for content creators. For brands, it’s quicker and cheaper than hiring a human writer.
But when does this amazing AI-created content become too much? Can AI really replace human creativity, authenticity and fact-checking, all of which make content so thoughtful and engaging?
The issue with AI-generated content and why brands should care
Using AI to create content is not entirely bad, but there are a few things we need to consider – namely bias, accuracy and ownership.
Since AI learns from human-generated data (remember, the more we feed it, the more it learns), it can inherit our unconscious biases, which can become more pronounced as AI systems grow. By perpetuating these biases, AI-generated content can undermine the fairness and accuracy of the information it produces. Unrepresentative datasets compound the problem: where data is missing, the output simply can’t reflect the full picture.
We also can’t forget about the lack of reliable sources. Have you noticed the disclaimer at the bottom of your ChatGPT page? It states, ‘ChatGPT can make mistakes. Check important info.’
AI tools like ChatGPT generate information from a vast number of sources, meaning their factual accuracy is not guaranteed. This is exactly where human writers play a critical role – we’re gatekeepers, tasked with screening the information we consume and then share. We also have the critical thinking skills to assess the validity and relevance of the sources we use when creating content – something most AI tools can’t do.
For brands, accuracy is as important as ever. Misleading or incorrect information can erode trust between brands and readers, compromise the integrity of the content we publish online and chip away at a hard-won brand image.
From a content quality perspective, nothing beats a well-thought-out, helpful article or a relatable Instagram post worthy of being shared with a friend. AI-generated content often lacks the emotional depth human writers bring to their work. It’s the unique perspectives and independent narratives that make content so valuable. The lived experience, the nuances in our storytelling style, the ability to land a joke and the creative originality – all of this goes missing when AI pulls a piece of content together.
In today’s digital landscape, where topic saturation and authenticity are some of the biggest challenges, human-made content has an opportunity to really thrive. Brands invest heavily in content creation, trusting marketing professionals to uphold quality. Plus, the integrity and effectiveness of content directly affect performance, from engagement through to conversion.
Perhaps the most pressing issue about AI-generated content is intellectual property and data privacy – who owns AI-generated content? Can it be copyrighted, and if so, by whom?
Under the Copyright, Designs and Patents Act 1988, AI-generated content can be protected by copyright, but the sources behind AI-generated answers may themselves include copyrighted material. The law allows text and data mining, but only for non-commercial research purposes. What this content is then used for outside of AI systems remains a grey area under copyright law.
Regulatory frameworks like the EU AI Act (the first of its kind, in force from August 2024) are shaping the way developers build AI systems and how businesses can use them. As more regulations emerge, complying with ethical AI standards will be essential to avoid legal and reputational risks.
The Act itself aims to set a regulatory framework that guarantees safety, fundamental rights and human-centric AI across the EU. To protect individuals and their sensitive data from exploitation, prohibited practices include biometric categorisation based on sensitive characteristics and the use of subliminal techniques to manipulate human behaviour, while uses such as AI in recruitment are classed as high-risk and subject to strict obligations, including record-keeping.
How to use AI with integrity
Even with these considerations in mind, there’s still a way to leverage the power of AI in content creation – without sacrificing quality or our integrity.
Our Data Operations Director, Eleni Sarri, weighs in:
“Synergetically merging the human element with AI ensures authenticity: while AI can generate content, we can then give it meaning. Brands that prioritise the human touch over AI, and that are transparent about how AI is being integrated and checked, will ultimately stand out and create a sense of trust.”
Here are some of our tried-and-tested best practices:
- Balance tasks between AI and human creators: A few content planning tasks can be supported with AI, such as structuring a brief, brainstorming topic ideas or researching trending news or hashtags. It then becomes a human writer’s responsibility to edit, conduct wider research and refine the content for accuracy, creativity and emotion.
- Make fact-checking compulsory: Since most AI tools cannot give us a reliable list of sources, the onus is on us to fact-check any quantitative data and to review any sweeping statements or ideas that are AI-generated. By cross-checking claims and correcting misleading information, we maintain credibility and trust.
- Disclose your use of AI: Don’t be shy to explain AI’s role in content creation – it actually works to build trust and uphold ethical best practices. To be completely transparent, I used ChatGPT to create a skeleton structure for this article, but I have a trusted list of sources from which I got my information.
The ethical challenges of AI-generated content are still evolving, and more regulation is likely to emerge. In the meantime, if brands and writers can find the sweet spot between AI’s speedy content generation and the human ability to fact-check and add emotion, they’ll have a winning combination.