Investis Ltd.

05/30/2023 | Press release | Distributed by Public on 05/30/2023 14:21

How Transparency Is Shaping the Conversation about AI-Curated Content

Transparency has quickly emerged as a hot-button topic in the rapidly evolving field of generative AI, where tools such as ChatGPT can produce remarkably human-sounding content, from blog posts to term papers, and even complete drafts of longer works. But when and how should content creators acknowledge that they've used AI? This question has already rocked institutions ranging from publishing to higher education.

Who Wrote This?

The potential impact of ChatGPT on learning is reverberating throughout the halls of higher education. Recently, Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test his students face, consisting of 95 multiple-choice questions and 12 essay questions. Although ChatGPT performed poorly on some questions, it passed the exams after writing essays on topics such as constitutional law and taxation. This is just one instance that has educators asking how they can know whether students are completing coursework without AI doing the work for them.

Meanwhile, the publishing world was shaken by the revelation that a respected news outlet, CNET, had been using AI to write stories without clearly telling its readers. Futurism reported that the articles were published under the byline "CNET Money Staff," covering topics like "What Is Zelle and How Does It Work?" CNET was not transparent about using AI for these explainer-type articles, which outraged readers and staff alike. Since the Futurism article was published, CNET has inserted a visible disclaimer in the CNET Money Staff section: "This article was assisted by an AI engine and reviewed, fact-checked, and edited by our editorial staff."

There is no shortage of examples of ChatGPT fooling people, but here's one more to drive home the point: a recruitment team unknowingly recommended ChatGPT for a job interview after the AI was used to complete an application task.

What Should We Do About Transparency?

These are just a few of the many emerging examples in which the use of AI can create enormous trust issues, for two reasons:

  • Who wrote the content? Not being transparent about the use of AI upends our fundamental assumption that content presented as coming from a human source was in fact created by a human - akin to deepfakes fooling us into thinking video content is something it isn't.
  • How trustworthy is the content? As we blogged recently, ChatGPT is enormously flawed. It's capable of passing off complete fabrications as fact (a phenomenon known as hallucination). And we cannot trust ChatGPT to properly cite its sources - raising issues of plagiarism and copyright violations, as discussed in this Reddit forum.

But generative AI isn't going away. In fact, ChatGPT is just one of many AI assistants available to writers. There are no easy answers, but it's obvious that organizations need to get out in front of this issue and start setting some ground rules.

For instance, Ethan Mollick, an associate professor at Wharton, recently commented on LinkedIn about how he has added an AI policy to his syllabus in order to teach students how to use AI responsibly.

He also published a tutorial to help students and anyone else involved in content creation. Clearly, he understands that AI assistants will change the academic landscape, and that this requires new frameworks for prompt-writing and accountability - how to get valuable outputs from the tools while still acknowledging their limitations.

Businesses need to do the same thing. There are no hard-and-fast rules here, but to protect its reputation, every organization needs to make clear to its writers:

  • When it is and is not OK to use AI assistants: Employees and contractors working for an organization should be given clear guidelines on when it is acceptable to use AI (e.g., for content ideation) and when it is not (e.g., creating entire works with AI and passing them off as human-written).
  • How AI is being used in content creation: Although I do not recommend using AI to write entire articles, blog posts, and website copy, I cannot stop businesses from doing so. But I can expect them to tell me when they do - prominently, at the top of the article (as the chastened CNET is now doing), not in an obscure footnote.

There are inevitably grey areas to deal with here. Should content creators disclose every instance of AI use in their workflow? What if AI was used only for ideation? Or for writing the headlines and subheads of a 3,000-word article that was otherwise written entirely by a human? Here is a general guideline: if AI is doing your work for you, you need to disclose that. If it's merely being used as an assistant? Probably not.

Even better: write great content that is authentically your own. ChatGPT produces human-sounding content, but that doesn't make it good or original. Human beings need to level up their writing and thinking if they want to stand apart from machines.

Contact Investis Digital

At Investis Digital, we create content strategies that connect the dots between your business services and your customers' needs to help drive revenue. We do all this with a finger on the pulse of new technology that's changing the demands on content - such as ChatGPT, omnichannel marketing, headless CMS, and more. Contact us to learn more about the future of content and how we can help you prepare.