Hey there, and welcome back to your seat at the Epic Table!
Today’s article has been a very tough one to write. That’s in part because of where I’m sitting. This is going to feature more of my opinion than other pieces, and I’m speaking a little bit more for myself than I am for Epic Table Games in general. I’m going to try to be really careful to be clear when I’m speaking on my own behalf versus on ETG’s, but please bear with me if I fail to do so.
First, the news: as of July 19th, 2023, OneBookShelf (owner/operator of DriveThruRPG, DM’s Guild, and similar) added the following to their AI-Generated Content Policy:
While we value innovation, starting on July 31st 2023, Roll20 and DriveThru Marketplaces will not accept commercial content primarily written by AI language generators. We acknowledge enforcement challenges, and trust in the goodwill of our partners to offer customers unique works based primarily on human creativity. As with our AI-generated art policy, community content program policies are dictated by the publisher that owns it.
This is, obviously, a huge change. I’ll break down the implications, but first I want to get on the same page about what “AI language generators” actually means. Mostly because a lot of people tend to get it wrong.
AI For An Eye
When we talk about “AI language generators,” what we’re really discussing is the class of AIs called language models. This includes ChatGPT, of course, and most of its clones/competitors, but a wide variety of programs actually fall under this umbrella. As it happens, you probably have at least one in your pocket (or on the desk next to you [or on the device you’re reading this with, the point is it’s on your phone]). Autocomplete is a language model. Predictive text when you’re typing on a phone keyboard is a language model. And believe it or not, both of them basically do the same thing as ChatGPT on a much smaller scale.
All that a language model does is try to predict the next word in a sentence given certain criteria. Autocomplete and predictive text (which is actually just a kind of autocomplete) do this by keeping track of what words you use and what order you use them in. Yes, this does mean your phone is keeping track of everything you’re saying; this is 2023, don’t be shocked. By doing this, it can then suggest the same word when you type in something similar again. This is, for example, how it knows to put “bastards” instead of “beautiful” when you type “all cops are b”. (Or it should.)
You may have heard ChatGPT and others described as “LLMs.” That just means “large language models.” ChatGPT is, essentially, the world’s most advanced autocomplete, which is why, if you use it, you’ll see words show up one at a time. Rather than training just on you, it’s trained on as much data as its creators could possibly get their hands on. I’m also oversimplifying some really important technical differences here, mind, but the basics hold.
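To make the “world’s most advanced autocomplete” idea concrete, the next-word-prediction loop described above can be sketched as a toy bigram model. This is a drastic simplification for illustration only (the function names and training text are invented, and real autocomplete and LLMs are far more sophisticated), but the core trick is the same: count which words follow which, then suggest the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# "Train" on a tiny sample of typing history (made up for this example).
history = "the dragon sleeps on the gold and the dragon guards the gold"
model = train_bigrams(history)

print(predict_next(model, "dragon"))   # suggests a word seen after "dragon"
print(predict_next(model, "unicorn"))  # None: never seen in the history
```

An LLM replaces the simple frequency table with a neural network, conditions on far more than the single previous word, and trains on a colossal corpus instead of one user’s history, but it is still, at bottom, emitting one predicted word at a time.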
And yes, this does mean that anything generated by ChatGPT is essentially the equivalent of those “Let autocomplete finish the sentence!” memes. Just, y’know, on a much more impressive scale.
Learning From Our Mistakes
Now, this is where it starts to get tricky: is LLM-generated content a bad thing?
The easy facet of this is inherent quality. LLM-generated content guesses what words to use based on how everything it’s trained on already uses them. In other words, it’s programmed not to innovate. For some uses this is great news; for a lot of them—I’d even go so far as to say most of them—this just means that you’re gonna get the most average text possible. If you try to use ChatGPT to write an adventure, for example, ChatGPT is going to give you more or less the most ordinary, middle-of-the-road adventure text you could possibly get. This means that while there’s a very real floor to the quality level of the text you get, there’s just as much of a ceiling. No one wants stores filled with flavorless dross, so that’s not great.
And then there’s the ethical question: is AI plagiarism?
The answer—and I am begging you to let me explain this before you execute me—is “no, but the problem is bigger than that.”
Plagiarism is directly copying text from someone else and passing it off as your own. Unless you tell an LLM to directly copy text, it is profoundly unlikely to generate the same text as a given writer—even if instructed to copy that writer’s style. When you tell an AI to copy a given writer’s style, all the program is doing is saying, “What word is this writer most likely to use in these circumstances?” Which… well, if a human were sat down and told to imitate the same writer’s style, that’s exactly what they would ask too. Original text is being generated; no copying is taking place.
The problem, though, is precisely that a human would do the same thing, which makes it damn tricky to regulate short of banning the tools outright. It’d be a ridiculous destruction of liberties to ban humans from trying to imitate other humans. But even so, we still have to find a path forward on regulating these models. We’re getting close to the point where machines are starting to (badly) imitate the exact processes humans use, and that means we have to find options beyond regulating the processes themselves.
OneBookShelf has chosen to thread this particular needle by banning outright any works that are “primarily” LLM-generated. Objectively, the “primarily” qualifier is probably not great, because (as their own rule acknowledges) there’s a lot of grey area in what it actually means. That’s about as much as I can say before I get into my own opinion on the matter, which means we need to have some big, bold text first.
The Following Opinion Is Solely My Own And NOT That Of Epic Table Games In General
I’m going to say that again in case you skipped the header: this next part is entirely me, speaking as me. I am speaking about my own views on this matter and not those of Epic Table Games, Rob, Eli, or anyone else associated with it. That said, this is my message to OneBookShelf:
Ban it, you cowards.
We are beginning to grapple with an entirely new technical frontier. Autocomplete can write books now. We as a society are in no way equipped to deal with the legal and ethical questions that creates. We don’t even have the right words to deal with it. Until we do, we shouldn’t be dumping this stuff into marketplaces wholesale and pushing creative writers out of those spaces. I don’t think it’s a foreign concept to RPG gamers that just because there isn’t a rule saying you can’t do something doesn’t mean you’re allowed to do whatever you want.
LLMs are a fascinating and powerful tool with unbelievable potential. It is absolutely not the correct move to dump them into an unregulated space and say, “Do whatever you want!”
So I applaud OneBookShelf for having the guts to make this move. I just wish they’d gone further and banned AI content altogether.
(Note: this next part is still my own opinion.)
Um, Tucker… Doesn’t Epic Table Games Use AI-Generated Content?
*Long, weary sigh*
Yeah. Yeah, we do.
Probably the hardest part of writing this post was figuring out how to address this. Actually, scratch that: the hardest part of this post was looking at my own feelings about AI and recognizing that I’ve been being a hypocrite about it for a while now. As far as I know, all the art used as cover pieces for my posts here has been AI-generated. (And yes, I knew that before the first post.) A lot of other stuff—other art on the website and in products, text in Legend of the Pharaoh King—is AI-generated too.
This is because Epic Table Games is a collaborative project, and other people involved have different points of view than I do. Hell, that’s part of why they brought me on. This article is me expressing my feelings on the matter with the tools I have.
In any event, when it comes to OneBookShelf the decision’s already been made. I hope that it starts a trend in the industry, and I hope other people are listening.
The contents of this post are © 2024 H. Tucker Cobey. All rights reserved.