Hasbro CEO Chris Cocks Is Talking About AI in D&D Again


Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again brought up the potential use of AI in D&D and other Hasbro brands. Cocks described himself as an "AI bull" and floated a subscription service that uses AI to enrich D&D campaigns as one way to integrate the technology. The full section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement, posted back in 2023, that said D&D is made by humans for humans. The D&D team's full official stance on AI can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 


Christian Hoffer

I think it depends on what you are going for.

So I used ChatGPT in my adventure on Monday. I was throwing a big gala, so I prompted ChatGPT for the following:

1) Some magic items that would be good for the group to start the adventure
2) Various party activities
3) 20 different guests with random backgrounds
4) Two of the guests were the special focus for the PCs, so I had it craft a larger backstory for each, along with 20 quirks apiece that I could draw from, plus full portraits.


The magic items were hit and miss; this is where it's the most "crunchy," and I agree that knowledge of the system is useful here. If you just give out magic items to players based on whatever ChatGPT tells you...it might get a bit wonky. Same with monster stats; I would use those with a degree of caution.

For the rest of the list....rock solid. I didn't have to do any real crafting - basic prompts, easy peasy. The results were solid, and I just cherry-picked from the list the things I thought would be best for this event, but nothing in the list was "bad".

Now imagine an "NPC" creator powered by AI. I can specify some things I am looking for in a prompt, or just tell it to go totally random. The code in the back polishes up the prompt, makes sure I get a good portrait, and then generates all the background info. And I could set how many NPCs I need so it crafts them all at once. All of that is very very doable with the current technology (already seen it in other industries), and would basically be a click of a button for a person to use.
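As a rough illustration, the "code in the back" could be little more than a prompt builder plus a model call. Here is a minimal sketch with the model call stubbed out; the names (`build_npc_prompt`, `call_model`) and the table contents are hypothetical, not any real product's API:

```python
import json
import random

# Hypothetical seed tables; a real tool would have far richer ones.
RACES = ["human", "dwarf", "elf", "halfling", "tiefling"]
ROLES = ["merchant", "noble", "spy", "bard", "retired soldier"]

def build_npc_prompt(count, theme="a grand gala"):
    """Polish a bare request into a structured prompt, as the
    'code in the back' might do before calling a model."""
    specs = [
        {"race": random.choice(RACES), "role": random.choice(ROLES)}
        for _ in range(count)
    ]
    lines = [
        f"Generate {count} D&D NPCs attending {theme}.",
        "For each, give a name, a two-sentence backstory, and three quirks.",
        "Use these seeds:",
    ]
    lines += [json.dumps(s) for s in specs]
    return "\n".join(lines)

def call_model(prompt):
    # Stub: a real tool would send `prompt` to a text/image model here.
    return f"[model response for {len(prompt)} chars of prompt]"

prompt = build_npc_prompt(3)
print(call_model(prompt))
```

The point is that the user-facing "click of a button" can hide all of the prompt engineering in a layer like this.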
More than 3/4 of what I asked of ChatGPT last night was NPC generation. It still required significant adaptation because it isn't possible for ChatGPT to understand the characteristics of my campaign. I can give it only so much information, and it will almost always run at least a little astray because my game isn't 1:1 identical with the dull boilerplate Standardized Fantasy Setting, One Each (specifically, pseudo-medieval faux-European schizotech Tolkienesque fantasy). It can't account for the religious doctrines of the religion that all these NPCs follow (or, rather, their guises; some of them were enemy agents in shapeshift disguise), for example. It can't account for historical context.

Generating a name and a personality is a thirty-second affair for me. I don't need its help for that. I need hooks and ideas and motives and purposes, things worth sinking one's teeth into. The tool you refer to would be no more nor less useful to me than ChatGPT currently is, because it would need to be organically woven into the campaign, and that's just not possible for a generative AI to do unless it has access to copious information I wouldn't even want it to know in the first place.
 


Factually incorrect. AlphaFold is not a generative AI; it is a narrow-focus deep learning system that was specifically built for brute-forcing complex data. It is essentially using computers for what they are good at: solving complex logic puzzles involving large volumes of data. It doesn't understand, or pretend to understand, what a protein is.

Emphasis mine.

This is the part that people who haven't studied what's going on under the hood don't get. Techbros and their advocates will use clever tricks and outright lies to convince people that the machine -- LLMs, in specific -- really do understand what they're doing. Or do you really think that the thing needs to take actual human-noticeable amounts of time to literally type in response to your prompt?

We have a natural tendency to anthropomorphize things, and the more a thing displays behaviors similar to us, the more likely we are to do it. Hence the "just engineer your prompts like you're talking to a 12-year-old" thing that has literally been said to me.

All the protein folder is doing is applying math and logic to a problem, the same way a human would do it, just a lot faster, without getting bored, without getting tired, and probably doing it several times in parallel.

Generative AI doesn't do that, at all. It takes a prompt, breaks it down algorithmically, and then creates a collage based on training data. It doesn't allow for fine tweaking, it doesn't hold up to scrutiny by experts, and the only problem it solves is guessing what a human might produce in response. This is, in large part, because human communication by language and art does not fit into neat rules like protein structures do.

This is (largely) true for transformer models built to generate text. Something like a GAN built to make images of puppies, or a neural style transfer network made to modify images to look like they were painted by Picasso, are in a different class.
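For readers who want the "collage" intuition made concrete, a toy Markov-chain generator literally can only recombine fragments it has seen. Real transformer models are vastly more sophisticated, but this crude sketch shows the recombination principle:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """'Collage' new text purely by resampling observed transitions;
    no word can appear that wasn't in the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = table.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

table = train_bigrams("the dragon sleeps the dragon wakes the knight flees")
print(generate(table, "the", 5))
```

Every output is a rearrangement of the training text, which is the (much oversimplified) sense in which generative output is a collage.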

The outputs are entirely illusory, which is why "hallucination" has become such a problem.

Hallucinations are actually potentially useful -- in the context of D&D, you do want a system with a controlled level of hallucination, because hallucinations are a potential baseline for some kind of creativity/originality. We aren't at a place where we can control them yet, obviously, so they remain undesirable, but we need to be cautious and not throw the baby out with the bathwater.
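One knob for dialing randomness up or down that does already exist is sampling temperature. A minimal sketch of the idea (not any particular product's implementation): low temperature makes the pick near-deterministic, high temperature flattens the distribution and invites stranger choices.

```python
import math
import random

def sample(weights_by_token, temperature, seed=0):
    """Sample one token from unnormalized weights.
    Low temperature -> near-deterministic (the heaviest token wins);
    high temperature -> flatter distribution, more surprising picks."""
    rng = random.Random(seed)
    tokens = list(weights_by_token)
    # Rescale log-weights by temperature, then softmax into probabilities.
    logits = [math.log(weights_by_token[t]) / temperature for t in tokens]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for t, p in zip(tokens, probs):
        acc += p
        if r <= acc:
            return t
    return tokens[-1]

weights = {"tavern": 8.0, "dungeon": 1.5, "moon-whale": 0.5}
print(sample(weights, temperature=0.1))
```

A hypothetical "controlled hallucination" dial for a DM tool would amount to exposing something like this temperature parameter.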

The only way we get the AI that has the user interface of ChatGPT and the potential of programs like AlphaFold is if we get GAI - which, again, nobody is really working on because we haven't got a foundation to build a framework on.

I assume by "GAI" you mean AGI (don't worry, the term will change again in a decade or so) -- this actually is being worked on, not from an LLM framework, but it is happening. You haven't heard of them, and neither had I until a few months back. They're obviously not there yet, but they claim that they have an approach that will see it done eventually. Whether or not that's the case remains to be seen.

It's really really sensitive to its prompts, so you need to know what information is relevant and truly important, and what information is just filling up ChatGPT's memory with useless fluff.

Yes and no. It's not just a question of the prompt itself, but also what the thing has been trained on.

The time horizon (or, more accurately, data horizon) means that there's only so far you can work with it before it starts losing the thread, so it can't be used as a long-term aid, which is what a green neophyte DM would benefit most from.

The phrase you're looking for is "context window."
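For the curious: the context window is simply a hard cap on how much recent text the model can attend to, and everything older silently falls off. A toy word-level illustration (real systems count tokens, not words, and the notes below are invented examples):

```python
def fit_to_context(history, window):
    """Keep only the most recent `window` words; older campaign
    notes silently drop off, which is why long threads 'lose the plot'."""
    words = " ".join(history).split()
    return " ".join(words[-window:])

notes = [
    "Session 1: the party met the baron and learned of the curse.",
    "Session 2: the baron was revealed as a doppelganger.",
    "Session 3: the party stormed the keep.",
]
print(fit_to_context(notes, 8))
```

Once the campaign log outgrows the window, the earliest sessions are simply gone from the model's view, no matter how important they were.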

I'm really trying not to be that man-shakes-fist-at-kids Luddite type here.

The problems we're seeing in the wild today, the kinds of conversations we're having about LLMs in particular, tell me that these things should never have been in the hands of the techbros. The technology isn't the problem: it's the clowns at the wheel, pushing this garbage all in the name of profit, and giving access to people who should never have been anywhere near the thing.
 

And not every book/YT gives good advice.
This is precisely why you avoid AI.

If you go to a forum/chat etc. and someone recommends a resource (like I did), you can keep track of who recommended it and what they recommended, do some due diligence, and get a feel for whether they synch with your auras, are not too advanced or too simple for you, etc. You can get opinions from other people (especially since Web 2.0 has comments on everything), and you can learn things along the way.

If you ask an AI, you get advice from unknown sources, presented as though it comes from a reliable authority, and there is no way to distinguish whether the thing it's telling you came from a series of treatises by grandmasters, a collection of works by people who are great at the way they play but not the way you play, something it scoured off a troll website, some bizarre screed written by someone who hates the game, or some stuff it just made up via hallucination.

There's also no guarantee of consistent results - it may give you good advice the first 25 times you lodge a query, and then give you terrible advice the 26th time. If you're trusting it to be correct, and you don't have the foundation to pick and choose its advice - well you've just been set up.

Learning is a journey; everyone has to start somewhere and progress. People who are doing their own research, finding mentor figures they gel with (even if it's a YouTuber), and looking at various other sources are progressing themselves and building a foundation to expand from. People relying on AI are just getting instructions with no explanation, and no humanity.

Plus, if you look at what people say they're using AI for and getting positive results from, you'll find:

1. They are confident in their foundational ability, so they're using AI as a substitute for a brief search (eg party activities)
2. They use this confidence to discard material that is bad, or won't work with their group

(Literally one of the funniest things about the DeepSeek experiment I did was that it always included a section telling the DM what strategies the party should use in the combat section that every proposed encounter had, and since I specified the party would have a Warlock and a Fighter, it repeatedly stressed that the Warlock could use Eldritch Blast and the Fighter could use Action Surge. Every. Single. Time.)

Essentially, you can only use it effectively once you're experienced and confident enough to mark its work - it is not suitable as a substitute teacher/mentor/etc. (In my opinion it is also staggeringly mediocre as an idea generator; they're just random tables with extra tech, just as limited and eventually just as predictable.)

The other advantage of learning the basics from your fellow human beings is that they tend to naturally expand what they talk about - they'll mention what note-taking software they use, favourite sources of inspiration, etc. They will tell you stories that inspire you and give you new questions to investigate on your own out of pure curiosity.

Hallucinations are actually potentially useful
I can see they have the capacity to be useful in the circumstances you described - but currently, as they're uncontrollable and unpredictable, they effectively represent infinite risk from a business/legal perspective, particularly in an era where people do actual play streams and something incorrect or defamatory gets published to the wider web. (And I fully agree, this is the tech bros' fault for pitching this as a universally useful and safe technology.)

It's wildly irresponsible to society to offer teenagers access to software that might run a D&D campaign which quickly turns into a morality play about why some terrible thing (misogyny, racism, transphobia, etc.) is natural and good - and it's wildly irresponsible for your brand to do it in a way where it can be broadcast to thousands or millions of people in real time.

I assume by "GAI" you mean AGI (don't worry, the term will change again in a decade or so)
This is largely due to my brain "hallucinating" and always thinking AGI is several other things, so I default to GAI. This is probably because I don't have enough background to understand the cutting-edge research, but enough to understand that tech bros who insist LLMs are the frontier of it are incredibly delusional.
 

A calculator is far less useful when you don't know math, and you can't easily spot a bad calculation without that skill. The younger you are the better you are positioned to learn, and the longer you put it off by relying on bad tools the harder it will be to develop the skills that make these experiences truly exquisite.
 

I dunno. I don't honestly have strong feelings about things either way.

I recently gave Ironsworn a try - an Iron Age fantasy Vikings RPG. I needed a bit of inspiration for our first adventure, just something to get the ball rolling, because I didn't really have any ideas. So I fed it a couple of ideas, and it spat back this sort of 13th Warrior-style scenario where a raider had been turned into a werewolf (later revealed to have been cursed by the elves, but that's a whole other thing), and now there were two groups of raiders: one following the werewolf, preying on both the PCs' home and the second group, and the second group itself.

Was it perfect? Nope. But, like I said, it got the ball rolling. So, it did wind up turning into a pretty fun campaign.

I see this sort of thing like any other tool - used with care and it can help.
 

For someone getting started, I would definitely recommend asking an LLM for tips. Generic advice is what they excel at, so you are much more likely to get something approaching the consensus than you are by combing the web with zero context, hoping to get lucky.

My main beef with LLMs is that their responses are predictable, basic, and generic, but in this instance, that's ideal.

Telling someone to read a bunch of books or watch a bunch of videos while keeping track of the strengths and weaknesses of various YouTubers is not super helpful for someone who just wants to get started.
 

For someone getting started, I would definitely recommend asking an LLM for tips. Generic advice is what they excel at, so you are much more likely to get something approaching the consensus than you are by combing the web with zero context, hoping to get lucky.
They really don't. See how their best idea for a Warlock is "You can use Eldritch Blast." All these scenarios in which they are "good for starting out" hinge on the assumption that they do things they cannot reliably be said to do, and on the idea that it is somehow a hardship to find starting advice from human sources.

And the things that they do right? That's because it's literally the most common information in their training data - in other words, if you look at any publicly available resource, it's statistically likely it's going to make the same recommendation. I asked Google Gemini, and after writing up three pages of unhelpful, super generic, and very dry content, it recommended... that I buy the D&D handbook and/or check out an online store for shiny math rocks and/or D&D Beyond.

It also sucks spectacularly for any sort of specific advice - it doesn't know anything, so it either gives you a singular answer or spams you with a giant essay of extremely generic nonsense.

At the end of a three-page slog of dry ideas with implementation guides - mostly just the generic advice found everywhere, plus samples of the contents of two websites (thus stealing their content without giving them traffic) - it has this revolutionary idea: maybe... talk to your players.

Again, I'm struggling to see how this is a benefit compared to just running the same query as an ordinary search.

And of course the same search on Dungeon Dudes comes up with a ton of results, because pretty much every experienced DM/GM wants to talk about the things they learned and their tips for common problems. These are also objectively better, because the creators have insights and ideas and may direct you to other things that you didn't think of.

This idea that it's too hard to find advice for newbies is entirely invented to create a problem for LLMs to solve - and that's bizarre, because it is literally one of the worst possible uses for them.

It's the exact same energy as when crypto bros insist that blockchain could revolutionize logistics and then refuse to comment further - especially about how no logistics company has ever demonstrably benefited and no major logistics company sees any potential benefit worth exploring.

Also, the LLMs are pretty terrible because they come in two varieties:

1. Ones which will simply agree with you - you tell it that water is not wet and it goes along with whatever you want, leading to a lot of false positives with questions like "is my player toxic?"; and
2. Ones that get super preachy at you but still just agree with you - if you ask Gemini how to kill off a PC, it will tell you it can't do that, unless you say to pretend it's an NPC, at which point it gives a bunch of (bad) suggestions anyway.

Effectively, it's either completely unlocked and gives possibly illegal advice, or it is so locked down that it comes across as a preachy teacher wanting to lecture you on the virtues of being better.

Like, literally the only appeal is the fantasy that it will somehow eliminate the need for the community to accept that newbies need to learn things - and will have weird ideas and weird questions - and it's not even good at answering those. And that's really weird given that a lot of their questions can now be answered with a "check this out... [link]".
 


I'm more than happy to use AI in my games to knock up some flavour text, but if it's a product I'm buying, I want it to be made by a person.

Not least so I know that some of the money - not none of it - goes to a creator rather than the shareholders.
 


LLMs may be the mother of all random tables when it comes to helping come up with ideas, but we use random tables for a reason: they can help get the thought processes going. If what they come up with doesn't make sense, we just try another one.
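For comparison, the low-tech ancestor works like this: roll once on each of a few small tables and stitch the results together. A minimal sketch (the tables here are invented for illustration):

```python
import random

# Classic composable random tables -- the low-tech ancestor of the
# "mother of all random tables".
TABLES = {
    "who": ["a disgraced knight", "a lich's apprentice", "a talking raven"],
    "wants": ["to reclaim a stolen heirloom", "to break a curse",
              "to start a guild war"],
    "but": ["is being hunted", "cannot tell the truth",
            "owes a debt to a dragon"],
}

def roll_hook(seed=None):
    """Roll once on each table and stitch a plot hook together."""
    rng = random.Random(seed)
    pick = {k: rng.choice(v) for k, v in TABLES.items()}
    return f"{pick['who'].capitalize()} wants {pick['wants']}, but {pick['but']}."

print(roll_hook(seed=42))
```

Just like with an LLM's output, a roll that doesn't make sense for your table simply gets rerolled.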
 
