Hasbro CEO Chris Cocks Is Talking About AI in D&D Again

Chris Cocks, the CEO of Hasbro, is talking about the use of AI in Dungeons & Dragons again. In a recent interview with Semafor, Cocks once again raised the potential use of AI in D&D and other Hasbro brands. He described himself as an "AI bull" and floated a subscription service that would use AI to enrich D&D campaigns as one way to integrate the technology. The full section of Semafor's interview is below:

Smartphone screens are not the toy industry’s only technology challenge. Cocks uses artificial intelligence tools to generate storylines, art, and voices for his D&D characters and hails AI as “a great leveler for user-generated content.”

Current AI platforms are failing to reward creators for their work, “but I think that’s solvable,” he says, describing himself as “an AI bull” who believes the technology will extend the reach of Hasbro’s brands. That could include subscription services letting other Dungeon Masters enrich their D&D campaigns, or offerings to let parents customize Peppa Pig animations. “It’s supercharging fandom,” he says, “and I think that’s just net good for the brand.”


The D&D design team and others involved with D&D at Wizards of the Coast have repeatedly stood by a statement posted back in 2023 saying that D&D is made by humans, for humans. The D&D team's full official stance on AI can be found below.

For 50 years, D&D has been built on the innovation, ingenuity, and hard work of talented people who sculpt a beautiful, creative game. That isn't changing. Our internal guidelines remain the same with regards to artificial intelligence tools: We require artists, writers, and creatives contributing to the D&D TTRPG to refrain from using AI generative tools to create final D&D products. We work with some of the most talented artists and creatives in the world, and we believe those people are what makes D&D great.
 


Christian Hoffer

Only tangentially related, but kind of amusing:
TL;DR: Some users of a particular code-editing AI (i.e., one for writing software) are reporting that it occasionally refuses to write the code and instead instructs them to learn how to do the work themselves.

"I'm sorry, Dave, I can't do that. If I always create the statblocks for you, you will never learn D&D mechanics for yourself."

 


A new player can type their exact question into one and get a specific response. They don't have to parse articles and videos to try to find one that addresses their specific concern.
This speaks to another gap we have, and the only two solutions I know of are Reddit and StackExchange, and who knows how representative those are.

A better way than asking an LLM is to ask people. We do it here all the time. D&D's discord server is actually not a horrible place, except it's so time-based that you don't know who's going to answer you. That's why the RPG stackexchange isn't so bad.

The issue with asking an LLM is that it can often be wrong. People can be wrong too, but you're more likely to get several GMs hashing it out and figuring out an answer than the LLM can translate.
 

I can see [hallucinations] have the capacity to be useful in the circumstances you described - but as they're currently uncontrollable and unpredictable, they effectively represent infinite risk from a business/legal perspective - particularly in an era where people do actual-play streams and something incorrect or defamatory gets published to the wider web, etc. (And I fully agree, this is the tech bros' fault for pitching this as a universally useful and safe technology.)

I get the whole "risk" angle here, but personally I don't care -- I'm already of the opinion this stuff shouldn't be publicly accessible, and the risk thing feels downstream of that.

My primary interest is in AGI and figuring out how to move us forward towards that. If there are business applications for it, neat, I suppose, and the legal issues that come from current AI are primarily related to data collection practices, which admittedly are largely fueled by AI but are themselves a separate problem.

This is largely due to my brain "hallucinating" and always thinking AGI stands for several other things, so I default to GAI. This is probably because I don't have enough background to understand the cutting-edge research, but enough to understand that tech bros who insist LLMs are the frontier of it are incredibly delusional.

Fair enough, I don't have any TLA competing in my brainspace for it so I go with AGI.

Don't get me wrong -- multiheaded attention has turned out to be kind of bonkers. That one paper a few years back, where some kids at MIT imitated a tripartite synapse with transformer architecture and it improved the model, is insanely important (the tripartite synapse is a very weirdly-behaving brain structure at the cellular level, where a particular cell does something its class of cell shouldn't be doing and that isn't accounted for in the neural calculus that AI is based on).
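(For anyone wondering what "multiheaded attention" actually refers to, here is a minimal NumPy sketch of scaled dot-product multi-head self-attention, the core transformer mechanism mentioned above. The head count, dimensions, and random weight matrices are purely illustrative stand-ins for learned parameters, not anyone's actual model.)

```python
import numpy as np

def multi_head_attention(x, num_heads=4, seed=0):
    """Toy multi-head self-attention. x: (seq_len, d_model) -> (seq_len, d_model)."""
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for the learned Q/K/V/output weight matrices.
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))

    def split_heads(m):
        # (seq, d_model) -> (heads, seq, d_head): each head sees a slice of the features.
        return m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the keys
    heads = weights @ v                                    # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)  # re-join the heads
    return concat @ w_o

tokens = np.random.default_rng(1).standard_normal((8, 32))  # 8 "tokens", d_model = 32
print(multi_head_attention(tokens).shape)                    # -> (8, 32)
```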

However, much like CNNs, this is a dead-end technology. I'm sure it will have a role to play, but the amount of data needed and energy expended to achieve lackluster results (i.e., something that can't pass a hypothetical Voight-Kampff)? No, this isn't a winner. It's never going to be a winner. There's no mechanism in these models for self-awareness or intentionality to bootstrap themselves onto, and even if there were, there's no continuity of identity and no causal reasoning.

If you don't want to use technology, fine. I could still vacuum my house, do dishes by hand, throw out my Kindle, write physical letters and drop them in the mailbox. I also acknowledge that technology changes and sometimes it can be helpful. 🤷‍♂️

I understand why you say this, but I want to point out that this time really is different.

This isn't just about elfgames. Kids are using these things to write their papers, people are using them to "help" them do their jobs -- there is every indication people are slowly becoming reliant on these things, and that's a dangerous place to be, especially when they're in their current state.

There are gates you do not open. There are seals you do not breach. The techbros were fueled by greed when they released these things into the wild, and while I'm not going to go all doom and gloom about it, the world is almost assuredly a worse place for it.

Glue pizzas.
 

This speaks to another gap we have, and the only two solutions I know of are Reddit and StackExchange, and who knows how representative those are.

A better way than asking an LLM is to ask people. We do it here all the time. D&D's discord server is actually not a horrible place, except it's so time-based that you don't know who's going to answer you. That's why the RPG stackexchange isn't so bad.

The issue with asking an LLM is that it can often be wrong.
That's the issue with asking people, as well. On questions such as the one I posted above, LLMs are generally pretty good in that you will get a response that reflects the general consensus, rather than one person's opinion.
People can be wrong too, but you're more likely to get several GMs hashing it out and figuring out an answer than the LLM can translate.
I don't disagree, but I also don't know if this is particularly useful to a new player much of the time. LLMs are right there and the kids know how to use them.

Edit: I don't think it's terribly useful to debate whether this is a "gate that should never have been opened" or to predict how successful LLMs will be in the long run. The gate is open, and they are having a significant impact right now. I'm focused on figuring out how best to use them and how best to control them; we had a staff meeting on that issue two days ago.
 

This speaks to another gap we have, and the only two solutions I know of are Reddit and StackExchange, and who knows how representative those are.

A better way than asking an LLM is to ask people. We do it here all the time. D&D's discord server is actually not a horrible place, except it's so time-based that you don't know who's going to answer you. That's why the RPG stackexchange isn't so bad.

The issue with asking an LLM is that it can often be wrong. People can be wrong too, but you're more likely to get several GMs hashing it out and figuring out an answer than the LLM can translate.
Ask a rules question here on EN World and you'll get people confidently writing answers that are 100% wrong, followed by a 50-page debate where nobody changes their mind and that eventually veers off-topic into an argument about how WotC is destroying D&D.

Dealing with ChatGPT and potential hallucinations seems simple in comparison, and at least the AI is always polite.
 

...
There are gates you do not open. There are seals you do not breach. The techbros were fueled by greed when they released these things into the wild, and while I'm not going to go all doom and gloom about it, the world is almost assuredly a worse place for it.

...

The same has been said about almost every new technology ever invented. The agricultural revolution of long ago, and civilization as we know it, were bad for people in many ways, but I don't idealize the hunter-gatherer societies of our ancestors and I wouldn't want to go back to that. Very few, if any, revolutionary technologies are universally good or bad; there will always be benefits and drawbacks.

So sorry if I don't buy into the doom and gloom. AI in its various forms is making incredible advances that are of tremendous value in some cases, and in others it amounts to nothing more than an unwelcome and unnecessary increase in the use of electricity. Just like with every other advancement ever.
 

I'm focused on figuring out how best to use them and how best to control them; we had a staff meeting on that issue two days ago.

To me the obvious answer is "you don't."

But then, I'm not (particularly) interested in business applications and whatnot. That's not why I got into this field. If you want to keep figuring out how to use a thing that will instruct you to put glue in your pizzas -- that's certainly a choice you can make, I suppose.

AI in its various forms is making incredible advances that are of tremendous value in some cases, and in others it amounts to nothing more than an unwelcome and unnecessary increase in the use of electricity. Just like with every other advancement ever.

Hmm, yes, I guess we should just absolve the techbros of their tricks and lies selling snake oil, and celebrate "progress."

There's nothing wrong with people who know what they're doing using LLMs. But they shouldn't be as accessible as they are. That's the issue here, not the technology itself -- if the techbros weren't the clowns they are, we wouldn't even be having this conversation.
 

To me the obvious answer is "you don't."

But then, I'm not (particularly) interested in business applications and whatnot. That's not why I got into this field. If you want to keep figuring out how to use a thing that will instruct you to put glue in your pizzas -- that's certainly a choice you can make, I suppose.



Hmm, yes, I guess we should just absolve the techbros of their tricks and lies selling snake oil, and celebrate "progress."

There's nothing wrong with people who know what they're doing using LLMs. But they shouldn't be as accessible as they are. That's the issue here, not the technology itself -- if the techbros weren't the clowns they are, we wouldn't even be having this conversation.

If it's all snake oil then it will burn out like NFTs and it's not a big deal. The good news is, if you don't find it useful, you don't have to use it. Meanwhile it's useful for a lot of people.

Yes, sometimes people will ask for suggestions or generate a background. They'll chuckle, roll their eyes, maybe switch up the prompts. No one is going to take everything verbatim if it doesn't work for them. AI is not ready, and may never be ready in the foreseeable future, to replace a human behind the DM screen. That doesn't mean it can't be useful for some people. Glue-pizza straw men don't change the fact that the technology has far surpassed what we thought was possible even a few years ago.

If we do someday get a DM assistant and it makes some people more confident, more willing to be DMs, if it makes them better DMs? I don't see how that's a problem.
 

Here's what ChatGPT gave me:
This is utter garbage and of no use to anyone. It is, in fact, you asking people to parse through hours of content trying to find something vaguely useful when there are better alternatives available that have less risk, use fewer resources, and are less actively harmful to them. Nobody wants to get the world's vaguest summary from a machine that is just stealing from people, formatted in the most boring way possible.

Even AI knows that, which is why Gemini links to pages that do a much better job.

Pages of mediocre text riddled with mistakes and misinformation are of no use to anyone in a world where there are infinite options for better information.

If they have the opportunity to have a person with 20 years' experience give them a person-to-person tutorial, then I agree that that is clearly the best possible option. Directing someone to a YouTube channel is not the same thing.
I'm glad we agree that some sources are better than others; now I recommend you consider why you keep advocating that newbies go to the worst possible source.

Because, again - statistically you're likely to find the best parts of whatever an LLM tells you in pretty much any random source on the topic, because it's going to present you a poorly drafted chimera of them. The worst content, though, will be terrible advice that is written to seem like it comes from professionals.

The average quality goes down, and the risk of terrible nonsense goes up.

The only reason people want to use it is because it's "fun" to see it pretend to be a person.

It does not, however, invalidate the point I was making: that one thing LLMs are good at is distilling a lot of information and coming up with simple, generic responses that reflect the general consensus, which is ideal for players who don't know where to start.
They don't do that at all. They do the opposite of that. LLMs excessively pad their responses and provide contradictory information because they aren't actually capable of processing what a question is, just finding a bunch of stuff that appears around questions on the Internet - thus they can't edit it down to simple, generic responses.

You know who can? Literally every search engine on the Internet.

Edit: Example (I asked it a typical newbie question: whether natural 1s always fail):
And it wrote a half-page essay instead of "No. As per D20 Checks, the natural 1 rule only applies to attack rolls, to reflect the uncertainty of combat. Skill checks and saving throws are meant to reflect the character's personal ability." That would be a simple, generic answer that reflects the reality (not the consensus).

Instead it mixes and matches the information, creating a scenario where a failed skill check can make you drop your weapon (clearly an error from it confusing the house rule of a 1 on a skill check with the house rule on critical fumbles in combat), and advocates that making players have this happen 5% of the time makes things "more dramatic" (thus more desirable to the newbie) rather than warning that it can create frustration, with players feeling their characters are incompetent and shouldn't try to do anything.

This means that if you have a newbie DM who takes this at face value, it will take more effort to untangle them from the nonsense that ChatGPT has presented - but more likely they're going to waste their time reading that, re-reading that, concluding (correctly) that it's nonsense, and then either make up their own house rule or go get actual information. In other words, the best outcome is you waste their time - the worst outcome is you drive them out of the hobby.

I'm a language teacher. I have serious concerns about the way that LLMs are already impacting students finding their own, unique voice.
I'm curious - when your students are presented with research information, real-life examples substantiating the research, etc., do you expect them to pay attention to that, or do you just ask them to check what ChatGPT says? Because I've linked people to numerous studies, reports, etc., and one thing I've noticed is the pro-AI crowd just happily make claims that are pre-emptively debunked and then triple down on them as they're proven wrong over and over.

What do you do when your students do that?

Also, if you worry about people finding their own voice, then you should absolutely not direct them to the garbage plagiarism machine that flattens everything, and thus contribute to them having no stimulating or inspiring material.

Also, how do you feel about students (and programmers, as I linked to) losing basic skills like checking source materials and working out how to summarize and study information?

LLMs are generally pretty good in that you will get a response that reflects the general consensus, rather than one person's opinion.
You're saying the general consensus is to use glue in pizza recipes, to recommend eating a set amount of rocks a day, to call a peregrine falcon the fastest marine animal, and to play chess like this. I cannot explain it to you any more clearly: you do not have the first clue what LLMs do or do not do; you are attributing to them abilities they do not have and ignoring the actual problems they have, even when those problems are clearly there in the responses they give you.

The same has been said about almost every new technology ever invented.
Factually incorrect. Thousands of things are invented and completely ignored; the US issues hundreds of thousands of patents every year, and pretty much nobody notices them, because they're things that make people's lives better or don't impact them in any way.

Generative AI has created legal liability problems, cost fortunes in waste, made searching for actual information harder, resulted in economic upset as people lose their jobs because their employers fired actual experts and relied on spicy autocorrect, been the largest case of intellectual property theft in history, and is accelerating climate change due to its energy and water requirements. All with no use case for the general public.

The agricultural revolution of long ago, and civilization as we know it, were bad for people in many ways, but I don't idealize the hunter-gatherer societies of our ancestors and I wouldn't want to go back to that.
Citation needed.

Very few, if any, revolutionary technologies are universally good or bad; there will always be benefits and drawbacks.
This is confirmation bias - very few of the technologies you see today are universally good or bad, because the bad ones get taken away and you don't have to deal with them. Also, how they're presented to the public is generally a major part of the poison.

Wikipedia has over 200 entries listing withdrawn drugs, including calomel (which was presented as a panacea for about 1,000 years despite being mercury and hence giving you heavy metal poisoning, ulcers, etc.) and thalidomide (which is infamous for causing thousands of birth deformities, but is still used when the risk assessment warrants it). And, well.

[attached image]

(For the uninitiated, Teslas have unique problems among EVs due to a combination of their tendency to catch fire much more often than others, their lack of safety features built around that, and the fact that Tesla won't share proprietary information with firefighters, so they are uniquely hard to extinguish. EV tech is fine; this implementation - not so much.)

A large part of how many of these harmful things (be they drugs, recreational radiation, FTX, whatever) cause so much harm is exactly this combination of malicious optimism and malicious indifference. AI advocates present a page of gibberish and claim it's brilliant; then, when it's pointed out that it's garbage, they claim it could be brilliant and (more importantly) it will be brilliant. An obvious snake-oil pitch, which is vividly obvious now as it's essentially the same argument you see for Decentraland (which is crashing and burning hard right now).

And that's assuming they even have a real product; fake products do immense harm.

[attached image]


This is why you should actually take the time to properly assess something before rushing in to cheer it on, not invent weird stories that confuse Caveman Science Fiction (a truly great comic) with real history, or pretend that nobody's hurt by things that are demonstrably doing massive damage.
 

LLMs aren't so much snake oil as they are real but limited (and unethically sourced and horribly wasteful) products with legitimate applications, being sold by snake oil salespeople and deluded technocultists who see more humanity in Siri than in their neighbors.

Technology is frequently harmful without sufficient education and precautions, and sometimes ease just leads to atrophy. We're a species that's used known toxins as toys and decorations - Wisdom is our dump stat.

Individuals choosing to use GenAI for their games is whatever, but a sufficient scale of adoption is going to increase stagnation.
 
