Here's what ChatGPT gave me:
This is utter garbage and of no use to anyone. It is, in fact, you asking people to wade through hours of content trying to find something vaguely useful when there are better alternatives available that have less risk, use fewer resources and are less actively harmful to them. Nobody wants the world's vaguest summary, formatted in the most boring way possible, from a machine that is just stealing from people.
Even AI knows that, which is why Gemini links to pages that do a much better job.
Pages of mediocre text riddled with mistakes and misinformation are of no use to anyone in a world where there are infinite options for better information.
If they have the opportunity to have a person with 20 years' experience give them a person-to-person tutorial, then I agree that that is clearly the best possible option. Directing someone to a YouTube channel is not the same thing.
I'm glad we agree that some sources are better than others; now I recommend you consider why you keep advocating that newbies go to the worst possible source.
Because, again - statistically, you're likely to find the best parts of whatever an LLM tells you on pretty much any random source on the topic, because it's going to present you with a poorly drafted chimera of them. The worst content, though, will be terrible advice written to seem like it comes from professionals.
The average quality goes down, and the risk of terrible nonsense goes up.
The only reason people want to use it is because it's "fun" to see it pretend to be a person.
It does not, however, invalidate the point I was making: that one thing LLMs are good at is distilling a lot of information and coming up with simple, generic responses that reflect the general consensus, which is ideal for players who don't know where to start.
They don't do that at all. They do the opposite of that. LLMs excessively pad their responses and provide contradictory information because they aren't actually capable of processing what a question is, just finding a bunch of stuff that appears around questions on the Internet - thus they can't edit it down to simple, generic responses.
You know who can? Literally every search engine on the Internet.
Edit: Example (I asked it a typical newbie question: whether natural 1s always fail):
And it wrote a half-page essay instead of "No. As per D20 Checks, the natural 1 rule only applies to attack rolls, to reflect the uncertainty of combat. Skill checks and saving throws are meant to reflect the character's personal ability." That would be a simple, generic answer that reflects the reality (not the consensus).
Instead it mixes and matches the information, creating a scenario where a failed skill check can make you drop your weapon (clearly an error from it confusing the house rule of natural 1s on skill checks with the house rule on critical fumbles in combat) and advocates that making this happen to players 5% of the time makes things "more dramatic" (thus more desirable to the newbie) rather than warning that it can create frustration, with players feeling their characters are incompetent and shouldn't try to do anything.
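To underline how little there is to it, here's the rules-as-written logic from that simple answer as a few lines of Python (my own illustrative sketch of the rule described above, not anything the LLM produced):

```python
import random

def d20_test(modifier: int, dc: int, is_attack: bool) -> bool:
    """Resolve a d20 roll under the rules-as-written described above."""
    roll = random.randint(1, 20)
    if is_attack and roll == 1:
        return False  # natural 1 auto-misses - attack rolls only
    if is_attack and roll == 20:
        return True   # natural 20 auto-hits - attack rolls only
    # Skill checks and saving throws: no auto-fail, just roll + modifier vs DC
    return roll + modifier >= dc

# A character with a +12 modifier can never botch an easy (DC 10) skill check,
# which is the whole point of the rule ChatGPT garbled.
assert d20_test(12, 10, is_attack=False)
```

That's the entire decision the newbie was asking about; there is no half page of essay hiding in it.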
This means that if you have a newbie DM who takes this at face value, it will take more effort to untangle them from the nonsense that ChatGPT has presented - but more likely they're going to waste their time reading that, re-reading that, concluding (correctly) that it's nonsense and then either make up their own house rule or go get actual information. In other words, the best outcome is you waste their time - the worst outcome is you drive them out of the hobby.
I'm a language teacher. I have serious concerns about the way that LLMs are already impacting students finding their own, unique voice.
I'm curious - when your students are presented with research information, real-life examples substantiating the research, etc., do you expect them to pay attention to that, or do you just ask them to check what ChatGPT says? Because I've linked people to numerous studies, reports, etc., and one thing I've noticed is that the pro-AI crowd just happily make claims that are pre-emptively debunked and then triple down on them as they're proven wrong over and over.
What do you do when your students do that?
Also, if you worry about people finding their own voice, then you should absolutely not direct them to the garbage plagiarism machine that flattens everything, and thus contribute to them having no stimulating or inspiring material.
Also, how do you feel about students (and programmers, as I linked to) losing basic skills like checking source materials and working out how to summarize and study information?
LLMs are generally pretty good in that you will get a response that reflects the general consensus, rather than one person's opinion.
You're saying the general consensus is to use glue in pizza recipes, recommend you eat a set amount of rocks a day, say a peregrine falcon is the fastest marine animal, and play chess like this. I cannot explain it any more clearly: you do not have the first clue what LLMs do or do not do. You are attributing to them abilities they do not have and ignoring the actual problems they do have, even when those problems are right there in the responses they give you.
The same has been said about almost every new technology ever invented.
Factually incorrect. Thousands of things are invented and completely ignored; the US issues hundreds of thousands of patents every year, and pretty much nobody notices them, because they're things that either quietly make people's lives better or don't impact them in any way.
Generative AI has created legal liability problems, cost fortunes in waste, made searching for actual information harder, caused economic upset as people lose their jobs because their employers fired actual experts and relied on spicy autocorrect, been the largest case of intellectual property theft in history, and is accelerating climate change due to its energy and water requirements. All with no use case for the general public.
The long ago agricultural revolution and civilization as we know it was bad for people in many ways but I don't idealize the hunter-gatherer societies of our ancestors and I wouldn't want to go back to that.
Citation needed.
Very few, if any, revolutionary technologies are universally good or bad; there will always be benefits and drawbacks.
This is survivorship bias - very few of the technologies you see today are universally good or bad, because the bad ones get taken away and you don't have to deal with them. Also, how they're presented to the public is generally a major part of the poison.
Wikipedia has over 200 entries listing withdrawn drugs, including calomel (which was presented as a panacea for about 1,000 years despite being mercury, and hence giving you heavy metal poisoning, ulcers, etc.) and thalidomide (which is infamous for causing thousands of birth deformities, but is still used when the risk assessment warrants it). And, well.
(For the uninitiated, Teslas have unique problems among EVs due to a combination of their tendency to catch fire much more often than other EVs, their lack of safety features built around that risk, and the fact that Tesla won't share proprietary information with firefighters, so the fires are uniquely hard to extinguish. EV tech is fine; this implementation, not so much.)
A large part of why many of these harmful things (be they drugs, recreational radiation, FTX, whatever) cause so much harm is exactly this combination of malicious optimism and malicious indifference. AI advocates present a page of gibberish and claim it's brilliant; then, when it's pointed out that it's garbage, they claim it could be brilliant and (more importantly) it will be brilliant. A classic snake oil pitch, which is vividly obvious now as it's essentially the same argument you see for Decentraland (which is crashing and burning hard right now).
And that's assuming they even have a real product; fake products do immense harm.
This is why you should actually take the time to properly assess something before rushing in to cheer it on - not inventing weird stories that confuse Caveman Science Fiction (a truly great comic) with real history, and not pretending that nobody's hurt by things that are demonstrably doing massive damage.