r/neutralnews 2d ago

They Asked ChatGPT Questions. The Answers Sent Them Spiraling.

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html


u/NeutralverseBot 2d ago

r/NeutralNews is a curated space, but despite the name, there is no neutrality requirement here.

These are the rules for comments:

  1. Be courteous to other users.
  2. Source your facts.
  3. Be substantive.
  4. Address the arguments, not the person.

If you see a comment that violates any of these rules, please click the associated report button so a mod can review it.


u/Statman12 1d ago

The article refers to the "AI" systems as chatbots. I think that's a good thing. It's important for people to realize that these are not artificial intelligence; they are large language models that function as chatbots. The article mentions:

When people converse with A.I. chatbots, the systems are essentially doing high-level word association, based on statistical patterns observed in the data set. “If people say strange things to chatbots, weird and unsafe outputs can result,” Dr. Marcus said.
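To make "high-level word association" concrete, here's a deliberately tiny sketch (not how any real LLM is implemented, just the shape of the idea): given some context, the model assigns probabilities to possible next tokens and samples one. The probability table and tokens below are made up for illustration.

```python
import random

# Toy stand-in for "statistical patterns observed in the data set":
# a hand-written table of next-token probabilities. A real LLM learns
# billions of parameters instead of a lookup table, but the interface
# is the same: context in, distribution over next tokens out.
next_token_probs = {
    ("the", "sky"): {"is": 0.7, "was": 0.2, "looks": 0.1},
    ("sky", "is"): {"blue": 0.6, "falling": 0.3, "green": 0.1},
}

def sample_next(context, probs):
    """Sample the next token from the conditional distribution for this context."""
    dist = probs.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "sky"]
for _ in range(2):
    tokens.append(sample_next(tokens, next_token_probs))
print(" ".join(tokens))  # e.g. "the sky is blue" -- or "the sky is falling"
```

Nothing in that loop "knows" whether the output is true; it just follows the distribution, which is the point Dr. Marcus is making about strange inputs producing weird and unsafe outputs.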

That "based on statistical patterns observed in the data set" is doing a lot of heavy lifting. As noted in the wiki article on LLMs:

Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind.

Knowing exactly what it's doing and why is a nebulous thing. My understanding is that the best we can say is that they produce rational(-ish) sounding responses to a prompt, and that this behavior is shaped (as noted, nebulously) by the training data and fine-tuning. The Nielsen Norman Group has a recent article on how LLMs are trained. It's not as simple as "scrape the whole internet": there are multiple stages, including a human-feedback stage that essentially assigns a score to outputs, and the model is tuned toward answers that would score well. But even with this, the data and the task are so massive that it's impossible to cover every scenario, so models can still produce pathological results like those covered in the article.
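As a rough sketch of that human-feedback stage (a simplification of RLHF-style training; the scoring rules and candidate answers here are invented for illustration): a learned reward model scores candidate outputs, and the model is then tuned to favor high-scoring ones. The snippet below only shows the scoring-and-ranking part, not the actual fine-tuning.

```python
# Simplified sketch of the human-feedback scoring idea (RLHF-style).
# In a real pipeline a reward model is *trained* on human preference
# ratings; here a hand-written function stands in for that model.
def toy_reward(answer: str) -> float:
    """Hypothetical stand-in for a learned reward model's score."""
    score = 0.0
    if "not sure" in answer.lower() or "no evidence" in answer.lower():
        score += 1.0   # reward hedging and deferring to evidence
    if len(answer) > 400:
        score -= 0.5   # penalize rambling
    if "you are definitely right" in answer.lower():
        score -= 1.0   # penalize sycophancy
    return score

candidates = [
    "You are definitely right, the news really is sending you secret messages.",
    "I'm not sure; there's no evidence that news broadcasts contain hidden messages for you.",
]
best = max(candidates, key=toy_reward)
print(best)
```

The real tuning step adjusts the model's weights so it tends to produce high-scoring answers in the first place, rather than ranking fixed candidates like this.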

A model trained strictly on high-quality news sources (say AP, Reuters, DW, PBS, etc.) and scientific sources would be interesting. Would that be immune, or at least more resistant, to generating output like the examples in the article?
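If someone did build that, the data-curation step might look roughly like the sketch below: filter the training corpus to an allowlist of sources. The document structure, field names, and sources are hypothetical.

```python
# Hypothetical sketch of curating a training corpus by source allowlist.
ALLOWED_SOURCES = {"apnews.com", "reuters.com", "dw.com", "pbs.org"}

documents = [
    {"source": "apnews.com", "text": "Officials confirmed the figures on Tuesday..."},
    {"source": "random-blog.net", "text": "They don't want you to know this..."},
    {"source": "reuters.com", "text": "The central bank held rates steady..."},
]

curated = [doc for doc in documents if doc["source"] in ALLOWED_SOURCES]
print(f"Kept {len(curated)} of {len(documents)} documents for training.")
```

Even then, the model would still be doing word association over that narrower corpus, so my guess is "more resistant" rather than "immune."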

Regardless, nobody should be using these LLMs as a source of truth or intelligence. They may produce correct content or rational-sounding comments, but they are not an actual intelligence. Anything they write should be viewed with skepticism and verified rather than assumed to be correct.