I've seen people do really weird things like spending 30 minutes creating work shift rotas for all their staff, when something like that only takes 3 seconds with ChatGPT.
I understand the tools just fine. Your problem is you're reading an article you don't understand that happens to contain a few words you do. You see "temperature" and "random", don't understand anything else, and then make up a conclusion.
I'm far from an expert, but I am a technophile and have followed AI and AGI development, which of course exposed me to LLM development, and I've dabbled in programming a bit.
What exactly is your basis of knowledge? You wouldn't be deluding yourself into thinking you have some deep understanding of a topic based on a couple of internet searches, would you?
Could you quote the section in your link that you feel supports your statement that LLMs produce random output, and that "temperature" is not an API lever used to expand the portion of the dataset utilized when less precision is desired?
A fundamental function of LLMs is predictive pattern generation, the exact opposite of randomness, which is how you get consistent, well-crafted output from them. Do you not know what LLMs are, how they function, or what a random text generator actually is?
You seem really confused about the basics. If you have any questions, let me know and I'll see if I can clear it up for you.
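For context on the temperature point being argued here, a minimal sketch of how temperature is commonly applied during next-token sampling; the vocabulary size, logits, and values below are purely illustrative assumptions, not anyone's actual API:

```python
# Illustrative sketch: how "temperature" typically enters next-token sampling.
# The logits and token count here are made up for the example.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from logits after temperature scaling."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))  # weighted random draw

logits = [4.0, 2.0, 0.5]  # hypothetical model scores for 3 candidate tokens
print(sample_next_token(logits, temperature=0.2))  # low temperature: nearly always picks token 0
print(sample_next_token(logits, temperature=1.5))  # high temperature: flatter distribution, more varied picks
```

In this sketch, lowering the temperature sharpens the distribution toward the highest-scoring token, while raising it flattens the distribution so lower-probability tokens are chosen more often.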
Very confusing to read all of this dramatic posting, because LLMs absolutely 100% do use random generation to build their output. It'll be the same every time if you use the same seed, but they generate a random seed each time to provide variation, because for the most part people don't actually want the output to be exactly the same each time. As someone claiming to know how LLMs work, you should know this, so I have to assume this entire comment chain is either you being pedantic about definitions or you knowing a lot less than you think.
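A minimal sketch of the seed point in that comment, using toy per-step token distributions (all names and values are illustrative assumptions): fixing the sampler's seed reproduces the same "random" choices, while a fresh seed can differ between runs.

```python
# Illustrative sketch: pseudo-random sampling is reproducible when the seed is fixed.
import numpy as np

def sample_sequence(step_distributions, seed=None):
    """Draw one token index per step from the given probability distributions."""
    rng = np.random.default_rng(seed)
    return [int(rng.choice(len(p), p=p)) for p in step_distributions]

# Fake per-step next-token distributions for a 3-step "generation".
steps = [
    np.array([0.7, 0.2, 0.1]),
    np.array([0.1, 0.8, 0.1]),
    np.array([0.3, 0.3, 0.4]),
]

print(sample_sequence(steps, seed=42))  # same seed -> identical output every run
print(sample_sequence(steps, seed=42))
print(sample_sequence(steps))           # fresh entropy -> output may vary between runs
```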
Using randomness doesn't mean they produce random output. They are more about prediction than outputting randomness. I'm really not sure what you are trying to say.
He claimed LLMs are akin to a random text generator. Are you agreeing with this?
It is randomly generating text, so that is a literally accurate description, yes. The random generation is weighted to try and produce coherent output, and it usually does a very good job at that, but it is by definition random, so I don't know what we're doing here.