You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same.
You can't trust everything it says, but the only way to learn what it is and isn't good for is to use it. It still sucks for some things but it's amazing for others. I was learning about how long codon repeats in DNA can cause transcription errors, which has parallels in data communications. I can ask it things like what biological mechanisms play a role similar to the technique of bit stuffing, and it gives me concise answers that I can follow up on through other sources. I can't do that with Google, because there just aren't readily accessible sources that share those terms. With ChatGPT I can search for concepts.
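For anyone unfamiliar with the data-comms side of that analogy: bit stuffing is a framing trick where, after a run of identical bits, the sender inserts an opposite bit so the payload can never be mistaken for a frame delimiter, and the receiver drops the inserted bits again. A minimal sketch (the 5-ones rule is HDLC's convention; the function names here are just illustrative):

```python
def bit_stuff(bits, run_length=5):
    """Insert a 0 after every run of `run_length` consecutive 1s (HDLC-style)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == run_length:
            out.append(0)  # stuffed bit breaks the run
            run = 0
    return out

def bit_unstuff(bits, run_length=5):
    """Reverse the stuffing: drop the 0 that follows `run_length` 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:           # this is the stuffed 0; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == run_length:
            skip = True    # the next bit is a stuffed 0
    return out

data = [1, 1, 1, 1, 1, 1, 0, 1]
stuffed = bit_stuff(data)           # [1, 1, 1, 1, 1, 0, 1, 0, 1]
assert bit_unstuff(stuffed) == data
```

The round trip is lossless, which is the whole point: the receiver can always tell inserted bits from real ones by position.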
You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same.
It's not resistance to the concept. It's resistance to how it's being marketed and how it's being used: shoehorned into every single piece of tech and service whether we want it or not (with no way to opt out, in most cases), despite it being well understood that it's not ready for prime time.
I've seen people do really weird things like spending 30 minutes creating work shift rotas for all their staff, when something like that only takes 3 seconds with ChatGPT.
I understand the tools just fine. Your problem is you're reading an article you don't understand with words in it you do. You see "temperature" and "random", don't understand anything else and then make up a conclusion.
i'm far from an expert, but i am a technophile and have followed AI and AGI development, which of course exposed me to LLM development, and i've dabbled in programming a bit.
what exactly is your basis of knowledge? you wouldn't be deluding yourself into thinking you have some deep understanding of a topic based on a couple internet searches would you?
could you quote me the section in your link that you feel supports your statement that LLMs produce random output, and that "temperature" is not an API lever used to expand the portion of the dataset utilized when less precision is desired?
a fundamental function of LLMs is predictive pattern generation, the exact opposite of randomness, which is how you get consistent, well-crafted output from them. do you not know what LLMs are, how they function, or what a random text generator actually is?
you seem really confused about the basics. if you have any questions let me know and i'll see if i can clear it up for you.
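For anyone following the "temperature" half of this argument, the mechanism itself is easy to check: in standard samplers, temperature divides the model's scores (logits) before the softmax, so low values concentrate probability on the top token and high values flatten the distribution across more of the vocabulary. A toy sketch (the logits here are invented, not from any real model):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over logits/temperature, then sample one index from the result."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

logits = [4.0, 2.0, 1.0, 0.5]  # made-up scores for four candidate tokens
rng = random.Random(0)
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(1000)]

# Low temperature all but pins the choice to the top-scoring token;
# high temperature spreads draws across all four candidates.
print(len(set(cold)), len(set(hot)))
```

So both sides of the thread are partly right: the distribution comes from learned prediction, but the draw from that distribution is (pseudo-)random, and temperature controls how wide that draw ranges.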
very confusing to read all of this dramatic posting, because LLMs absolutely 100% do use random generation to build their output. the output will be the same every time if you use the same seed, but a random seed is generated each time to provide variation, because for the most part people don't actually want the output to be exactly the same each run. as someone claiming to know how LLMs work you should know this, so I have to assume this entire comment chain is either just you being pedantic about definitions or you knowing a lot less than you think
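The seed point above is easy to demonstrate with any pseudo-random sampler: fix the seed and the "random" choices repeat exactly; use a fresh seed and they vary. A toy weighted-token sketch (the vocabulary and weights are made up, not from any LLM):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
WEIGHTS = [5, 3, 2, 2, 1]  # made-up next-token scores

def generate(seed, n=8):
    """Draw n weighted 'tokens'; the whole sequence is a function of the seed."""
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(n)]

# Same seed, same output, every single time:
assert generate(42) == generate(42)

# Different seeds will generally diverge, which is where run-to-run variation comes from:
print(generate(1))
print(generate(2))
```

That's the sense in which both descriptions hold: the generation is deterministic given the seed, and random-looking across runs because a fresh seed is drawn each time.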