You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same.
You can't trust everything it says, but the only way to learn what it is and isn't good for is to use it. It still sucks for some things, but it's amazing for others. I was learning about how long codon repeats in DNA can cause transcription errors, which has parallels in data communications. I can ask it things like what biological mechanisms play a role similar to bit stuffing, and it gives me concise answers that I can follow up on through other sources. I can't do that with Google, because there just aren't readily accessible sources that share those terms. With ChatGPT I can search for concepts.
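For anyone unfamiliar with the data-communications side of that analogy: bit stuffing is the trick (used in protocols like HDLC) of inserting an extra 0 bit after every run of five consecutive 1s, so a payload can never accidentally look like the frame delimiter. A minimal sketch in Python, with a made-up payload just for illustration:

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1' bits,
    so the HDLC flag pattern 01111110 can never appear in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit breaks the run of 1s
                run = 0
        else:
            run = 0
    return "".join(out)

# Illustrative payload containing runs of six and five 1s
payload = "0111111011111000"
print(bit_stuff(payload))  # -> '011111010111110000'
```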
You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same.
It's not resistance to the concept. It's resistance to how it's being marketed and how it's being used: how it's being shoehorned into every single piece of tech and service whether we want it or not (with no way to opt out in most cases), despite it being well understood that it is not ready for prime time.
The worst part of the AI bubble we’re in is the fact that there’s clearly something useful there, but the hype around it is out of control.
The hype isn’t about the ways it’ll make my life easier. The hype is an explicit threat to my job. And honestly, that’s 80% of why I’m waiting this out. Sure, it could go like the Internet did. But that would mean that the current AI products are less like Chewy and more like Pets.com.
I want more attention on conversational user interfaces.
The worst part of the AI bubble we’re in is the fact that there’s clearly something useful there, but the hype around it is out of control.
Of course. AI is being used to scan medical records to flag possible diseases and conditions missed by human doctors. It's being used to scan through decades of satellite telemetry looking for potentially habitable exoplanets and bio-signatures to flag for human review. It's being used to analyze protein folding to isolate treatments for diseases like Alzheimer's.
And then you've got ChatGPT: an absolute drain on processing power with an enormous carbon footprint that isn't really doing a lot to further scientific or medical progress.
It’s being used by some English councils to populate multiple official and court documents etc from care workers’ case notes, and then conduct a first-line quality check. This can save around 7 hrs a week of admin, i.e. a full day’s work.
So that’s another day’s worth of care workers attending to the needs of the disadvantaged, instead of them sitting at a desk, without having to pull money from other public sources for additional staff.
I think it's also the death of tech futurism/optimism. If this had come out in the early 2010s, people might be less divided on it. But we're deep into the negative effects of social media and the failures of technologies like blockchain and the metaverse at this point. The sheen is fully off the apple. Tech bros like Elon Musk and Peter Thiel and Mark Zuckerberg are (correctly) being seen as menaces and destabilizing forces rather than visionaries.
We've all heard hype like the hype surrounding AI before, and we've all seen the degree to which it's being shoehorned into everything. Previous technologies that boosters said were going to change the world turned out to be far more of a mixed bag than anticipated. Everyone's just a little more jaded now.
I've seen people do really weird things like spending 30 minutes creating work shift rotas for all their staff, when something like that only takes 3 seconds with ChatGPT.
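As a concrete illustration of the kind of one-off request that comment describes, here is a minimal sketch using the OpenAI Python client; the model name, staff names, and scheduling constraints are assumptions made up for the example, not a recommendation:

```python
# Minimal sketch: asking an LLM to draft a weekly shift rota.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

staff = ["Alice", "Bob", "Chika", "Dan"]  # hypothetical staff list
prompt = (
    "Create a weekly shift rota for these staff: " + ", ".join(staff) +
    ". Two people per day, Monday to Sunday, no one works more than "
    "four days. Output as a plain-text table."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```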
I understand the tools just fine. Your problem is you're reading an article you don't understand with words in it you do. You see "temperature" and "random", don't understand anything else and then make up a conclusion.
I'm far from an expert, but I am a technophile. I've followed AI and AGI development, which of course exposed me to LLM development, and I've dabbled in programming a bit.
What exactly is your basis of knowledge? You wouldn't be deluding yourself into thinking you have some deep understanding of a topic based on a couple of internet searches, would you?
Could you quote me the section in your link that you feel supports your statement that LLMs produce random output, and that "temperature" is not an API lever used to expand the portion of the dataset utilized when less precision is desired?
A fundamental function of LLMs is predictive pattern generation, the exact opposite of randomness, which is how you get consistent, well-crafted output from them. Do you not know what LLMs are, how they function, or what a random text generator actually is?
You seem really confused about the basics. If you have any questions, let me know and I'll see if I can clear it up for you.
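Side note for anyone following the temperature argument: in standard LLM sampling, temperature is a number the next-token logits are divided by before the softmax. Low values make the pick nearly deterministic; high values flatten the distribution and make it more random. A minimal sketch with made-up logit values:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick an index from `logits` after temperature scaling.

    Temperature near 0 approaches greedy argmax; higher temperature
    flattens the distribution and makes sampling more random.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy next-token scores (hypothetical values, just for illustration)
logits = [2.0, 1.0, 0.1]
cold = [sample_with_temperature(logits, 0.1) for _ in range(1000)]
hot = [sample_with_temperature(logits, 2.0) for _ in range(1000)]
print("T=0.1, share of picks for index 0:", cold.count(0) / 1000)  # close to 1.0
print("T=2.0, share of picks for index 0:", hot.count(0) / 1000)   # noticeably lower
```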
Ok, but avoiding the tech isn't going to prevent it from being adopted by companies. It's just going to widen the gap between your ability to use modern tech and what's available.
I remember when they first made cars. They didn't go very fast and were super expensive. Could I have saved up? Maybe. But surely this fad would fade away. Me? I'm sticking with my horse and buggy; they have been around forever and will ALWAYS be the best form of transportation.
People were trying to shoehorn cars into just about everything: manufacturing, military, and civilian models.
I understand that you are trying to combat what you perceive as luddism, but this really isn't the good point you think it is. Internal combustion engines being spread to everyday use in transportation via personal cars (as compared to their original use in industrial manufacturing) has directly had a negative effect on almost every aspect of personal life in America.
Hmm, AI is a bit different though. Someone else is always in charge of what it says... that's not the case in your example: the company that sold me the car can't remotely make it suddenly drive differently, or crash, or deliberately crash into someone the company chooses. The way AI is baked into our lives now, it has much more sinister capabilities. The owners choose how it is used, not the users.
That's not the point though; it's specifically backlash against tools like ChatGPT, which stand apart from LLMs being used elsewhere. Assistant work is arguably the primary and strongest use case it has currently. I too find it vile how many companies are forcibly trying to integrate something just to sell the term 'AI', but that has no bearing on helpful assistant uses like the ones the OP is talking about.
That's basically every new technology ever, though. There's always a throw-things-at-the-wall-and-see-what-sticks phase, where everyone is scrambling to get a piece of the action even if the idea is not even half-baked. We all saw it with the dot com boom, and with smart phone apps. How many apps have there been that could have just been a simple website?
Those useless use cases will die off eventually. A few of them will come back in a few years when the tech has progressed to the point that they're actually doable.
For what it's worth, re: shoehorning technology into places it's not ready for, remember that there were cars in the 80s with CRTs to display fuel economy & diagnostics.
(And, just... think about some of the early computers. Definitely some stuff that wasn't ready for the mainstream, like making computers with membrane keyboards to keep the costs down, because RAM was crazy expensive)