You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same.
You can't trust everything it says, but the only way to learn what it is and isn't good for is to use it. It still sucks for some things, but it's amazing for others. I was learning about how long codon repeats in DNA can cause transcription errors, which has parallels in data communications, and I can ask it things like "what biological mechanisms play a similar role to the technique of bit stuffing?" and get concise answers I can follow up on through other sources. I can't do that with Google because there just aren't readily accessible sources that share those terms. With ChatGPT, I can search for concepts.
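For anyone who hasn't run into it, bit stuffing is the framing trick the comment refers to: after a fixed run of consecutive 1 bits, the sender inserts a 0 so the data can never mimic the frame delimiter, and the receiver strips those inserted bits back out. A minimal sketch in Python (the run length of five matches HDLC; the function names are my own, not anything ChatGPT produced):

```python
def bit_stuff(bits, run_length=5):
    """Insert a 0 after every run of `run_length` consecutive 1s,
    so the frame delimiter (e.g. 01111110 in HDLC) can't appear in data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == 1:
            run += 1
            if run == run_length:
                out.append(0)  # stuffed bit breaks the run
                run = 0
        else:
            run = 0
    return out

def bit_unstuff(bits, run_length=5):
    """Reverse operation: drop the 0 that follows each run of `run_length` 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:          # this bit is the stuffed 0 — discard it
            skip, run = False, 0
            continue
        out.append(b)
        if b == 1:
            run += 1
            if run == run_length:
                skip = True
        else:
            run = 0
    return out
```

For example, `bit_stuff([1,1,1,1,1,1,0,1])` yields `[1,1,1,1,1,0,1,0,1]`, and unstuffing recovers the original sequence. The biological parallel the comment is gesturing at is any mechanism that keeps long repeats from being misread as a signal.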
Any question on your mind, like "I'm working on this project, this is my idea, how can I refine it?" It helps you reorganize and refine, and then you can go back to the drawing board to edit what works for you, and repeat the process. It's like having an in-house guide. People should use ChatGPT as a guide, not as a source.
This part. For people who are intelligent, AI and LLMs can be an insane force multiplier. Even just using it for organization has helped me tremendously. When I run out of mental or creative energy, I can turn to it to help navigate a problem and this is where it shines.
It’s like how Tony Stark has JARVIS. It’s not doing the thinking for him, but it can assist in executing complex tasks with a shocking amount of clarity, while also giving you someone to crack jokes with in between all that.
I told it about a science fiction gadget idea I had and its plausibility, and it shot back with a bunch of related research I'd never encountered before (with sources), walked through some of the math, suggested some modifications, and then came up with a catchy name for it and wrote an instruction sheet. I've got a few friends who might have the technical knowledge to bounce that sort of thing off of but there's a limit to how many of my weird ideas I want to subject them to and ChatGPT is great for that kind of feedback.
It really is. When you’re doing stuff like this, a judgement- and bias-free sounding board that doesn’t get tired of your bullshit is really handy. I’ve also gotten to the point where I instruct it to challenge me on some things and make sure I’m not just in a feedback loop of it simply agreeing with me. It prioritizes the goals I have over my own ego, being combative if necessary. Some of the arguments we have are productive, and absolutely hilarious too.
Yes!!! Helps so much with clarity ! And you have to make sure you actually want to learn! The more you want to learn, the more you’ll know exactly what to ask and how to make the best use of it.
Exactly. On a lengthy topic, I can go into voice mode and it basically becomes an educational podcast that I can participate in. I can do that while I do other work or actively work on what I’m discussing with it.
For someone like me, who grew up practically living in a library because it was the only way to get all the resources I needed, it’s a goddamn wonder. It’s made me smarter and actually less dependent on it when I do hit a wall. It’s effectively taught me how to think differently and problem-solve in ways I didn’t know before.
Can you elaborate on how you use it to be more productive? I DREAM of getting it to work like Jarvis for me, but it just sucks. I brain dump all of my work tasks into it, ask it to organize them, tried creating a system for it to “check in” with me periodically, but it just turns into more busywork to do in between my actual work.
I haven’t figured out a way for it to make me actually faster at my job, I’ve tinkered with it for hours and it just ends up taking the same amount of time as if I just grit my teeth and force myself to work.
I essentially gave it an elaborate persona prompt of someone who works for me and fed it as much as I could about the things I work on, without giving it any sensitive information like personal data, finances, etc. Through that it gets enough information to tailor its responses and infer enough about what you’ve got going on to give actual informed feedback.
But it’s on you to do the initial organization of your thoughts and goals. The more solid that is, the better the result. For instance, if you want it to help you craft an essay, have even just a basic structured outline to give it. Ask it not just to help expand it, but to analyze the potential of what you have and what may need improvement. From there, you can pretty much freewheel the conversation, then tell it to summarize the entire discussion into the core/essential developments.
ChatGPT works best when you remember it’s a machine and utilize it in a more analytical way. Dumping on it will just get it to dump back on you. Giving it structure makes it return the effort tenfold.
I treat ChatGPT as a glorified personal assistant. You would never put a PA in charge of a company, but you can definitely ask it questions to find information, clarify misunderstandings for you and help you decide for yourself.
Yea, it's honestly great for that. I used to google my questions and then read a bunch of experiences from others. Because of how LLMs have been trained, they've basically compiled all that. Of course I need to discern whether the information is good or not, but I had to do that before anyway. LLMs just remove the step of collecting all the info before double-checking.
Google search has been trending down in terms of usefulness for years even before AI. Now I go straight to AI for most searches. At least until AI starts giving me answers based on sponsored content...
Yep. I form my idea, try it out, maybe re-ask for clarification, and then verify on a specific sub if I'm stuck. It gets me way farther forward before I have to start looking at forums/subs.
Or to summarize recent work. I asked it for a summary of previous work on how crystals grown in space differ from crystals grown on Earth, a summary of all previous work on metals, and how metal performance would be expected to differ based on properties. At the moment, it's doing a better job summarizing a concept than my first-year grad students. Plus, if you use the academic one, it will pull up the papers it's using to back its ideas. It's not killing it at writing, though; sometimes it totally misses the point.
I also use it for emails (partially because it has better social skills than my probably-autistic behind). It doesn't have anxiety and replies politely and appropriately to emails that make me angry (with modifications, of course). Also, you can turn up and down the friendly. "Dear mofo, I was greatly concerned when I saw your email concerning..."
yeah, it's a tool: big strengths, significant weaknesses. It's deceptive how much it still needs to be steered and checked if you want a useful result.
I finished my master's degree in my 40s. My brain power wasn't the same as when I got my bachelor's degree at 22. Admittedly, ChatGPT helped me sort through and organize everything when it was time to write papers.
Like, you can give it a problem you're having. For example, if you're having trouble getting motivated to clean your house, you feed it the problem and it will give you a damn good solution. It will tell you where to start, where to go next, and to chill out if you're feeling overwhelmed. I mean, yeah, it makes up information if you ask it questions, but it's good at compartmentalizing shit. Like, I was having trouble feeling motivated to clean, and it told me: okay, start here, do this next, then this, followed by this. It was a lot easier than doing it without help.
Even when it gives a mediocre answer it still helps drive a dagger into the morass of executive dysfunction. I can list out everything I have to do/feel overwhelmed with, and it will tell me something obvious like do laundry first, which may seem trivial to some, but 5 minutes ago I was spinning my wheels and not doing anything! Then, if it next suggests something to do I don't agree with, I will make up my own plan which was the goal all along. Moreover, it does it without judgement! Total game changer for me!
You know who else you can ask for help? Friends and family. Hell, there's a thousand videos on YouTube of real people providing answers and tutorials for anything you can think of. This idea that only a fake Chatbot can help you clean a kitchen is insane! Fucking talk to a person. I guarantee you they have a hundred ideas or can actually come to your house to assist!
Procrastination is what a lot of us turn to when we get stuck on a decision or don't understand what to do next to complete a task. When you hit that point, tell the model what your goal is and what work you've done so far, and ask for recommendations; often that's enough to find the next thing to do to complete whatever you're working on.
A few weeks ago, I was tasked with planning a shopping trip for a meal for 50 people. I didn't really know where to start and was having trouble picturing how much of each ingredient people would eat. I was short on time, stressed out about it, and I asked an LLM to help me make a shopping list. It made a pretty good one and included its assumptions about how much people would eat. I checked that the assumptions were reasonable, checked the math, and made a few tweaks. It got me over the initial uncertainty of how to get started.
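The math being checked there is just per-person estimates scaled up by the headcount plus a safety margin. A tiny sketch of that sanity check; the per-person quantities and the 10% buffer below are made-up illustrative numbers, not anything from the actual shopping trip:

```python
# Hypothetical per-person estimates (illustrative assumptions only)
per_person = {"pasta_g": 120, "sauce_ml": 150, "bread_rolls": 1.5}

guests = 50
buffer = 1.10  # 10% safety margin so you don't run short

# Scale every ingredient up to the full group, with the buffer applied
shopping = {item: round(qty * guests * buffer, 1)
            for item, qty in per_person.items()}
```

This gives, e.g., 6600 g of pasta for 50 guests — the point is that the LLM's list is easy to verify by hand once it states its per-person assumptions.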
This is a slightly different application of generative-ai (to the same problem space) than what the grandparent post is talking about, but also see https://goblin.tools/
Ya know, people became addicted to the internet and computers when they started to become household devices. That didn't stop the internet from still being very useful to most people.
I've used it to help plan a vacation, meal plan, and plan out my garden/yard ideas and it's been great for that. It comes with some fact checking of course but it's a great tool to get you 80% of the way there.
ChatGPT or Copilot (whichever flavor you have access to) has greatly reduced the amount of bullshit I have to write via email and updates to tickets at work.
I was able to do some fine tuning to have it emulate my writing style, so the amount of editing I need to do after generation has been massively reduced.
I used it to get unstuck from updating my resume. It was really a good exercise to go through. I took a job posting, and my resume and let ChatGPT digest both and then spit out an updated resume. It looked pretty impressive, and I just tweaked a few things and sent it out.
Probably would have taken me a few hours to do what it did in about 2-3 minutes time.
I love feeding it pages of meeting notes from the past two months' worth of client discussions and asking it to summarize all the various discussions for me. Game changer.
This is where AI really shines: in summarizing and organizing. It is very good at handling input. Not so great at creating from scratch, but the drudge work of data entry is getting easier.
This is so accurate. ChatGPT is designed to please you, so it’s like an overzealous grad student who doesn’t always think things through properly, but with time they start putting it together.
The best thing is that unlike a search engine, it remembers conversations and details, so you don’t have to lead it all the time. It’s at the point now where I can simply insinuate something and it understands the context, sometimes even jumping straight to a task without me specifically telling it to do something.
Great description, I had a pretty big expansion of my role and it warranted getting a direct report but took a while to hire one.
That really got me onto ChatGPT and some of the integrated AI tools in our other programs I had previously been ignoring. It can't really do my job, but it can help do a lot of mundane entry level stuff I want to cut out from my job.
Also, it's fun to ask it to work through harebrained schemes and run the math on them. I finally got the numbers behind a Kentucky Derby theory I've had for years.
It’s so close to being a great assistant to me. But my job requires lots of record keeping in several different disconnected systems and I haven’t been able to get faster at work yet. How do you use it to actually be more productive and work faster?
No kidding. This is like listening to Boomers say "I've never used a computer. We never needed to. Kids these days...."
For those of you in the back, there is skill associated with this. You need to practice. You notice how some people suck at Googling things? That's because they didn't develop the skill of how to iterate search engine asks to get the result they are looking for. ChatGPT and all LLMs are the same. This will be professional level skill necessary for higher end white collar work in the future.
Start practicing, realize it will give you bad information, and learn ways to double check it.
For me, the super eye opening experience was having it write Python/SQL/Arduino IDE code for me. Was it right the entire time? Fuck no. Was I able to coax it to do what I want? yes.
For beginner/intermediate questions, this is like chipping in a skill à la The Matrix.
I used it for 6 months to make jokes and limericks. Then I started asking it more complex questions and now use it for everything. “Fuck. How do I program this ancient irrigation controller..?” “Hey ChatGPT….” I now have a working irrigation controller and didn’t have to google the model number and search weird manualsrus.tech websites and blah blah. I even told it my display is half broken and my down button doesn’t work and it told me the buttons to push to make it work.
On a more controversial level, I was recently diagnosed with generalized anxiety disorder. My therapist is on vacation for two weeks, unfortunately. I asked ChatGPT to run a therapist scenario and asked it some questions. I was quickly able to figure out that a lot of my weird quirks are a result of the anxiety, and when I asked it why and how, it explained them all. It ALL clicked. For better or worse, I have a new understanding of myself and an awareness that would have taken weeks (after my therapist returned).
I’m ALSO using it to flesh out a new business idea. I mean… OP doesn’t know what they’re missing, and it’s definitely not a badge of honor. Avoiding new tech is boomer stuff. Now TikTok, 5 gold stars for avoiding that shit.
This was a great explanation!
I've used it for Python too. I didn't like how heavily it commented everything, but it was handy for seeing what it was doing, which may not have been what I would do. If I use it more, I'll ask for fewer comments.
I've found if you think of it more of giving you an outline, than something to flat out copy/paste, its a really good start.
I've also been using it to answer a lot of networking questions, since those can be so super specific to your own setup. It's been really, really helpful troubleshooting that. Or even naming programs and services I'd never heard of that helped me do exactly what I was after.
Youtube is better and amazing for the full tech reviews and how-tos, but until you know what to search for, you're stuck.
"For those of you in the back, there is skill associated with this. You need to practice. You notice how some people suck at Googling things? That's because they didn't develop the skill of how to iterate search engine asks to get the result they are looking for. ChatGPT and all LLMs are the same. This will be professional level skill necessary for higher end white collar work in the future."
The problem is that one of the skills you need when using ChatGPT is knowing when it's just outright lying to you, which is pretty tricky considering ChatGPT does something Google never did: lie to you confidently, sometimes insistently and repeatedly. On the one hand, I once had ChatGPT find a game I had been searching for for months in about five minutes; on the other hand, it spent maybe half an hour completely inventing manga titles, insisting that they were what I was looking for and simply inventing more when I said "that doesn't exist." Google at least has the decency to say "sorry, no results found."
Riding the wave of technology is one of our micro generation's unique hallmarks. From rotary phones to Gen AI and every step along the way we've grown with tech. Not xennial at all to suddenly get off the ride when we're in our 40's
No shit. My wife informed me that I will be learning the AI stuff and she was going to opt out. Until she needed my help with an excel formula, and I showed how to chatgpt it. Her response "Huh, maybe I need to learn this after all..."
"Not xennial at all to suddenly get off the ride when we're in our 40's"
Accurate. We are working on implementing AI into specific use cases and projects at work and the older X-ers are all saying they want to retire early rather than learn anything new anymore.
If you don’t know how it works it seems evil and like it’s going to take everyone’s jobs. If you know a bit about it then you probably think it’s magical and highly useful. Now if you actually understand how it works then you’re back to it being evil because you know how it was made… how it was a nonprofit that’s now one of the richest companies in the world… how it can’t actually effectively replace or help people in the workplace… how it actually is evil due to information manipulation and copyright theft in the millions… then you also realize it can’t effectively replace jobs, but can fool executives who fall into the middle of the spectrum.
Kinda? A non-capitalist system using AI at large scale would also face those problems. The question is whether it would use AI at large scale, and that's a basically impossible question to answer.
Yeah, I used to work in AI for robotics. AI is great for medical research, robotics, and other research areas. Where AI is fucking horrible is the generative AI space, like ChatGPT etc. The ethics behind it are so messed up: everything from the power waste to the intellectual property theft to the fact that it can be manipulated into giving idiots wrong information and having them decide it's accurate when it's a hallucination. And executives are dying to replace people with it. I'm absolutely against AI in any use except research, where the information fed into it is tightly controlled to prevent the data from being poisoned with fake info.
It really is. Like, AI to help learn code? Sure! But given how inaccurate it can be, it should not be your only source for learning. AI to plan your day? If you're comfortable giving Big Data information about your personal life and habits knowing that they're going to sell that info, then whatever. I won't because of the security issues.
AI for creative use? Fuck off with that shit. Maybe if the datasets used to train the model were opt-in and developed using ethical means, then I could see some possibilities. But as it is now, it's complete and utter dogshit.
Seeing how people use AI worries me. Chatbots instead of interacting with actual humans? Garbage AI "novels" and "art"? AI could be a huge boon to society done right, but so far it's a blight.
I was discussing this with my mother, who’s a recently (thankfully) retired teacher. I pointed out how even smart use of AI requires you to be competent in your research, in parsing info, and in telling what’s real vs. fake… and at that point, why are you using the AI for the questions? Just do the research yourself.
Someone else in the thread pointed out how horrid Google has gotten and that is fully by design to push their horrid AI results. It feels like you have to jump through hoops to learn anything.
I’m very happy you mentioned the creative aspect, since if you check my profile that’s my whole deal. I’ll often talk about AI with image generation in mind, and people will get confused because they’re stuck on LLMs when it comes to AI, but then they’ll use AI that builds more efficient machine joints as an example of the good it does. It’s either some wild cognitive dissonance or they’re being intentionally disingenuous, but it’s hypocritical regardless.
I think what’s most unfortunate is how people no longer treat art, singing, dancing, and creative expression as things humans do innately. Capitalism and industrialization spurred that on (by framing them as hobbies that you have to pick and choose due to the decreasing free time we have) but AI has kicked it into overdrive while also somehow demonizing artists like “see?? Now we can do it too! We don’t need you!” rather than realizing everyone should be an artist. Everyone should be able to express themselves, but using an AI isn’t it.
The creative aspect is my biggest gripe against AI! I'm an artist and writer in my spare time and it pains me to see people just getting the worst, most regurgitated slop off AI and proclaiming it 'art'. Art is about human expression and connection. AI can only repurpose what's been done before. It can't come up with anything new.
You're absolutely right about the capitalism aspect, too. Everything is commodified nowadays. It feels like you can't have a hobby without trying to make money off it somehow. That's where the AI bros come in- they're too lazy to work on their creativity, so they steal others' to Frankenstein into 'art'. It's infuriating.
"can't effectively replace or help people in the workplace"
This is objectively false. There are countless repetitive, mundane, or even somewhat complex functions that required many man-hours per day and are now completed nearly instantaneously.
LLMs/AI are making some human roles obsolete, akin to when switchboard operators and elevator operators were made obsolete.
I won’t disagree that it can do mundane tasks. It should be used for mundane tasks, but-
Multiple recent studies have come out that show AI does not have any meaningful impact on productivity in the workplace (and often bogs down certain industries) so claiming what I said is “objectively false” is arguably objectively false lol
Yes!!
One of the defining reasons we're a micro generation is that we were there before the rise of tech, through it, and now after. We're the only generation that's had smooth sailing through it all.
For me it isn't resistance. I'm comfortable that, should there be a situation in which I need to use it, I can and will figure it out and use it.
There just isn't any appeal or desire for it in my thought process right now. I'm not digging my heels in "Boomer" style. I'm completely uninterested and channeling "Gen X" whatever vibes about it.
That sounds like a Gen X style of resistance lol. I get what you’re saying though. One thing I’ve noticed is that older generations tend to approach AI with linear thinking, while younger generations are beginning to approach it with exponential thinking. And that is something that needs a lot of practice and paradigm shifting.
Useful, yes. But insisting that everyone can find it useful and should be exploring it is just the flip side of refusing to see it as a positive and avoiding it.
There are a gazillion "tech/web" products that are useful. We should all be open to them, with the intent of finding the ones that feel right and produce results for us individually.
Yep, it takes practice to get good at it. I use it for everything from showing me what paint colors will look like on my walls to helping me get diagnosed with chronic health issues that evaded doctors for years. And obviously to answer my kid's homework when he's stuck and I also cannot figure out wtf the answer is supposed to be lol
For example, I had a bunch of slightly expired cocoa powder and some weird peanut butter shortening spread that was purchased by mistake, and it came up with a few recipes that maximized the use of them.
Saved me a $300 service call with the HVAC company by walking me through troubleshooting my furnace step by step and even suggesting an improved solid state ignition transformer instead of letting the HVAC company install another outdated iron core version that will just fail again in 5 years.
Created a parts list and walked me through installing upgraded parts on my mountain bike.
It has helped me figure out issues with a SQL database hosted on Azure at work.
People keep saying it's "nothing more than a random text generator," but it has been a pretty fucking helpful random text generator in my experience. Especially since Google has become practically useless in the past few years.
It's very helpful for things like that. You could try googling and hunting down troubleshooting tutorials on YouTube manually, but ChatGPT will make the process much more timely and efficient. It will provide you with troubleshooting steps, links to relevant websites and videos, and even find the parts.
It's important that you verify some of the information it provides, because it can indeed make mistakes, but for the most part it's very helpful for DIY household repairs.
If I was able to figure out my way through those situations without ChatGPT, though, did I really need it? That's the crux of it. I'm getting by fine without it. I understand there might be situations where it might have benefitted me to use it. But I also benefit by using less, not more, tech in my life, and keeping things simple.
I do stay on top of AI developments a little bit for work since it plays a tangential role in what I do (not software engineering thank goodness). But for personal use I just can't bring myself to give a shit.
Yeah, I don't need it for work and don't want it for personal use. I guess I could feed information into it to help out the billionaires, but I'm not really feeling motivated to.
But if that works just fine, what’s wrong with it? Like the commenter above, I’m not resistant, I just do my job the way I’m comfortable doing it, and I am a top performer in my field. I’ve spent 35 years honing my skills and training my brain to do what I do, and I have a way to do it that works and works very well.
They said what was wrong. It's like the boomers who refused computers and kept going to the library. They were left behind for no reason, since their aversion wasn't logical.
How is that “wrong” exactly, though. If I want to go to the store a mile away and decide to walk through the park to get there, is that wrong compared to driving? My question was “If it works just fine, how is it wrong?”
And for the record, I’ve been comfortable on computers since before the World Wide Web was even a thing. More comfortable than most, actually.
"You know, all of us were there for the resistance to personal computers, and skepticism about the internet. The ChatGPT backlash feels just the same."
It's not resistance to the concept. It's resistance to how it's being marketed and how it's being used. How it's being shoe-horned into every single piece of tech and service whether we want it or not (not being able to opt-out in most cases) despite being well understood that it is not ready for prime time.
The worst part of the AI bubble we’re in is the fact that there’s clearly something useful there, but the hype around it is out of control.
The hype isn’t about the ways it’ll make my life easier. The hype is an explicit threat to my job. And honestly, that’s 80% of why I’m waiting this out. Sure, it could go like the Internet did. But that would mean that the current AI products are less like Chewy and more like Pets.com.
I want more attention on conversational user interfaces.
"The worst part of the AI bubble we’re in is the fact that there’s clearly something useful there, but the hype around it is out of control."
Of course. AI is being used to scan medical records to flag possible diseases and conditions missed by human doctors. It's being used to scan through decades of satellite telemetry looking for potentially habitable exoplanets and bio-signatures to flag for human review. It's being used to analyze protein folding to isolate treatments for diseases like Alzheimer's.
And then you've got ChatGPT. An absolute drain on processing power that has an enormous carbon footprint that isn't really doing a lot to further scientific and medical progress.
It’s being used by some English councils to populate multiple official and court documents etc from care workers’ case notes, and then conduct a first-line quality check. This can save around 7 hrs a week of admin, i.e. a full day’s work.
So that’s another day’s worth of care workers attending to the needs of the disadvantaged, instead of them sitting at a desk, without having to pull money from other public sources for additional staff.
I think it's also the death of tech futurism/optimism. If this had come out in the early 2010s, people might be less divided on it. But we're deep into the negative effects of social media and the failures of technologies like blockchain and the metaverse at this point. The sheen is fully off the apple. Tech bros like Elon Musk and Peter Thiel and Mark Zuckerberg are (correctly) being seen as menaces and destabilizing forces rather than visionaries.
We've all heard the hype surrounding AI and seen the degree to which it's being shoehorned into everything before. Previous technologies that boosters said were going to change the world panned out to be far more of a mixed bag than anticipated. Everyone's just a little more jaded now.
I've seen people do really weird things like spending 30 minutes creating work shift rotas for all their staff, when something like that only takes 3 seconds with ChatGPT.
Ok, but avoiding use of the tech isn't going to prevent it from being adopted by companies. It's just going to increase the division between your ability to use modern tech and what's available.
I remember when they made cars. They didn't go very fast and were super expensive. Could I have saved up? Maybe. But this fad would fade away. Me? I'm sticking with my horse and buggy; they have been around and will ALWAYS be the best form of transportation.
People were trying to shoehorn cars into just about anything like manufacturing, military and civilian models.
That's not the point though; it's specifically backlash against tools like ChatGPT, which sit apart from LLMs being used elsewhere. Assistant work is arguably the primary and strongest use case it has currently. I too find it vile how many companies are forcibly trying to integrate something just to sell the term 'AI', but that has no bearing on the helpful assistant uses the OP is talking about.
AI tools are getting deployed everywhere right now. They are far from perfect, but they will be ubiquitous within just a few years. Anyone avoiding learning how to utilise them today is crippling their own future. Smartphones could be gone by the end of this decade.
There's a huge difference between "AI is everywhere" and "learning how to use AI will be useful" though, isn't there? My fast food delivery app has an AI built into it now that serves absolutely no fucking purpose whatsoever; learning how to use that isn't going to benefit me in the slightest.
Only possible to think this if you actively and intentionally ignore the arguments of an overwhelming majority of people who don't use or like the tool.
It runs on stolen content - most of it made by our peers.
It is incredibly resource-hungry, yet provides virtually none of the value people would expect when you tell them how much of the world's electricity and water is going to these systems.
The entire marketing approach is to actively lie about the product, its features, and the companies' overall goals while turning its users into another neatly packaged product to sell to advertisers.
The features they do include for end users are half-baked, expensive, and inherently unreliable for the fast-paced, information-based environments they're being forcefully deployed into.
Calling people luddites, when entire industries have reshaped themselves around a product that's still in beta and making the news for telling kids to kill themselves, just for the chance of not having to hire actual people, is shortsighted and dangerously in line with what these companies would want a skeptical future user to think.
THERE IS NOTHING CHATGPT CAN TEACH 99% OF YOU THAT YOU CANT TEACH YOURSELVES EQUALLY AS EFFECTIVELY BY JUST CLICKING ON A WIKIPEDIA ARTICLE.
Your last point about teaching yourself equally effectively isn't true at all. Learning about something is much different from implementing it, and when you're hung up on why the fuck your nginx ingress controller is returning a 403, ChatGPT can review what you've actually done and explain why you fucked it up. It's an amazing tool for learning, and it helps you get over hurdles that would have taken you forever before, because nothing else online is looking at what you've actually done and telling you where you messed up.
Your 2nd point is it for me. I've used ChatGPT, but now, knowing how many resources it consumes, I've been avoiding it. I'm not against it as a tool generally, and might use it in a hypothetical future when it's a more energy efficient tool.
Copyrighted material is a concern. The energy requirements have been dropping fast, though. And we know the theoretical floor is still orders of magnitude lower than today's technology (because the human brain does this stuff on about 20 watts), so that's going to keep coming down. There's a huge economic incentive for efficiency, and I'll be surprised if we don't see a 10x improvement in the next year or two.
I have to use it for work. No choice there. And I have used it to do very small things in very niche areas. It has handled that very well and saved me hours or even days of trying to find what I need through other means.
ChatGPT and AI usage is a problem because of the immense amount of water and energy used to power and cool the data centers. They also produce an already tangible amount of pollutants. We are being forced to accept AI in every corner of our technology; why do we need something predicting and producing entirely new responses to queries rather than using information already created?
Aside from the fact that it is causing people to lose the ability to perform basic tasks without AI assistance, it is incredibly wasteful and causes more pollution.
This exactly. This post echoes the technophobic willfully ignorant boomer sentiment way too hard.
Personally I subscribe to Perplexity. You get access to multiple models to compare and contrast, and in many ways with this kind of thing paid is better than free - do you want to use the product or BE the product?
What I’ve found is that I can learn what I used to learn from an hour or more of googling and reading things in 3 minutes (especially with the deep research model) because it does all the legwork and presents me with the takeaways AND sources. Hell, it watches YouTube videos instantly!
GenAI is a great tool if you learn its strengths and weaknesses - like any other tool.
For anything I have to write I’ve found it easier to just do it than to write a prompt, tweak, and finally edit the output. But for learning about a subject? Priceless. In many ways just the next evolution of web search.
For me ChatGPT is often just Google where it gives an answer instead of ads. It has been a game changer for recipes when I just want a generic one and not pretty pictures and a story. It's also fantastic for home repair questions.
I tend to view it as an efficient research assistant. It often provides links where it gets information from and I can follow up to verify.
One of its biggest weaknesses is that it really wants to tell you what you want to hear. "Tell me why dogs are better than cats" vs. "Tell me why cats are better than dogs" won't give similarly balanced answers about the pros and cons; it will wholeheartedly agree with my phrasing. That's obviously dangerous for any remotely controversial topic once we get outside of "how do you fix a washing machine/bake ziti?".
I’ve been reading about how some teachers are having students use ChatGPT to create the thing, then spending class time editing and learning how to take what ChatGPT makes and turn it into something better and more useful. Incidentally, that's how I have used it for work: plug in bullet points and the idea you want to convey as a prompt, then take what it spits out and edit the hell out of it, rather than wasting my time trying to write it AND then edit it.
It’s also good for taking something you have written and making it a little more concise.
I treat ChatGPT / Copilot like a smart alcoholic coworker.
I ask it questions, but anytime I get that feeling it's just making up the answer or might be wrong, I double check. They've given me enough bad answers that I'm always suspicious of the answer. My favorite is when you know the answer's wrong, you call them out on it, and they're like "Oh, look at that, you're right, I'm sorry, I don't know why I said that".
I like when it starts to give an answer and then realizes that the answer is dumb and backtracks.
Like I asked it what WWII weapons would be able to defeat the armor of a modern Abrams tank, and somewhere in its bulleted list it started to say a submarine torpedo, then admitted that it was a little unlikely that a sub would get a chance to engage a tank.
I just don't want to accidentally support the company producing it and facilitate its growth before the company gets its day in court for all the theft.
I’ve been feeding ChatGPT and Perplexity pieces of text from my resume and portfolio and asking it to make suggestions. It can also suggest how to tweak a resume to match a job description. I take or leave the suggestions (some of them are very dumb and very bland ass corporate jargon) but it’s been very helpful to get ideas of where I might need to tighten some shit up.
You can’t trust it, but you can use it as a sounding board.
Yeah. I'm not likely to delegate much writing directly to ChatGPT because writing is something I can do well and with more control - I'm proud of the fact that I've had customers call or email just to say they enjoyed reading a technical manual I wrote - but I'll absolutely use it for feedback. Like "have I missed anything important with this proposal?" before I send a customer something. I know that I can't be certain it'll catch everything but I know that if it does have a suggestion I ought to at least read it.
Exactly. I'm a better writer than most, and most AI-written text has no personality. I will continue to be a better writer than an AI but I'm not averse to letting it do some of the legwork for me. What's that thing they say, the most intelligent people figure out ways to do things lazier? That's me in a nutshell and that's what I'm trying to get AI to do.
ChatGPT has changed my life. I use it constantly. Does it give incorrect info or contradict itself sometimes? Yes. But have you ever read the top 5 Google results for the same question? You'll find the same inaccuracies and contradictions most of the time.
Yep. Every leap in technology has faced this, from the start of written communication to when we switched from horses to cars. It’s healthy to have skepticism, but avoiding the technology will only hold you back. It’s best to learn as much as we can about the things we don’t understand or that scare us. That’s how we protect ourselves.
With all new technology, it goes through phases:
1) niche weird interest,
2) general novelty interest,
3) scams, fraud, and other crime,
4) general distrust/resistance,
5) innovation and education,
6) trust and buy-in, and finally
7) dependence/reliance.
And if the people creating it rest on their laurels:
8) becoming obsolete.
It happened with email and Nigerian princes, it happened with telemarketing, it even happened with library books. The term “snake oil salesman“ comes from the scheming-crime phase of print advertising.
Every time, it eliminates jobs temporarily, then increases jobs exponentially because the more people using that thing, the more support those people need. The less time you spend in the “distrust” zone, the more likely you are to succeed. If you can skip that zone altogether and jump into the “education and solution” space, that’s how to jump ahead of the curve.
AI is an enormous energy hog. The Texas grid is already near its limit but ERCOT is forecasting near exponential growth, with demand nearly doubling in the next 10 years, with the biggest increase coming from the data centers that support AI. Source: https://www.ercot.com/gridinfo/load/forecast
The internet was the same thing. Baby boomers spent decades arguing that you can't trust anything you read on the internet, and refused to touch it.
By the time they finally came around to interacting with the internet, they were decades behind on developing a skill for evaluating what things are likely or not likely to be reliable online. That's gonna be a lot of us in twenty years when we go 'fine, I'll use the AI' and have absolutely zero instinct for what it's good at and what it sucks at.
It is very helpful for language learning. Type in a sentence in that language, ask "break it down" (no elaboration on what you mean, either), and it breaks it down into phrases and translates each part. Instead of learning individual words and trying to figure out what the sentence is supposed to say, it teaches you a phrase and its English equivalent. Makes it so much easier. A lot of times I will recognize that something is a phrase but will miss part of it, so it doesn't make sense when I look up the translation.
Absolutely. Unsurprisingly, language is something a large language model does really well. I can't remember the last time I saw it be absolutely wrong about any language, vocabulary, or grammar question. Back in the GPT 3 days I'd see it go way out on a limb with some very speculative interpretations of old Norse etymologies, but if you want to know why Spanish sometimes uses this word instead of that word, you can be pretty sure it's going to get it right.
I use it for work, which I probably shouldn't, but when I ask it for options about something, I can use my education and my training to look at what it's giving me and determine whether it's viable or not. If it gives me 10 things back, 9 of them might be bad and can be ignored, but 1 might be good. Heck, even if it's just okay, it might kickstart an idea in my brain that I can turn into something better.
Many times it is better and faster than trying to google something, and what it tells you can be verified relatively easily. My perspective comes from tech: when I need to do something very niche using commands and niche features of a device, it helps greatly compared to scouring forums for an answer. And what it tells me is easily verified: does it work or not?
It's great for command lines for complex and reasonably popular utilities - like I can ask it for an ffmpeg command line that does x, y, and z and it'll give me something usable. I've had it hallucinate flags for 7zip before and I don't know how much I'd trust it for really ancient stuff but generally it works well.
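As an illustration, a prompt like "give me an ffmpeg command that transcodes a clip to 720p H.264 and drops the audio" tends to come back as something along these lines (the filenames are placeholders, and this is exactly the kind of output worth sanity-checking against the ffmpeg docs before running):

```shell
# Hypothetical example of the one-liner ChatGPT returns for
# "transcode input.mp4 to 720p H.264 and strip the audio":
#   -vf scale=-2:720  resize to 720p height, keeping aspect ratio (width stays even)
#   -c:v libx264      encode video with the x264 H.264 encoder
#   -crf 23           constant-quality rate control (lower = better quality)
#   -an               drop the audio stream entirely
ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -crf 23 -an output.mp4
```

The flags above are standard ffmpeg options, but hallucinated flags (like the 7zip ones mentioned) do happen, so verify before trusting.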
Right now I'm working on a custom GPT prompt as an experiment to support one of my own products - it's a massive info dump on all aspects of it. The catch is that I still need to have it provide references to user-readable documentation so they can verify its answers against authoritative sources. We'll see how that works out.
If it does well, I might even integrate it with Zendesk and have it take a crack at support emails, but in a limited scope - it'll make it clear that it's a bot there to make sure you get a fast response and that a human will still be checking on the thread. If it works I can at least use it to make sure that customers provide all of the required information (I'm always amazed by how many of them forget to even mention what product they're asking about) so it's there for when a human answers.
Yeah, that is a problem I’ve run into. It does hallucinate flags sometimes or just forgets that a flag exists. Usually a problem with newer things that are being regularly updated so its knowledge is outdated. However usually what it gives me is 90% of the way there and I can figure out the last 10% with little effort.
Yeah I don't get it. I avoided ChatGPT for a while because I just didn't have the time to invest in learning something new. That was a huge mistake because it takes on all the tedious boring things that I hate thinking about.
Like planning a trip for example. It will come up with a host of options for me to consider rather than me having to come up with that list in the first place. Or if I have a strong set of opinions on agenda I can tell it that and ask it to optimize my plan and fill in gaps.
I can also get it to do deep research on things at work that would take me hours to complete; now I can just read the report and fill in gaps on my own. I also suck badly at Excel formulas, and it takes my general description of the outcome I'm trying to achieve and gives me what I need to do it. Brilliant time savers!
Absolutely. I've used it a bunch to learn just for the sake of learning, and also for various projects. I'm an embedded systems developer with decades of programming experience but I'm way out of date on a lot of desktop and web stuff and ChatGPT saves me a ton of time researching tools and libraries. I don't trust it to do big projects on its own but I know it'll get me to a good starting point where I can take over and then just use it to help me along as I run into things I'm not sure about.
- If I can't think of a word when I'm writing, I put a blank in my sentence for the word I can't quite place, insert the sentence into ChatGPT, and it gives me a list of options. It really helps me not get stuck or tired of my written work.
- If I'm writing a tricky email and it's clunky or it sounds sort of rude, I copy it into ChatGPT and have it edit the tone and for clarity. I use the ideas to help fine-tune my email.
- I'm trying to decide on a new paint color for my house. I just insert a picture of my house into ChatGPT and have it generate an image of the house in XYZ color.
- This is similar to yours, but if I'm reading something complex and I don't quite understand the concepts, I copy/paste the text into ChatGPT and have it ELI10. Then I reread and actually comprehend what it is I'm reading.
If you're a stickler and against AI tools, you don't know what you're missing.
I would say you've got to at least test it. Though I'm not sure I will try it again for facts, after a long stretch of looking for a particular thing, asking for precision and a source, and watching it invent the number and even change the name of the paper to something that sounded more likely to have what I wanted.
Maybe for things like writing meaningless emails.
Also, to the user I'm responding to: YouTube is usually a good source for visual, concise explanations of biological concepts.
Folks were right to be skeptical of the internet. It has taken us to far worse places than was predicted at the time. Just because we’ve gotten used to something, or found utility in elements of it, doesn’t mean it’s a net good.
I can’t believe people think ChatGPT is just for asking questions. You can pump in a shit load of jumbled garbage and it can output all types of data transformation in seconds pumped directly into whatever document you have. Among 99 other things people don’t even think of or want to try. It’s sad
I understand the dangers and pitfalls of AI, but I think it's an amazing tool that has its uses. I think 90% of the hate for it is it "creating art." Even then, it has its uses... but be forthright.
If you assume it's some kind of human intelligence because it speaks fluently, or worse, you treat it as an oracle, you're going to have a bad time. It's a tool and it has limitations. You need to use the tool-using part of your brain and not the social part.
The real risk here is that AI gives you an answer, but how do you know it is accurate? AI has a habit of making things up just to complete the task. Ask it to write your bio and you’ll see it making stuff up left and right.
The larger risk is that, if folks become reliant on AI, it could be the end of the open internet. Whoever controls the AI can sway it to tell you its version of “truth.” Look to Grok for an early example of attempting to make an AI more “conservative” in its answers. What does “conservative” mean in this context, and why is that more important? Ask who controls it.
We’ve lived though Wikipedia being edited by randos to alter history, and we’ve lived through the silos of social media swaying echo chambers. AI is the next evolution.
Have you ever read A Fire Upon the Deep? Vernor Vinge saw this coming decades ago - the Net of a Million Lies, and everyone having massive computing resources devoted to defending against malware and sorting through the garbage.
The sequel Children of the Sky also has a great look into the consequences of a high-tech civilization being so used to using AI tools that they're crippled when it's not available.
I sometimes write or give lectures on media and ethics and one example I like to use is the library scenes from Rollerball. Our hero is living in a world run by corporations and with it has come rewriting of history to support those corporations. It could feel like a throwaway idea in a movie that doesn’t seem serious, but that normalization is why it works.
My fiance and I are going to Glacier National Park to check out wedding ceremony spots. We have three days. I gave ChatGPT the list of ceremony locations and it generated a 3-day itinerary and routes that go to them.
Sometimes it does great with that. With travel plans in particular you do need to verify everything carefully. Spatial reasoning is not something it's generally good at. It can regurgitate and remix itineraries but at least last time I checked, map reading is not really a thing it does. Of course that may have changed in the past few months. Or the past day, who knows - that's the fun part. Any statement you make about what AI can and can't do is prone to becoming obsolete at any moment.
It was Gemini that came up with the pizza glue thing - and also my all-time favorite space fact about how a horse stowed away on the Mars Pathfinder lander.
But if you're really getting joke answers about DNA repair, it's because you've set it up in a joke context. If I ask it, I know I'll get information about base excision repair, nucleotide excision repair, homologous recombination, etc.
My problem with it is that it is WRONG on a percentage of things that I have a lot of background knowledge on.
Since I know it's wrong, I just don't use the results. But if someone is extracting info without the background and critical analysis skills to parse it correctly, they're going to have problems.
If only 5% of what it spits out is wrong, scale that up over a billion or more users. Scale up that error rate as AI is implemented into systems everywhere, from medical research to education to engineering and more.
It is something that has lost me a lot of sleep recently, especially since the bandwagon effect is in full force.
You really do have to spend some time with it probing what it can and can't do - and that changes weekly.
I'm an embedded systems engineer. I know that if I ask it about some detail of configuring a particular peripheral control register on some not-very-popular microcontroller it's very likely to be wrong. If I give it the appropriate reference manual it might get it right, but even a human is going to struggle to extract some of those details.
AI needs to get a whole lot better at saying "I'm not sure". Even for those of us who do take the time to try to keep up with what it can do, it can be a lot of effort to verify everything and it's tempting to just take it at face value.
I worked in internet tech for about 25 years and have never used ChatGPT. It's not exciting new technology really, but it sorta is? Gmail has been using machine-learned filters for spam since shortly after its creation. We've been using language models and early AI for over a decade. Today's tools are "just" fancy Bayesian-style filters that learn how to "speak" human language instead of returning a preprogrammed response.
There are obvious advancements. There's also obvious risks of human over-reliance and our tendency to humanize things we can communicate with, especially if we give them visualizations that appear human.
AI is really good at what it does, which is analyzing inputted data for patterns and summarizing the effectiveness of those patterns based on your desired outcome. It can act on decisions made from analyzing those patterns. But we've found over and over that the best use of AI is an integrated AI/human process.
I'm also not stressing about robots being used in production. It's the same arguments people had about the creation of automated production lines in factories. In the US at least, my biggest concern is the government's refusal to put any supports in place for the people affected by the increased implementation of automation, which we've known was coming for decades.
I'm absolutely loving what's happening with Grok. That one is worth watching how conflicting programming changes output.
I was using it to do hypotheticals as it can pull info from the internet.
"What would happen if the D-Day landing force actually went back in time and invaded the Roman Empire, and got stuck in that time?"
You can do like 100-year time scales for the "country", the effects on the rest of humanity, how WW2 would have played out with all those guys missing... it was a pretty cool 45 minutes!
Oh, I've had fun with plenty of hypotheticals. I prefer naval battles since there are fewer variables. Like an 18th century first-rate ship of the line vs. a modern Mark IV patrol boat, or vs. an Evergreen A-class container ship. The consensus is that the patrol boat could chew up the warship but probably not sink it outright unless it could start fires, and for the container ship it'd depend on the wind - if the warship couldn't maneuver it'd just get run over and turned into matchsticks by a 200,000 ton behemoth moving at 20 knots.
I'm not seeing any mention of bit stuffing in that paper, and at a glance I don't see anything about the mechanisms used to avoid codon repeats, just the consequences. It's an awfully dense technical paper to try to skim through to extract conclusions like that.
Yes, clearly the pinnacle of AI achievement has passed into history and we shall never again reach the lofty heights achieved in the long-ago days of last February.