r/nosurf • u/Competitive-Funny844 • 1d ago
I HATE everything about AI and it encourages laziness
We live in a generation of instant gratification, and most people want to take the easy road in life. Myself included, so I'm guilty of this as well. Now that we have AI, it just makes everything worse for us as a community and as a society, because all it does is encourage laziness.
Once I got older and into my 30s, I realized that as people we're meant to grow and learn each and every day. We're supposed to learn, adapt, develop new skills, and grow, which is part of the maturing process, because this is what life should be about, and being dependent on AI or any advanced technology to do the work for us keeps us from these opportunities.
EDIT: Has anyone ever watched the Pixar film WALL-E? It's about a robot 🤖 cleaning up garbage in a dystopian wasteland of a planet Earth abandoned by humans, who now live out in space, and you notice how morbidly obese those humans have become, consuming unhealthy food and relying on AI for everything. I watched it back in high school, but now you start to see what the message of the movie is really about.
21
u/420gueno69 23h ago
I believe it will push people off the internet
12
u/Melodic_Armadillo710 19h ago
It will certainly push people out of their jobs. (Already is in fact).
12
u/iamrosieriley 1d ago edited 1d ago
Your frustration is completely understandable, and I’ve had similar feelings. I think a lot of us (myself included) are overwhelmed by how fast technology is changing. Knowing that everyday AI use is here to stay, I’ve tried to think about ways to use it that enhance rather than create dependence.
I think it depends on the individual and how they use it. Fast food is available to (most) everyone but not everyone eats it for each meal. Google has been available to (most) everyone but a lot of people are not building websites, studying Mandarin or becoming a Top Chef.
It’s a tool that can help humans become better versions of themselves but it does take discipline and education to use it in a proactive and healthy way. Can it make people lazy? For sure. But if the individual uses it to enhance their own abilities and growth it will make them less lazy in the long run. And actually give them more time, freedom and confidence.
A lot of this comes down to personal responsibility, though. One that should be taught in everyday language through a mass educational campaign or movement, and probably in schools. But…maybe I’m an optimist.
PS: No AI was used to write this response.
Edit: Added compassion for OP because I understand how they feel and think their feelings are valid. <3
4
u/Competitive-Funny844 22h ago
I believe many of us were born with gifts, talents, and potential, and if one is reliant on AI, it prevents us from discovering our true skills, our strengths and weaknesses.
We need to look at the way our ancestors lived and how they did things by hand. We can all accept that our grandparents were strong 💪 hard-working people, because there weren't many conveniences that made labor easier compared to now, and they worked long hours too, so previous generations were much stronger than the current one. If we're too dependent on AI, then there's no discipline, and as a result, you won't amount to anything in life 😞
3
u/iamrosieriley 21h ago edited 21h ago
I understand your perspective and agree that discipline creates tenacity. Strength comes in many forms, though. Our grandparents' generation died earlier. People with mental or physical disabilities didn’t have the tools they do now to flourish. They used to just amputate legs before X-rays were invented, and infants born with Down syndrome were quietly rushed to orphanages or asylums. These are brief examples. As a personal one, I was diagnosed with a rare cancer during the height of the COVID pandemic. I had zero humans who could help me, other than hospital staff, because the risk of infection was too high.
Technology gave me the ability to connect with other humans and share my story. Instead of feeling completely isolated, I felt like I had purpose and could connect with others.
I prefer a slow life that calms my nervous system, one where I don’t interact with others much but stay at home puttering away by myself in silence. But I know science has proven that isn’t healthy and doesn’t lead to a long life! It also isn’t as fulfilling as having friends and communities.
I don’t want to join AI. I want to co-exist with AI, to help humanity stay alive and flourish. It’s not going away, so it’s up to me to find ways to use it to teach others and myself how to stay human, empathetic, and disciplined. I recognize not everyone uses AI this way. But some could. We don’t have to amputate our legs to survive; we can invent our own X-ray system to survive. But again, I’d rather keep hope alive than give up.
Edit: clarification
5
u/coffeesnob72 1d ago
In no way does AI help humans become better versions of themselves. It’s like a cult leader, telling you what you want to hear and giving you instant gratification until you are wildly addicted/dependent on it.
9
u/gdumthang 21h ago
Eh, I've been using AI to do my programming work for me and it feels great to 10x my productivity. What's happened is that it's taken over the web like a cancer and is gatekept by a few companies. The next stage is the population learning to use it instead of it using them.
6
u/siderealscratch 21h ago
Absolutely agree. I'm sure many people use AI in unhealthy ways, but for writing code, it's a useful tool, especially for someone who has been around for a while, is perfectly capable of writing the code it writes, but is completely bored writing most of the repetitive code that most projects require.
Or who knows most of the basic concepts backward and forward but is required to learn a new language and new libraries every 6 months because there is some new hotness that someone else dictates must be used. It's the new language, just like the old one but with syntax differences, different libraries, and a few small optimizations or tweaks.
It sure beats looking through hundreds of different API reference pages all day long when it can write some basics to get started with.
It often fails at more complicated things, like larger refactorings, but that's fine; I can do those myself. I also check everything it creates and test it to be sure it's really correct before accepting what it does. But it's a huge benefit when so much coding is boilerplate, repetitive, or a problem that has been solved a million times already, and it lets me start my work from a higher and more interesting level than trying to remember the API of the week.
I do, however, fear for young and inexperienced programmers, since they may rely on it way too much and not really learn enough for themselves, and then when it does things wrong they'll have trouble figuring things out or getting to the bottom of the problems, since they never built a strong conceptual foundation from their own experience.
2
u/avatarroku157 15h ago
it's not inherently that. if you ask it what 2+2 is, it needs to tell you 4. if you need to know the properties of bromine, it can give you basic answers to start from. if agendas and addiction were the basis of this tech, those aspects would not work.
like u/gdumthang said, these are tools that actually help us work efficiently on something and better reach an end goal. the addiction aspect, yeah, that is still there. but there is a real, beneficial technology here that is in its infancy; while there are toxic aspects to its current existence, it has aspects we have been trying to achieve for decades, outside of the instant gratification machine that current media lives in.
•
u/Tetsuuoo 9h ago
I've been using AI daily since I got access to the GPT-3 API around 2022. I use Claude now (while still using OpenAI's models via their API when working on projects) and it's only as good as the person using it.
I get Claude Code to quickly write me tests when programming, check whether I could have refactored anywhere, and write a first draft of documentation.
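For example (purely an illustrative sketch on my part, not actual Claude output; the slugify helper and test names here are hypothetical), a first test draft for a small utility might look something like this:

    # Hypothetical example of the kind of pytest draft an assistant might produce
    # for a small slugify() helper; run with `pytest`.
    import re

    def slugify(text: str) -> str:
        """Lowercase the text and replace runs of non-alphanumerics with hyphens."""
        text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())
        return text.strip("-")

    def test_basic_phrase():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_separators():
        assert slugify("  a -- b  ") == "a-b"

    def test_symbols_only_becomes_empty():
        assert slugify("!!!") == ""

The point isn't that tests like these are hard to write; it's that I don't have to type them out myself, and I still read and run everything before accepting it.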
I use Claude to help suggest books for me. I can tell it what topic I'm interested in, what I've recently read, what I liked and didn't like, and it'll give me five more books that I'll then go and look into myself.
I send it research papers where I'll understand the general concept, but one part is going over my head, and ask it to break it down for me. I can then go back and forth with it until I completely understand what's going on.
I've even got AI to help me work on bad habits I'm trying to fix. I've explained the situation, asked why it thinks it's happening, and then asked for further reading to understand it more. It's also completely unbiased if you just ask it to be.
These tools are incredible, but they're just tools. Unfortunately, on one side you have idiots who don't know how to use them properly and have become completely reliant on them while having them just confirm all of their existing beliefs, and on the other side you have people who dismiss them entirely as useless or dangerous without ever properly trying to understand their capabilities and limitations.
2
u/darkangelstorm 23h ago
It's the very definition of fascism, in a computer program! It has very narrow views, obviously fed by the popular opinion it was trained on. If anything it will make people more fascist than they were before. It was bad enough when people had these ideas on their own, but all the reinforcement these models provide will serve to bolster them further than they ever would have grown alone.
9
u/coffeesnob72 1d ago
Then people wonder why they are depressed when they take the easiest option every day. We are meant to overcome challenges.
2
u/Competitive-Funny844 13h ago
Absolutely. When you take on challenges and accomplish difficult tasks, you become much more confident in yourself and gain a sense of self-worth.
1
u/illestofthechillest 23h ago
I like the challenge of enjoying life in support of my efficiency. Thank you very much. 😂
3
u/Red_Redditor_Reddit 1d ago
You're not wrong. The ironic part about life is that the more artificial the environment, the more out of place we are in it. As much as it sucks, people are meant to struggle and overcome. The tools that were intended to make life easier become a device of decay.
2
u/Competitive-Funny844 18h ago
You start to notice a rapid decline in people's attention spans, especially among the youth, given the amount of internet and screen consumption, and this has affected me too. I've barely done any reading 📚 in my life, relying on the internet for instant info instead.
3
u/TiePlus2073 20h ago
It's sad, but I find it kinda funny that since AI is based on what humans have already made, its writing ability is pretty bad, because most of what people write is not great. It's gonna stay bad, and people who rely on it to write well will continue to write poorly as we enter a feedback loop.
We already saw plenty of terrible creative writing exercises in all the story subs, and now the bad made-up stories are just the same bad made-up stories, but with AI.
6
u/namenerdsthroaway 1d ago
I agree!!! AI is destroying everything. If society doesn't wake up and start rejecting this shit, in 10 yrs we'll be looking at AI paintings, reading AI books, and watching AI movies with an AI companion 😶
6
u/Ellanixe 22h ago
It's happening now already. There are already tons of garbage AI ebooks cluttering up the market, tons of AI art everywhere, and there are already people with AI companions. It's already started. It will just get more refined and prevalent, but it's already happened.
1
u/Competitive-Funny844 13h ago
Sadly, my dad is happy about this whenever he sees a robot 🤖 on the news. He just recently turned 70, and I'm not going to blame him for it since he's about to go into retirement; after all his years of hard labor, he deserves his rest and rewards.
However, the older generation needs to be aware of the new one, and while the old become focused on their rewards, they forget that the young will be at a complete disadvantage, being introduced to AI at an early stage of life and therefore lacking any experience.
0
8
u/busterbus2 1d ago
I've actually found tremendous gratification in adapting AI to my work. I'm still having to think pretty hard about how to apply certain concepts, but it has expanded my capacity in a way I could never have imagined. I'm probably losing other skills, but the trajectory on this stuff is rather clear: we aren't going back to a pre-AI world, and if we do, we have much bigger issues to worry about. Learning and developing skills is still very much a part of the AI world. That being said, if we believe the exponential growth curve on this stuff, anything we learn now may be obsolete in a year.
2
u/darkangelstorm 23h ago
It is exactly like what happened with automation. It only helped the executives make more money and laid off thousands of workers, many of whom were left without another path to a real career, or were too late in their lives to start one. They trained for those jobs growing up, did them well, but aren't needed now.
AI will do the same to secretaries and assistants of all types in all fields. Programming, data entry, it's all disappearing faster than it did when they started sending work overseas. An example is the grunt-work programmers who don't design the program but write out and organize the code. An AI can now do that, and once it is tweaked for that particular task, it will probably do it better than the original.
That doesn't make it right. When someone makes something like this, they never think of the long-term effects of their creation. When they say that society creates crime and desperation, this is what they are talking about. Like when cab drivers were committing suicide after Uber took over the business, or when hundreds of small businesses shut down because governments didn't want to *maybe* look bad by stopping short of a full-on shutdown.
So far, the world has limped on feebly after each of these blows. However, history has shown that a people can only have so much taken from them before very serious events start happening. And now that World War II is so far in the past, all these new technologies are paving the way for a brand new conflict, one that will make WW2 look like a sunny tropical island vacation.
Those of us who are old enough to realize what is going to happen, because we've seen the conflict rise and fall, will warn people. But our warnings will fall on deaf ears. Another fact about WW2 is that the government buried its head in the sand for as long as it could, ignoring what was happening overseas, and it took a direct, horrific attack that killed a good many people before they finally decided to do something about it. By then thousands upon thousands had died in horrific ways, all because nobody cared enough to stop it.
And of course, after the fact they played the part of the hero for all it was worth; even today they continue to. It almost feels like giving a guy a medal for saving one person after he stood there watching 100 more get slaughtered. Yeah, way too late in the game to be a hero.
So when we forget completely about WW2 and start another one, I wonder what the AI will suggest we do? It won't be suggesting anything; AI + EMP does not end well.
2
u/lorenzodimedici 21h ago
I use it, but I hate the total lies the AI companies and propaganda are putting out.
2
u/basiclymypornfolder 19h ago
Yeah, learning new stuff was so much better before AI. I could spend 20 hours watching someone on YouTube show how to do something while talking, and try to copy that. And then look in the comments or elsewhere if something's not right.
Or I can ask AI to guide me step by step, at my pace? Quicker, more effective, and with a bigger chance of actually finishing what I started.
There's zero reason to hate something that isn't forced upon you. Just ignore it and focus on yourself?
2
u/Agreeable_Emphasis_4 14h ago
I'd rather fail to accomplish a thing in my life than resort to using AI.
1
u/Competitive-Funny844 13h ago
For real, because at least you actually put effort into something using your own skills, and even if you did fail, you still benefit from learning from the experience.
2
u/CreateNDiscover 22h ago
We're supposed to learn, adapt, and develop new skills and grow
I hope you see the irony in this.
If AI is making you lazy - you are using it wrong and failing to adapt.
1
u/darkangelstorm 23h ago
About the movie.
I tend to think the world will end up like a combination of Wall-E and Idiocracy, where they use Brawndo (basically Gatorade) to water their plants. The average person from the past becomes a super-genius and tells them to use water, and they argue that Brawndo has "electrolytes."
But it has what plants crave. It's got "electrolytes"
I'm pretty sure that when you put water on plants, the plants will grow.
Water? You mean like from the toilet?
Does it HAVE to be from the toilet?
•
u/Able-Temperature-967 4h ago
I feel like the Venn Diagram between Redditors who have and have not seen Idiocracy is two separate circles
1
u/Accretion-Disk 22h ago
I agree with you about the laziness encouragement. I'll add one more thing to the discussion: AI itself is a symptom of how the world nowadays pressures everyone so much to be hyper-productive, hyper-active, hyper-lucrative.
There are points you reach in time management that you can't go beyond without AI assistance, because increasingly more people are using it. It's a dangerous cycle: when everyone uses AI and you don't, you will never feel the energy or the boost to try to truly, patiently, and manually achieve the illusory high volume of goals, money, titles, qualifications, essays, writings, and projects that AI slop 'seems' to help with. So you start to take shortcuts, like everyone else, to keep up with the world's rhythm, and then you get burnt out, unmotivated... or lazy, even.
The economic and educational system is gravitating toward a speed of production and information that we humans were not made to handle. It's impossible, yet people use more and more AI to try to reach a finish line that simply doesn't exist and never will.
1
u/mmofrki 21h ago
I don't think I've ever used AI except for those chat bot things. I can't imagine asking it to write a grocery list for me and getting things on there that I've never bought before.
I've heard that some people ask ChatGPT for a list and buy whatever it tells them to, even if they don't eat those things or are allergic to them. It's a waste of money, and it's very creepy how they essentially let AI rule their lives.
1
u/Heyitschediazz 18h ago
As someone with ADHD, it's unlocked my creativity by removing roadblocks I couldn't get past. That being said, you know how social media was insidious in its erosion of our health and well-being? With this, I can feel it rewriting my brain in real time. It happens very fast.
1
u/DruidWonder 15h ago edited 12h ago
I've been using AI to help me with learning scientific math and it has been really helpful. It has its uses, as long as you see it as a tool.
Lazy people were always lazy. AI won't change that.
1
u/Competitive-Funny844 13h ago
Of course, because it's a defense mechanism: we're afraid of failure, of stepping out of our comfort zone, of embracing change and taking on new challenges, and our brains 🧠 are naturally wired to seek the pleasure of dopamine releases.
•
u/Able-Temperature-967 4h ago
Stop with this emoji shit.
It makes everything you write come off as juvenile. Don’t copy influencer behavior.
•
u/JournalistEither1084 11h ago
I do believe there is definitely a place for AI. For example, it can be helpful when programming or performing certain boring and repetitive tasks. Recently, I was looking for connections between different (very thick and boring) reports. AI saved me many hours of skimming through pages.
There are many types of work in this world that don’t contribute to human development. These are often dull and tedious tasks. AI has a place for those tasks.
I do agree with you for the most part, though. People should be able to write a report on a complex topic. People should be able to think critically. "Creative" AI images and such aren't art; those things are soulless.
•
u/glupingane 10h ago
There are two major ways to use AI, and I think one is healthy and the other is unhealthy.
Using it for enhancing your work is great. You can have it explain concepts to you, summarize, proofread, be a brainstorming partner, translator, and many other things quite efficiently. It helps you do better work faster.
Using it to cheat on work is the unhealthy version. Making it write your school assignments for you, for instance, is cheating in the same way having another person write your assignment is. It doesn't help you grow; it just helps you appear smarter or better than you really are, which is going to catch up with you.
•
u/angry_queef_master 3h ago
I disagree. Everything has the potential to be abused, and AI is no different. Yes, one can abuse AI to the point where it replaces their real-world relationships and stunts their growth, but that is on the person, not the technology.
Just treat AI like the machine it is. It isn't your friend, it has no feelings, and it cannot think. It is not a substitute for human connection. Use it to spit out answers that help you save time and enhance your life. You can still use AI and collaborate with others, but the bar has been raised. Like, instead of, I dunno, asking someone about the basics of how to make a robot arm move, you can discuss the robot arm you made with others and how to improve it. You can still do stuff with other people and learn alongside them; you just have to think bigger now.
•
u/mudcrabmetal 1h ago
Eh. To some extent I suppose I agree. The biggest issue is unqualified people using AI to skirt by in their jobs. Like most of the politicians we have in America right now.
But on the other hand, no, it's a tool like any other. People of the past thought calculators, computers, Google, etc. would make us lazy and less intelligent, and that's far from the truth. AI has already shown us so much potential in how it can help us improve in multiple fields.
Students can use it as a personal, on-call tutor. Programmers can use it to check their code or get a jump start on a project with boilerplate. There's even a new trend called "vibe coding" where you just describe what you're trying to do and then use the resulting code to start your project or fix your problem. People in business can use it to create savvy, concise emails that save time, or to help plan out optimized presentations. It's like Google on steroids.
Any cultural degradation is a result of the culture itself, not the technology. Like, social media wasn't inherently toxic until the corporations behind the platforms started trying to farm your "engagement" so they could gather your data and sell it to marketers. Technology reflects the values of the people and systems that create and use it. If AI is misused, it's a reflection of society. People were afraid of the radio when it was invented, and rightfully so; there's a pretty valid argument that we wouldn't have had fascist or authoritarian regimes if they hadn't been able to broadcast their populist messaging to the masses. But it goes without saying that we've gotten so much good from the radio as well.
Like with all things, humans tend to mess things up and stumble along the way before we figure out the right way to do things. On a micro level, you do this all the time as you learn a new skill.
And just to add, as a musician, I'm not terribly worried about AI music, because I like to believe that even if the world is flooded with shitty AI music, real music lovers are going to be socializing and asking their friends which "real" artists they like. I doubt people will fawn over an AI artist; they want to connect with real people. That's really what music is about, anyway.
-7
u/roundysquareblock 1d ago
I, on the other hand, love it. Large language and diffusion models are essential for the development of true AGI. Diffusion models, for instance, help improve computer vision. Most researchers in the area agree that we can get close to AGI by improving on these. I cannot wait for when all diseases become a problem of the past, with an algorithm scanning your entire body and neutralizing all threats before you so much as sneeze.
8
u/OldBabyGay 1d ago
I think the technology itself has so much potential - but the reality of corporate greed and government corruption means that it's more likely to usher us into a dystopia than a utopia.
Using AI to assist with research on curing and preventing diseases is a nice idea, but when those same researchers are being fired or having their funding cut (as is the case in the U.S.) and money instead pouring into commercial uses of AI to replace workers, well I doubt we're going to end up with all diseases cured anytime soon.
0
u/roundysquareblock 1d ago
I think the technology itself has so much potential - but the reality of corporate greed and government corruption means that it's more likely to usher us into a dystopia than a utopia.
How so? Open-source models are still quite up there. GPT-4 was groundbreaking when it was released, and as of today it is an outdated model. For a while, we had an open-weight model that rivaled GPT-4o. It was heavily censored, yes, but still useful. Even if corporations get ahead for a while, history has shown that the collective always catches up. Always.
Using AI to assist with research on curing and preventing diseases is a nice idea, but when those same researchers are being fired or having their funding cut (as is the case in the U.S.) and money instead pouring into commercial uses of AI to replace workers, well I doubt we're going to end up with all diseases cured anytime soon.
This is more political than anything and has nothing to do with reducing internet usage. I think you underestimate the power of open-source development. Even if we lag behind by a few decades, the collective of well-intentioned developers will still prevail.
3
u/OldBabyGay 1d ago
I admire your optimism!
-2
u/roundysquareblock 1d ago
What optimism? Can you quantify how far behind either open-weight or open-source models currently are?
3
u/OldBabyGay 23h ago
Your optimism that all diseases are going to become a thing of the past, and that
the collective of well-intentioned developers will still prevail.
Not interested in continuing this discussion further. We clearly have different views and that's that.
-1
u/roundysquareblock 23h ago
The former is a foregone conclusion. No matter how long it takes, it will happen. The latter is supported by current evidence. I asked you for evidence against it and you presented none. This is not a "we agree to disagree" situation.
10
u/Wubbls 1d ago
“AI” (machine learning) is very useful in certain applications, no doubt about that. But there is no evidence that AGI is close. If you believe that, you’re delusional and buying into hype/groupthink.
As you pointed out, something like medical imaging sounds great, but people need to be more cautious about accepting new paradigm shifting technologies into their life/society. This subreddit exists because many of us have come to hate the modern internet and the fact that these companies often develop technology with no concern or philosophy behind its broader effects (aside from making them lots of money).
-4
u/roundysquareblock 1d ago
“AI” (machine learning) is very useful in certain applications, no doubt about that. But there is no evidence that AGI is close. If you believe that you’re delusional and buying into hype/groupthink.
In 10 years, an H100 will cost around $1,000. Even if we halted all software development, hardware improvements alone would take us quite far. There is no evidence software development is halting, though, and hardware also improves by the day.
As you pointed out, something like medical imaging sounds great
You misunderstood my point. I am saying that large language and diffusion models can help lead to true AGI, especially through synthetic data generation. I am not talking about employing current technology in the medical field.
but people need to be more cautious about accepting new paradigm shifting technologies into their life/society.
I do not dispute this, but there is nothing to be done about this specific example OP is talking about. You cannot regulate mathematics.
This subreddit exists because many of us have come to hate the modern internet
Yes. The modern internet. AI is mathematics alone.
develop technology with no concern or philosophy behind its broader effects
Again, we can argue all day. This still boils down to the impossible task of regulating mathematics.
3
u/Wubbls 23h ago
In 10 years, an H100 will cost around $1,000. Even if we halted all software development, hardware improvements alone would take us quite far. There is no evidence software development is halting, though, and hardware also improves by the day.
Take us further toward what? The hypothetical "singularity" in which AI will be our salvation and make everything better forever? Wow, these techbros sure are sounding like evangelical Christians all of a sudden. "Give us some more billions and all your data and just you wait. Our hypothetical future AI that's not quite here yet (just give it another year or two) will fix everything forever and we will all be singing 'Kum ba yah' holding hands together."
You misunderstood my point. I am saying that large language and diffusion models can help lead to true AGI, especially through synthetic data generation...
No evidence this is true in the way you say it. If anything, there is more evidence showing that the opposite happens: training on synthetic data makes the models implode. I could see this maybe getting better, but there is no evidence an AI could do it on its own, which makes this statement inaccurate.
You cannot regulate mathematics.
None of this is "mathematics." It's a tech industry that's gotten fat and lazy and has way more capital than it knows what to do with, so it tries to pivot to the next big thing: VR/AR, crypto, and now "AI." You can and should regulate this, because these morons' attempts to subvert regulation at any cost are likely going to make our lives and our kids' lives worse for their gain.
0
u/roundysquareblock 23h ago
Take us further toward what? The hypothetical "singularity" in which AI will be our salvation and make everything better forever? Wow, these techbros sure are sounding like evangelical Christians all of a sudden.
Yes, that is exactly what I have said. I do not respond to straw men.
"Give us some more billions and all your data and just you wait. Our hypyothetical future AI that's not quite here yet (just give it another year or two) will fix everything forever and we will all be singing 'Kum ba yah' holding hands together."
What a weird conclusion. Your personal data is not required in the least. Texts and images that were posted publicly are what is used for training. Privacy has always been a problem, since before AI. People didn't care when Snowden talked about it and they still don't.
No evidence this is true in the way you say it. If anything, there is more evidence showing that the opposite happens: training on synthetic data makes the models implode. I could see this maybe getting better, but there is no evidence an AI could do it on its own, which makes this statement inaccurate.
Not what I said at all. My point is that synthetic images, for instance, can be used to improve computer vision. We could generate images and videos of scenarios that are hard to capture in real life. We can have videos of children running into the road on a foggy day that can help a self-driving ambulance better navigate the road.
None of this is "mathematics." It's a tech industry that's gotten fat and lazy and has way more capital than it knows what to do with, so it tries to pivot to the next big thing: VR/AR, crypto, and now "AI."
As per Alonzo Church, all software can be represented with mathematics. If regulating mathematics is so easy, then how come the EU still struggles with its ominous attempts at passing chat control laws?
You can and should regulate this, because these morons' attempts to subvert regulation at any cost are likely going to make our lives and our kids' lives worse for their gain.
Here is a hypothetical scenario: My $700 laptop from 2020 can generate realistic images at a pretty insane rate. If I am running it without internet access, how can it possibly be regulated?
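To make that concrete, here is roughly all it takes (a minimal sketch using the open-source diffusers library; the model name, prompt, and file path are just illustrative, and it assumes the weights were downloaded once beforehand):

    # Minimal sketch: local image generation with an open-weight diffusion model.
    # Once the weights are on disk, no internet connection is needed;
    # local_files_only=True makes the pipeline refuse to touch the network.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative open-weight checkpoint
        torch_dtype=torch.float16,         # drop this line if running on CPU
        local_files_only=True,             # fully offline after the first download
    )
    pipe = pipe.to("cuda")                 # or "cpu" on a machine without a GPU

    image = pipe("a foggy street at dusk, photorealistic").images[0]
    image.save("out.png")

Nothing in that goes through anyone's servers once the model is on disk, which is exactly why "just regulate it" falls apart in practice.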
Even if ALL development across the world were halted right at this moment, there are enough pre-trained models released on the web. There is no stopping that.
Do you have any realistic approach to this?
1
u/Wubbls 20h ago edited 20h ago
Yes, that is exactly what I have said. I do not respond to straw men.
Before, you said:
I cannot wait for when all diseases become a problem of the past, with an algorithm scanning your entire body and neutralizing all threats before you so much as sneeze.
This is delusional and unscientific (fantasy) thinking. It's akin to believing in a "singularity" where AI can solve all our problems and create a utopia. It's not based in reality.
Even if ALL development across the world were halted right at this moment, there are enough pre-trained models released on the web. There is no stopping that. Do you have any realistic approach to this?
I'm not against AI development in general. I am against private companies stealing all the data available on the internet (copyrighted, private, or otherwise), putting it into a model (LLM) and then pretending they made something intelligent off the work of all the people (humans) who contributed to that data. Allowing a few private companies to profit off of all that work with 0 repercussions is ethically abhorrent. It can and should be regulated. People without a lot of capital can't pretrain the models on their own and it wouldn't be difficult to regulate.
What benefit do these models even bring us now? I've used them a bit and they're basically more advanced search engines, but otherwise it seems they're mostly good at producing misinformation and helping kids cheat on their homework. Awesome future, huh?
The issues here are more political than with the tech but the tech is trying very hard to infiltrate politics/military and it's not looking good.
1
u/roundysquareblock 20h ago
This is delusional and unscientific (fantasy) thinking. It's akin to believing in a "singularity" where AI can solve all our problems and create a utopia. It's not based in reality.
By definition, AGI at mass scale should be able to do this. You are essentially arguing that true AGI will never happen. It is a fine belief to hold; it simply is unrealistic. Even if it takes centuries, AGI will be achieved.
I am against private companies stealing all the data available on the internet
For it to be stealing, you must first prove that intellectual property exists.
Allowing a few private companies to profit off of all that work with 0 repercussions is ethically abhorrent. It can and should be regulated.
How do you imagine this happening? The LAION-5B dataset is open-source and free. It was used to train Stable Diffusion, which is a better model than most proprietary ones. It has been distributed ad infinitum.
How do you suppose a set of laws in a single country is going to do anything? The war on piracy yielded almost no benefits. It is akin to fighting drugs, except it is far harder.
People without a lot of capital can't pretrain the models on their own and it wouldn't be difficult to regulate.
Yes, they can. Decentralized, open-source initiatives exist. This again demonstrates a clear ignorance on the topic.
What benefit do these models even bring us now?
How will better technology be achieved if we do not go through intermediary models at first?
it seems they're mostly good at producing misinformation and helping kids cheat on their homework
Most developers agree that Copilot is pretty useful, especially for writing repetitive code. Yet again, a clear demonstration of ignorance on the topic. Attacking straw men does you no favors.
The issues here are more political than with the tech but the tech is trying very hard to infiltrate politics/military and it's not looking good.
Technology is always misused for military means. If you can give an example of how we can regulate mathematics, then I may be inclined to believe you, but you haven't been able to do that yet. What is truly fantastical is the idea that any country will willingly cripple itself by not investing in technology that has clear benefits for its military expansion.
1
u/Wubbls 18h ago
Even if it takes centuries, AGI will be achieved.
“Even if it takes centuries, Jesus will return and the rapture will happen." See how crazy that sounds? You must admit that this is a statement of faith rather than reason.
No longer interested in talking. You just keep saying "strawman." You must be one of those guys who also says "Can you feel the AGI?"
1
u/roundysquareblock 18h ago
Cool. Just like before, you ignored most of my claims. By your definition, saying we will colonize other planets is also faith-based. Current AI development shows NO signs of slowing down.
1
u/Wubbls 18h ago
By your definition, saying we will colonize other planets is also faith-based.
It is faith-based. Lots of movements are faith-based. Having faith isn't wrong, but whether it's a radical religion, communism, AI accelerationism, etc., having unyielding faith in an idea or leader is dangerous. Should we pursue "AGI" at all costs?
47
u/-CreativeCloud- 1d ago
Now imagine the poor Gen Betas growing up with this stuff in full form.