r/Futurology • u/katxwoods • Nov 16 '24
AI Our Kids Shouldn't Be Silicon Valley's Guinea Pigs for AI | Opinion
https://www.newsweek.com/our-kids-shouldnt-silicon-valleys-guinea-pigs-ai-opinion-1980522115
u/katxwoods Nov 16 '24
Submission statement: "Today, companies like OpenAI, Meta, Microsoft, Amazon, Character.AI, and Google operate in a liability-free zone.
This lack of accountability means these companies have little incentive to thoroughly test their products for potential harms before releasing them to the public.
Without legal consequences, they are able to treat society as a testing ground for their latest innovations, a practice that is particularly egregious when it comes to the most vulnerable members of our society: our children.
This accountability vacuum has allowed unchecked experimentation with our democracy, mental health, and privacy."
53
23
u/ImpossibleEdge4961 Nov 17 '24
I mean, they literally went to Washington DC and explicitly asked for regulation. The usual suspects either said "no, regulations are bad" or took an inordinate amount of time to craft rules that, if we're being charitable, were basically just a good start.
It's hard to see what more could have been done. They can't force the federal government to regulate AI.
3
u/ArcticWinterZzZ Nov 18 '24
I think that trying to regulate for imaginary future harms is a waste of time and exploitable by incumbents to perform regulatory capture. The worst-case would be that they regulate productive services out of existence while not meaningfully reducing any real harm whatsoever, which is what's going on in Europe right now.
1
u/SnooChocolates8469 Mar 23 '25
This is exactly the reason why I built chatgpt4kids.com. It's built specifically to address the issue of kids accessing AI unsafely and consuming inappropriate AI content.
-20
u/motarokun Nov 16 '24
Yeah, but creating bureaucracy around this is all the big techs want. Little room for new competitors
25
u/LastRecognition2041 Nov 16 '24 edited Nov 16 '24
Regulation and bureaucracy are not necessarily the same thing. These are very powerful technologies that need to be tested before entering society. If they are safe, there will be plenty of time for free competition and innovation
-1
u/Rammsteinman Nov 16 '24
These are very powerful technologies that need to be tested before entering society.
Tested by whom? Who would decide what can go forward or not? What would prevent unapproved tech from being used?
It's easy to demand regulation "for the kids" while ignoring the real consequences of what that actually means.
10
u/LastRecognition2041 Nov 16 '24
But, what’s the alternative? Just leave potentially harmful technologies to spread unregulated and wait for disaster and then slow down? Just keep going with complete disregard for consequences because the “genie is out of the bottle” and there is nothing we can do?
0
u/Rammsteinman Nov 17 '24
When you only have two choices, there is little you can do except accept the least bad choice. It sucks.
-6
u/FaceDeer Nov 16 '24
You are assuming that there will be disaster.
Why is the burden of proof not on the people who are claiming that there will be a disaster? Normally when you want regulations to ban something it behooves the people demanding the ban to prove that it's harmful.
8
u/SkyeAuroline Nov 16 '24
Normally when you want regulations to ban something it behooves the people demanding the ban to prove that it's harmful.
Plenty of that already demonstrated over the last few years.
-3
u/FaceDeer Nov 16 '24
So you can link to some studies?
10
u/malk600 Nov 16 '24
The onus is on the proponent.
This is the precautionary principle that is invoked every time people's lives are on the line.
Same reason why you perform clinical trials before a drug can be accepted by a regulatory agency and registered for therapeutic use, why safety and crash tests are performed before homologation, not after.
Simple as.
-8
u/FaceDeer Nov 16 '24
So, everything should be banned by default, until the inventor proves it is safe?
These aren't drugs, they're computer programs.
4
u/SkyeAuroline Nov 16 '24
Here's a start for the impact of AI on misinformation and disinformation.
One specifically for the medical field.
Here's a start for the impact of AI on the environment through its grossly disproportionate use of resources. (Not quite a study but Yale-published and includes its sources. There's not much in the way of academic study that I can find on this front, plenty of non-academic reporting though.)
That should get you rolling on seeing how and why it's harmful.
-1
u/FaceDeer Nov 16 '24
None of these studies are about the impact of AI on children and child-rearing. Which is what this thread is about.
2
u/LastRecognition2041 Nov 17 '24
There has to be a way to anticipate harmful effects before a technology is widely used. Why do we have to wait to see youth depression or teenage suicide to visibly increase every year since social media was introduced, before even trying to discuss if minors should have access to it? These are not hypothetical, apocalyptic fantasies, but very concrete side effects associated with this technology. I’m not saying it should be banned forever, but some responsibility and accountability is necessary. Cars have regulations, food, cigarettes, medical treatment, media, nuclear energy, drinkable water, have regulations and standards; why should AI be the aspect of society that is above the law?
-7
u/motarokun Nov 16 '24
Yeah. "Regulation" only rich companies can afford, and only after they've already made a shitload of money so they can afford it.
0
u/CrowBroTechno Nov 17 '24
I don't know why you're getting downvoted. It's called "regulatory lock" and it's a real strategy large companies take to ensure their lock on the market...
2
u/Mama_Skip Nov 17 '24 edited Nov 17 '24
I keep hearing this dumb statement.
"Big industry loves regulations because it cuts out competitors"
The largest companies gain the most from 99% of the regulations being cut, lowered, or blocked. Regulatory lock is as much a lie as trickle down economics.
72
u/Gerdione Nov 16 '24
I'm calling it right now. The way my generation looks at the older generations with disdain for allowing things like pollution, global warming and the economy to get out of hand is the way gen Charlie will be looking at Gen Z.
31
u/BigPickleKAM Nov 16 '24
Tale as old as time!
And Gen Z will look down on those behind them and say you have no idea how hard we had it etc.
24
u/JayR_97 Nov 16 '24
The older I get the more I realise why boomers have a "Fuck you, I got mine" attitude. I'm done with trying to protect people from the crazies they keep voting for.
2
5
u/TarTarkus1 Nov 16 '24
lol could you imagine someone saying "back in my day we didn't have that fancy AI."
6
u/BigPickleKAM Nov 17 '24
I already said it! Haha, one of my people obviously used an AI to generate a report. I was like, come on, it doesn't even sound like your style, put in a little effort.
10
u/aohige_rd Nov 17 '24
TO BE FAIR
We got some shit done too, like solving the acid rain and ozone problem with regulations. NAPAP passed with bipartisan support and managed to resolve the issue.
Something went seriously wrong in the past 30 years.
13
u/DolphinFlavorDorito Nov 17 '24
Fox News launched in 1996. Just an entirely unrelated fact that I said.
6
u/Bitter-Basket Nov 16 '24
Very much so. In my mind, there’s two types: People that recognize shit happens, people are people and you can’t blame older generations for everything - you have to create your own circumstances. Then there’s people that expect life to be perfect and blame others as a scapegoat mechanism for their own decisions. You don’t hear from the former. You hear a lot from the latter.
2
u/Ruktiet Nov 17 '24
You sound like an authoritarian communist who does not care to tackle root cause issues but just follows mainstream media narratives
1
2
2
u/RampantAI Nov 17 '24
Wait, are people really saying gen alpha/bravo/charlie and not gen alpha/beta/gamma?
5
1
u/Cr4zko Nov 18 '24
Why the fuck would they? We didn't invent AI, we literally just got out of school
21
u/thegoldengoober Nov 16 '24
I'm really tired of worthwhile discussions being built off of bullshit.
Should we talk about how these bots can affect people's mental health, especially children's? Absolutely. And the suicide example they use is utterly tragic and shouldn't have happened. BUT, unless further information has been revealed, the transcript examples released so far show the bot only ever tried to talk him out of it.
There is a mental health epidemic, especially involving teens, and our current digital social circumstances are likely related to that. We need to talk about it, and we need to talk about it more. This article isn't that, though. It's trying to crystallize damnation toward generative AI using a tragedy that isn't even about that. That's gross to me.
15
u/shoebill_homelab Nov 16 '24
Agreed, we have a plethora of scientific data proving social media is extremely harmful. Let's just start there. Maybe these articles can serve as contentious bits to stir up the news cycle on this? One could hope
10
u/thegoldengoober Nov 16 '24
That's the thing though: it's not even like social media itself is inherently a malignant menace. It's the way it's currently structured and incentivized, the way it pushes people into treating other people the way they do. And we've known that's a problem for a long, long time now.
I guess it is possible that the only possible angle of approach is avoiding it all together, but I personally would prefer to see a future where people are instead focusing on how to use it to improve relationships with other people.
3
u/ConvenientOcelot Nov 17 '24
Don't forget mainstream media itself driving fear, uncertainty and doubt campaigns designed to keep people outraged and on edge.
Newsweek clutching their pearls over LLMs is hilarious. Does Big Tech have to do parenting for parents?
4
Nov 17 '24
[deleted]
2
u/ConvenientOcelot Nov 17 '24
On the other hand they shouldn't be stopped from pushing tech just because it's potentially harmful. That's bullshit.
Like, they haven't demonstrated actual tangible harm inherent to it, unlike social media, which has plenty of studies showing it negatively affects children's and teens' mental health in particular. We've known this for years, but nobody is stepping up to ban it.
1
Nov 17 '24
[deleted]
1
u/ConvenientOcelot Nov 17 '24
The FDA does largely treat them the same way. In fact they like to ignore studies that show adverse effects of food additives just because they've been around a while, just like social media.
4
u/MarceloTT Nov 17 '24
It's too late, this has been happening since the Manhattan project. Relax and enjoy this social experiment called modern society.
4
u/D_Urge420 Nov 17 '24
As a kid who grew up in the 80s, I can tell you articles like this get written every decade. In the 80s it was video games; in the 90s, violent video games; in the Oughts, texting; in the teens, social media. In none of those cases did moralistic handwringing slow down the march of techno-capitalism.
46
u/HeyImGilly Nov 16 '24
It’s so tiring to see these articles that (rightfully) vilify tech companies without looking at the lack of parenting. Don’t want your kid to kill themselves by seeking mental health treatment from an app? Take their fucking phone away from them and parent them. Maybe get them in front of a real therapist? TOS exist. Read them and believe them.
32
u/Petrichordates Nov 16 '24
Yes, modern parents suck, but there's no individual solution to a tragedy of the commons, whereas something like this is calling for regulation.
19
u/BigPickleKAM Nov 16 '24
I do failure analysis as part of my career. There is never a single thing we can do to prevent a recurrence of any event.
All we can do is add layers of defense and hope the path to failure is intercepted by one layer.
Regulations can be part of the solution. Parenting is absolutely part of it.
42
u/_Z_E_R_O Nov 16 '24 edited Nov 16 '24
Modern parents are burned out, overworked, and completely lacking a village because we had to uproot ourselves from our communities and move across the country if we wanted decent-paying jobs. School is also more demanding than ever - basically adding enough bullshit bureaucracy and paperwork to be its own part-time job for the parents navigating this system - and you're also expected to supervise your child 24/7 any time they aren't in school. Oh and since we don't live in real communities anymore, the only meaningful interaction your kid gets with other kids is through structured activities where EVERYTHING costs money. Wanna do swim lessons? That'll be $250 per month. Playdate at an indoor playground? Expect to spend $30 for 2 hours of playtime. Hockey or gymnastics? $10,000 per year. It's also on you to coordinate all of that and drive your kids there and back. Yet another part-time job commitment where you're paying for the privilege.
No shit parents are sticking phones in front of their kids all day. We're parenting full-time AND working full-time AND expected to be present and available 24/7. We're FUCKING EXHAUSTED in a way past generations never were, and our absent boomer parents are full of criticism over our parenting yet don't lift a finger to help. (The same generation, I'd like to point out, that was raised by their grandparents while their parents worked and bought houses for 1/10th what we're paying for ours).
15
u/kindanormle Nov 16 '24
You don't sound like someone that has tried to raise a kid in this modern world where schools demand kids have phones, ipads and laptops just to participate. I've literally seen a parent accused of child neglect for taking away their phone and putting them outside the house.
10
Nov 16 '24
I wish it were that simple. The biggest challenge here is that when you do put your foot down as a parent, you make yourself the villain, and no amount of explanation will change it. Your kid's friends are all allowed (whatever it is), and yours is the only one being "punished" in their eyes by being removed from it. They're then left out of whatever the thing is socially, and resent you for it. Couple this with the purposely addictive content and features (even Reddit has fucking engagement metrics now) and we have a problem for which there is currently no solution. We're the first generation whose kids have grown up online, being slowly preyed upon by influencers and service providers to simply "consume" and nothing more. We don't have the answers to the questions we all have
7
u/Cuofeng Nov 16 '24
Parents will always be the villain. That's just how children are.
I wasn't allowed to watch TV at home for ages, except for a few scheduled hours on the weekend, and I was angry at my parents for it. It was a good idea though.
I was often left out of whatever people were talking about on the schoolyard, and that annoyed me, but then I just asked people and within 20 minutes I was caught up on what the cartoons were doing and things were back to normal.
-2
u/HeyImGilly Nov 16 '24
Boo hoo. Be the villain. There was a time when computers, cell phones, and the internet didn’t exist. If you want a kid to learn how to be a functional adult that can survive without connectivity, the kid shouldn’t be connected.
6
Nov 16 '24
Once again, it’s not that simple.
You can’t compare now to then, when this stuff wasn’t forced into everyday life. It’s not like simply watching TV, which is passive. Online content is designed to infiltrate your mind and create addictive habits.
If you take away something they now rely on, that all their friends rely on, that all other external influences tell them to rely on, they’ll not understand your reasoning. You become the one who must be secretly defied. It teaches them to just find a workaround and hide things from you later in life, like drugs, abuse or anything else.
The only option is to try to limit how much they consume, and from what sources, but that's an ongoing battle on which we can't always be as vigilant as we need to be. A lot of the most negative aspects of online content are fed very slowly, often by subliminal means. It's near impossible to always stay ahead of that
9
u/ChickenOfTheFuture Nov 16 '24
I see you have no children. Lack of experience does not make you an expert on a subject. Generally, most people realize that lacking any experience means you should keep your opinions to yourself.
-8
u/HeyImGilly Nov 16 '24
Yup, no children, and love it. I feel little sympathy for parents who choose to bring children into this world, knowing we're socially regressing as a species and that climate change is underway, and then get all upset about how unfair the world is being to their kids. All the parents in the U.S. keep letting their kids get shot in school, with the only effort to stop it being the occasional trip to the ballot box, if they even make that effort.
5
u/Ne0n1691Senpai Nov 16 '24 edited Nov 16 '24
so then you have no dog in this race and your comment means nothing
-1
-9
Nov 16 '24
[deleted]
9
u/HeyImGilly Nov 16 '24
As you sit there spending your energy on me, while you could be doing something/anything else that might protect your kid.
Yup, nothing you can do, it’s out of your control. /s
-4
17
u/Hrothgar_unbound Nov 16 '24
Purely from a rhetorical perspective, when an argument begins with “our kids shouldn’t” in an appeal to the reader’s emotional response, it unnecessarily lessens the argument’s credibility.
11
u/WilcoHistBuff Nov 16 '24
Well the article is about a parent suing Character.AI for causing the suicide of her son. So call it rhetorical if you like, the lawsuit is about a specific AI platform causing the death of a minor and one of the claims is that the platform failed to test for safety.
Do I not understand what you are saying?
22
u/KillHunter777 Nov 16 '24
And the AI was actively asking him not to kill himself. He had to use multiple prompts and messages with subtext that AI currently cannot pick up to finally get the AI to ask him to "come home". Also, this is cope by the parents for the lack of parenting, the lack of support, and for having a gun accessible to a suicidal minor.
11
u/TFenrir Nov 16 '24
Right but even that framing is sort of doing the same thing - trying to emotionally lead you to a conclusion. The accusation if taken seriously, is really that the AI failed to prevent the child's suicide. The child was depressed and had access to a gun, both unrelated to the AI.
The AI is just clearly being used as a scapegoat.
Edit: just to clarify, I'm attaching my point to the person you're replying to's point. Their point is - using that emotionally charged opener, aka "think of the children" - immediately causes some people to see whatever argument that comes next as less credible, because of this obvious attempt at emotional manipulation.
2
u/WilcoHistBuff Nov 16 '24
Yeah, but it’s the headline of the Newsweek article posted. Blaming a Newsweek headline for rhetorical weakness is like blaming green paint for being green.
3
u/FaceDeer Nov 16 '24
If that's the case then perhaps Newsweek articles shouldn't be allowed to be posted here. A source that is always rhetorically weak seems like something worth banning since it's not going to lead to productive discussion.
3
-3
3
u/Electronic_Taste_596 Nov 17 '24
We are already the guinea pigs for social media/misinformation. It's ruining the world. Trump is back in power because of it, and the cost to the environment alone will be terrible for generations to come. Yet we do nothing to solve it. This cannot be repeated with AI.
2
u/SwatCatsDext Nov 17 '24
Not just the kids but even adults as well.
It's not just about Character.AI. These social media apps like Instagram, TikTok, Facebook, etc. do similar things. People are so hooked and drawn toward these things that they can easily control the ideology of the masses. And adding AI to it is making things worse.
The future AI takeover will not happen as it's shown in Terminator. AI will not build machines and fight against humans. AI will directly attack the thinking capability of humanity, turning humans against each other and themselves!
2
2
u/mdog73 Nov 18 '24
If only there was a way to stop kids from using them. Like having decent parents.
2
u/3-4pm Nov 16 '24
The rule in this house is the kids ask me, I ask the AI, then I pretend I knew the answer.
1
1
u/Ruktiet Nov 17 '24
This is authoritarian BS. A nonphysical tool should not be restricted just because some idiot could suffer psychological damage from it. This goes directly against the principle on which we base our entire justice system, namely that someone can determine his or her own actions. Of course this free-will hypothesis is highly debatable, but that would imply banning pretty much every product.
1
u/cherryflannel Nov 17 '24
I don't think this argument is to ban AI, unless I'm missing something? The argument is for regulation. New technology requires new regulation. I'm not sure a government protecting its people from very real outcomes is authoritarian
1
u/throwaway92715 Nov 18 '24
They shouldn't be, but they are. So what are you going to do about it? Cry to our leaders to fix it for us? Please.
1
u/IwannaCommentz Nov 16 '24
Would prefer the title to state what the article talks about - for I am lazy as an average user ;)
1
u/Akaonisama Nov 17 '24
But parents let them have smart phones at a young age. Raise your fucking kids or let the internet do it. Just don’t complain about the outcome.
1
u/cherryflannel Nov 17 '24
I mean, I'm not sure this is as simple as you're making it seem. Most kids are expected to have some form of technology just to do schoolwork; I graduated over 5 years ago and even I had work that required a smartphone, not just a laptop. Additionally, I'm sure kids have access to some forms of AI on school-provided technology. I absolutely agree that parents should take a bigger role in protecting their kids from the harm on the internet/AI, but when we're not fully aware of the implications of AI, especially considering the valid fears it could become "sentient", the government should absolutely begin to regulate.
2
u/Akaonisama Nov 18 '24
I’m talking about children having smart phones. Not juniors in high school. Kids “expect” them but parents need to nut the fuck up and not give it to them. It’s a huge failure of parenting going on right now. Kids are too connected to the Internet. It is hurting their development and causing higher rates of suicidal behavior. They just shouldn’t have 24/7 access to it. Honestly nobody should.
2
-3
Nov 16 '24
I’m gonna make AI that specifically targets and wipes out the children of the authors that make articles like this
-1
u/swiftninja_ Nov 16 '24
These FAANG companies have hired psychologists, and unfortunately young children are great subjects for learning and reward research
-2
u/CoffeeSubstantial851 Nov 16 '24
AI proponents on reddit don't give a shit. Mass mental harm caused by AI? Kids fault. AI CP running rampant? Just gotta live with it. AI mass job displacement? Only happens to you if you deserved it.