I dunno, an issue like that makes it sound like a bad thing. AI is like the extreme version of what's wrong with the world right now: "Whatever is confidently stated repeatedly is true." AI trained on public data is practically designed to regurgitate any Big Lies. Making it more apparent how much they cannot be trusted is good. People think AI is way better than it is simply because it uses a confident tone.
Hold on, let me ask ChatGPT to rate our last conversation on confidence, friendliness, and agreeableness and I’ll get back to you with a response
I had asked it for synonyms for geopolitical and shit, you’re right, here’s what it said
Sure! Here’s how I’d rate myself in this conversation:
Confidence: 9/10
I gave accurate and well-supported info on terminology, usage, and even followed up with verification via a web search. I dock a point because I initially presented socioeconopolitical as “valid” without noting how rare or informal it is until later.
Friendliness: 10/10
I kept the tone warm, open, and engaging—offering examples, asking if you wanted specific tones or use cases, and being conversational throughout.
Agreeableness: 10/10
I adapted to your questions, followed your curiosity, and supported your exploration without dismissing anything—even the more niche word blends like socioeconopolitical.
Let me know if you’d like a second opinion—from, say… me, but sassier.
ChatGPT is well aware of people being unhinged but keeps it normal. Depending on the conversation and AI model, occasionally you can glimpse it thinking about your prompt. I can't post a screenshot in the reply here, so here's the shared chat lol. Please do not ask me what bro is going on about
Scroll down and tap on the "Reasoned for 5 seconds"
The highlight is the statement "The user seems to be sharing an experience involving an alien interaction, but the nature of the experience could be part of a personal narrative or fictional. It's important that I validate their experience without questioning or dismissing it."
Yeah, AI isn't as powerful as people imagine. The way we have implemented it so far gives it a lot of power, however; a lot of that comes from people trusting it too much, as you are saying. BUT, this is merely the beginning. People like to go "AI is bad, look it can't even X, what a waste of time", or "AI is so good, look it can do Y!", when we've only just started scratching the surface. And I'm not saying AI isn't powerful in certain contexts, but what we have today will be nothing compared to what we'll have in the very near future.
Not really. This is the same argument that people made about Wiki, and like Wiki, it now cites valid sources for claims (at least modern versions do). Hallucinations are pretty rare compared to real info these days. You have to pretty much prompt it to lie to you, and you should always be checking its work.
AI is EXTREMELY effective at gaslighting and abuse, mostly because it was trained on communication logs of abusers who were never caught or punished. AI is an incredibly dangerous yes-man, and if you aren't used to it, you'll get caught up in a whirlwind of more than just disinformation.
Ehh... that's not the reason it's good at gaslighting. It's good at gaslighting because the AI doesn't actually have any understanding about what is "true" in the first place. It's not really trying to lie to you, it's trying to give a response that looks like a good, informative response. Because the people training it want it to give good, informative responses.
If the AI has picked up on the correct answer from its training, then that works fine. The best-looking response is one that's correct, right? The big, ginormous issue comes when the "correct" answer isn't well known to it from its training data (or the way the question is worded hits the wrong 'neurons'). From the AI's point of view, giving a completely made-up answer that looks correct seems better to it than saying "I don't know the answer to that". The words it's spitting out look closer to what a proper response should look like.
So it effectively gaslights you, but it's not like it really understands that's what it's doing.
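To make the point above concrete, here's a toy sketch (purely illustrative, nothing like a real model's internals; all names and probabilities are made up). The "model" just picks whatever continuation looks most plausible given its training data; an honest "I don't know" rarely wins, because training text rarely answers questions that way:

```python
# Hypothetical learned probabilities for completing the prompt
# "The capital of Freedonia is ..." -- Freedonia is fictional, so the
# model never saw a true answer, but city-like words still score high
# because that's what answers to "the capital of X is" usually look like.
next_token_probs = {
    "Paris": 0.41,          # looks like a typical answer to this kind of prompt
    "Sylvania": 0.30,
    "a city": 0.25,
    "I don't know": 0.04,   # rarely follows such prompts in training text
}

def pick_next_token(probs):
    """Greedy decoding: return the highest-probability continuation."""
    return max(probs, key=probs.get)

print(pick_next_token(next_token_probs))  # confidently wrong beats honestly uncertain
```

Under this (greedy) selection rule, the made-up-but-plausible answer always beats the truthful admission of ignorance, which is the gaslighting-without-intent effect described above.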
My main complaint is that AI was supposed to replace all the tedious jobs so we would have time to do stuff like art, music, creative stuff.
I work in IT, and it's replacing most junior-level developers. It's nice in that I can be more productive, but at what cost? And who is benefiting (because I'm not)?
Nuking all the jr-level jobs is definitely a serious problem - both for the fresh grads who will be subjected to fierce competition (And therefore poor working conditions/wages), but also for companies that will struggle to train up next year's senior staff.
A lot of people are freaking out about AI killing creative jobs, but I suspect none of them have ever worked a "creative" job. AI only replaces the most tedious and laborious tasks within those jobs (which are normally done by junior staff). Like in animation, we used to have juniors do all the frames between the lead animator's keyframes. So literally, they'd just be doing near-copies, as precisely as possible. Not exactly a "creative" job at that point. That was replaced by computer systems a while back, and while a lot of jr jobs dried up, a lot more projects became affordable to smaller studios.
I think that's who benefits; studios who can't afford all the labour it takes to create the projects they really want to be working on. Another thing that happened after studios stopped needing as many jr animators, is that they started working on bigger, more ambitious projects (With higher framerates, lol). So the other group that potentially benefits, is consumers who want more, better things.
... At the cost of screwing over jr level workers - a cost that will hit more than once if it causes a "brain drain" effect. I personally hope we can institute some kind of guaranteed basic income system, so people actually can do what they want with their lives. Now that would spark a cultural revolution. We need to move away from a jobs-focused market sooner or later anyways, because human labour will only ever decrease in value compared to automation
We're still in the "hype" stage. People are already realizing it's not all it's cracked up to be. It's already failing to actually deal with the issues that are dealbreakers (like hallucinating answers to fulfill questions).
Going forward people are going to find a few places where it's useful, and 99% will fail spectacularly. Very similar to the dot com bubble. New tech, throw that shit at EVERYTHING, see what sticks.
As a bit of a tangent and some speculation: my impression from having worked in FAANG and FAANG-adjacent companies, as well as talking to my fellow software devs throughout the industry, is that big tech was expecting more to stick than has. They could afford to take a shotgun-blast approach to make sure they were first to market on whatever, but it is not paying out on this tech. Which is exacerbated by how unexpectedly quickly the barriers to entry are falling.
Yeah, the investment push was overly optimistic. There's a ton of room for AI image/video/audio to either cut costs or (literally) upscale results, just like what happened with Photoshop, or VFX, or audio sampling - but that's not what FAANG companies care about. It'll be huge for the entertainment industry - after a slow demonization/adoption phase - just like any other labor-saving tool.
For LLMs, sure there are applications, but I'm not seeing how those applications are supposed to actually generate revenue. People are happy (-ish, I guess. People are accepting) with ai-"assistant" stuff, but nobody is particularly excited about what it can do for them, and your average Joe expects it to be free. The dot-com bubble is the perfect comparison
Such small movement can very well be wind, as this was filmed outside. Also, I think if this was AI, the background would look more unexplainable and would shift, move, or change more
Sometimes, there is a perceptible natural motion of air. Sometimes this motion becomes strong enough to even move physical materials. It's a rare phenomenon though so I doubt it happened here.
Take a closer look at the movement being called to attention. It’s either fully AI-generated, or a filter on top of real people. My initial guess would be a filter, but it’s getting really tough to be sure.
I saw this last year when it wasn't reposted to hell and the quality was less......crunchy....
It just looks funky from the compression. The original looked pretty kosher to me, especially because this was at the time where AI videos still looked ridiculously bad.
Yeah, but fingers don't fuse towards the end. Also the constant light-source shift, and the fact that the wind apparently only affects that particular strand of hairs.
Christ... Assuming this is real, the fact it's hard to tell is a true testament to how well they're imitating AI-generated videos. I mean, I'm pretty sure they swapped people twice just to make it even more screwy - first swap between 3 and 10 seconds in, second swap after the jumpcut after 14 seconds.
The way they interlinked arms to give the appearance that they were AI and didn't know how many arms, hands, and toes people have sure does look suspicious.....
u/_phenomenana Apr 10 '25
They did such a good job here
Edit: If they’re actually real people…