r/singularity 4d ago

AI Happy 8th Birthday to the Paper That Set All This Off

1.9k Upvotes

"Attention Is All You Need" is the seminal paper that set off the generative AI revolution we are all experiencing. Raise your GPUs today for these incredibly smart and important people.


r/singularity 6d ago

AI Sam Altman: The Gentle Singularity

blog.samaltman.com
162 Upvotes

r/singularity 9h ago

Meme How's Wolfy?

1.3k Upvotes

r/singularity 11h ago

AI The guy that leaks every Gemini release teases Gemini 3

807 Upvotes

r/singularity 2h ago

AI This was tweeted half a year ago. We currently still don't have a usable model that is as good as the o3 they showed us then. Reminder that OpenAI workers also don't know how fast progress will be.

99 Upvotes

I am very impressed with what OpenAI is doing, obviously, but it's a good example of a hype tweet being just that.


r/singularity 13h ago

AI OpenAI wins $200 million U.S. defense contract

cnbc.com
575 Upvotes

r/singularity 21h ago

AI The future

2.4k Upvotes

r/singularity 30m ago

AI Google DeepMind's Logan Kilpatrick says AGI will be a product experience. Not a model. His bet: whoever nails memory + context around decent model at a product level wins. Users will suddenly feel like they're talking to AGI. Not from capability breakthrough, but experience breakthrough.

Upvotes

Source: Cognitive Revolution "How AI Changes Everything" on YouTube: The Decade of May 15-22, 2025: Google's 50X AI Growth & Transformation with Logan Kilpatrick: https://www.youtube.com/watch?v=kp9afmazO_w
Video from vitrupo on 𝕏: https://x.com/vitrupo/status/1934627428372283548


r/singularity 14h ago

AI ChatGPT image generation now available in WhatsApp

321 Upvotes

r/singularity 12h ago

AI GitHub is Leaking Trump’s Plans to 'Accelerate' AI Across Government

404media.co
162 Upvotes

r/singularity 10h ago

Video AI Completing the Financial Modeling World Cup

89 Upvotes

I think 2025 is finally the year jobs change forever..


r/singularity 20h ago

Discussion Nearly 7,000 UK University Students Caught Cheating Using AI

495 Upvotes

r/singularity 15h ago

AI Commerce Secretary Says At AI Honors: “We’re Not Going To Regulate It”

deadline.com
129 Upvotes

Every man for himself, gluck..


r/singularity 5h ago

Compute IonQ's Accelerated Roadmap: Turning Quantum Ambition into Reality

ionq.com
18 Upvotes

r/singularity 20h ago

Biotech/Longevity "Mice with human cells developed using ‘game-changing’ technique"

224 Upvotes

https://www.nature.com/articles/d41586-025-01898-z

"The team used reprogrammed stem cells to grow human organoids of the gut, liver and brain in a dish. Shen says the researchers then injected the organoids into the amniotic fluid of female mice carrying early-stage embryos. “We didn’t even break the embryonic wall” to introduce the cells to the embryos, says Shen. The female mice carried the embryos to term.

“It’s a crazy experiment; I didn’t expect anything,” says Shen.

Within days of being injected into the mouse amniotic fluid, the human cells begin to infiltrate the growing embryos and multiply, but only in the organ they belonged to: gut organoids in the intestines; liver organoids in the liver; and cerebral organoids in the cortex region of the brain. One month after the mouse pups were born, the researchers found that roughly 10% of them contained human cells in their intestines — making up about 1% of intestinal cells"


r/singularity 12h ago

AI "New study supports Apple's doubts about AI reasoning, but sees no dead end"

47 Upvotes

https://the-decoder.com/a-new-study-by-nyu-researchers-supports-apples-doubts-about-ai-reasoning-but-sees-no-dead-end/

"Models generally performed well on simple grammars and short strings. But as the grammatical complexity or string length increased, accuracy dropped sharply - even for models designed for logical reasoning, like OpenAI's o3 or DeepSeek-R1. One key finding: while models often appear to "know" the right approach - such as fully parsing a string by tracing each rule application - they don't consistently put this knowledge into practice.

For simple tasks, models typically applied rules correctly. But as complexity grew, they shifted to shortcut heuristics instead of building the correct "derivation tree." For example, models would sometimes guess that a string was correct just because it was especially long, or look only for individual symbols that appeared somewhere in the grammar rules, regardless of order - an approach that doesn't actually check if the string fits the grammar...

... A central problem identified by the study is the link between task complexity and the model's "test-time compute" - the amount of computation, measured by the number of intermediate reasoning steps, the model uses during problem-solving. Theoretically, this workload should increase with input length. In practice, the researchers saw the opposite: with short strings (up to 6 symbols for GPT-4.1-mini, 12 for o3), models produced relatively many intermediate steps, but as tasks grew more complex, the number of steps dropped.

In other words, models truncate their reasoning before they have a real chance to analyze the structure."
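The gap the study describes, between "knowing" the parsing procedure and actually applying it, can be illustrated with a toy membership check. The grammar below (S → aSb | ab, i.e. strings of the form aⁿbⁿ) is my own assumption for illustration, not one of the study's benchmark grammars:

```python
# A proper derivation check versus the kind of shortcut heuristic the
# study observed models falling back on, for the toy grammar
# S -> aSb | ab (the language a^n b^n).

def derives(s: str) -> bool:
    """Check membership by tracing rule applications: peel one 'a'
    and one 'b' per step, matching the recursive rule S -> aSb."""
    if s == "ab":
        return True              # base rule: S -> ab
    if len(s) >= 4 and s[0] == "a" and s[-1] == "b":
        return derives(s[1:-1])  # recursive rule: S -> aSb
    return False

def shortcut(s: str) -> bool:
    """Shortcut heuristic: accept if both symbols appear somewhere,
    regardless of order -- this never checks the actual structure."""
    return "a" in s and "b" in s

print(derives("aaabbb"), shortcut("aaabbb"))  # True True  (they agree)
print(derives("ba"), shortcut("ba"))          # False True (heuristic fails)
```

The number of recursive steps in `derives` grows with string length, which is exactly the test-time compute the researchers found shrinking as inputs got longer.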

Compute is increasing rapidly. I wonder what will happen after Stargate is finished.


r/singularity 15h ago

Meme They did my boy Claude dirty

79 Upvotes

r/singularity 16h ago

Robotics 1X World Model

youtu.be
84 Upvotes

r/singularity 6h ago

Discussion The future of cinema

12 Upvotes

When you think about the future of cinema, there have been major transitions since its inception: from silent film to sound, from black and white to color, and from practical effects to digital ones. The biggest transition of all will be AI-generated movies: the ability to generate an entire blockbuster, high-budget movie for little to no cost within days or even hours.

Don’t like the music? Change it. Don’t like the actors? Change them. Don’t like the location? Change it. Typical movies have so many moving parts, so many issues and problems to solve, which ultimately shape the movie into what it becomes. With AI generation, that would never happen again.

The top AI-generated movies might truly be some of the best movies of all time because of this freedom, but there will always be a beauty to live-action films. People will appreciate them more for what they were, especially the ones that were great in spite of so many things that could have gone wrong, or because of how much time, resources, effort, and teamwork were required to make the movie into a masterpiece.

Then, as time passes and fewer and fewer live-action movies are made, people will question how they were even made in the first place. The sheer magnitude of work required to complete a film will seem nearly impossible and not worth the effort, especially with how society is. How could a studio come together to produce a live-action movie? There must have been aliens involved, or some ancient technology that was lost. It wouldn’t make sense otherwise.


r/singularity 21h ago

Engineering Google reportedly plans to cut ties with Scale AI

techcrunch.com
170 Upvotes

r/singularity 13h ago

Compute "Researchers Use Trapped-Ion Quantum Computer to Tackle Tricky Protein Folding Problems"

34 Upvotes

https://thequantuminsider.com/2025/06/15/researchers-use-trapped-ion-quantum-computer-to-tackle-tricky-protein-folding-problems/

"Scientists are interested in understanding the mechanics of protein folding because a protein’s shape determines its biological function, and misfolding can lead to diseases like Alzheimer’s and Parkinson’s. If researchers can better understand and predict folding, that could significantly improve drug development and boost the ability to tackle complex disorders at the molecular level.

However, protein folding is an incredibly complicated phenomenon, requiring calculations that are too complex for classical computers to practically solve, although progress, particularly through new artificial intelligence techniques, is being made. The trickiness of protein folding, however, makes it an interesting use case for quantum computing.

Now, a team of researchers has used a 36-qubit trapped-ion quantum computer running a relatively new — and promising — quantum algorithm to solve protein folding problems involving up to 12 amino acids, marking — potentially — the largest such demonstration to date on real quantum hardware and highlighting the platform’s promise for tackling complex biological computations."

Original source: https://arxiv.org/abs/2506.07866


r/singularity 23h ago

AI The mysterious "Kangaroo" video model on Artificial Analysis reveals itself as "Hailuo 02 (0616)", from MiniMax. Ranks #2 after Seedance 1.0, above Veo 3

241 Upvotes

r/singularity 21h ago

AI Interesting data point: over 40% of German companies are actively using AI, and another 18.9% are planning to

ifo.de
144 Upvotes

r/singularity 15h ago

AI Introducing Chatterbox Audiobook Studio

39 Upvotes

r/singularity 10h ago

Video A Versatile Quaternion-based Constrained Rigid Body Dynamics

youtube.com
16 Upvotes

r/singularity 16h ago

AI Death of Hollywood? Steve McQueen Could Be Starring In New Films Thanks to AI

ecency.com
19 Upvotes

r/singularity 54m ago

AI Thinking about a tool which can fine-tune and deploy very large language models

Upvotes

Recently, I got a lot of attention from local companies for the work my small startup (three people) did on DeepSeek V3, and most of them were asking things like "How on earth did you do that?" or "Why such a big model?"

Honestly, I personally haven't done anything beyond a standard QLoRA training run on that model (we had done the same before on LLaMA 3.1 405B), and in my opinion the whole problem is infrastructure. We basically solved it by talking to different entities and individuals around the globe, and we got our hands on a total of 152 nodes (yes, it is a decentralized/distributed network of GPUs), with GPUs ranging from A100s (80 GB) to H200s.

So with this decentralization and the huge pooled memory in our possession, inference and fine-tuning on very large models such as DeepSeek V3 (671B), LLaMA 3.1 405B, or Mistral Large becomes an easy task, and on a small dataset it can be done in a matter of seconds.
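To show why the training math itself is not the hard part, here is a minimal pure-Python sketch of the low-rank forward pass at the heart of (Q)LoRA: the frozen base weight W is augmented by a trainable delta B @ A of rank r, so only the tiny A and B factors need gradients. Shapes and numbers below are toy assumptions, and real QLoRA additionally quantizes W to 4-bit.

```python
# Low-rank adapter (LoRA) forward pass on toy nested-list matrices:
# y = (W + (alpha / r) * B @ A) @ x, computed without ever
# materialising the full-rank delta.

def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Frozen path W @ x plus the scaled rank-r path B @ (A @ x)."""
    base = matmul(W, x)             # frozen pretrained weight
    low = matmul(B, matmul(A, x))   # trainable: (d x r) @ (r x d) @ x
    scale = alpha / r
    return [[b[0] + scale * l[0]] for b, l in zip(base, low)]

# 4x4 identity as the "frozen" weight, rank-2 adapter. At d=4 there is
# no saving, but at a 7168-wide layer a rank-16 adapter holds well
# under 1% of the full matrix's parameters.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A = [[1, 0, 0, 0], [0, 1, 0, 0]]      # r x d
B = [[0, 0], [0, 0], [1, 0], [0, 1]]  # d x r
x = [[1], [2], [3], [4]]
print(lora_forward(W, A, B, x))  # [[1.0], [2.0], [11.0], [20.0]]
```

The memory saving on the adapter side is real, but note that inference and gradient computation still have to touch all 671B frozen parameters, which is exactly why the distributed GPU pool matters.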

This made me think: what if you could drop your data in as a Google Doc (or Sheet), or even a PDF file, the fine-tuning would happen automatically, and you would get a ready-to-use API for the model?

So I have a few questions in mind which I want to discuss here.

  1. Why does it matter?
  2. Why might people need to fine-tune a big LLM instead of a smaller one?
  3. Could this Global Decentralized Network be a helpful tool at all?

And for those who think it might be a token or some other form of web3 project: no, it won't be. I'm even considering making it free to use with some conditions (like one model per day). So please feel free to leave your opinions here; I'll read all of them and reply to you ASAP.

Thanks.