What If GPT Didn’t “Learn” — It Just Found a Winning Lottery Ticket?
Everyone says large language models learn.
We train them.
We optimize them.
We scale them.
But what if that story is incomplete?
What if GPT doesn’t build intelligence from scratch…
What if it discovers it?
The Idea That Quietly Shook Deep Learning
In 2018, Jonathan Frankle and Michael Carbin proposed something almost heretical:
Inside every randomly initialized neural network, there exists a smaller subnetwork that is already capable of learning the task.
They called it the Lottery Ticket Hypothesis (LTH).
The claim?
A huge neural network contains a sparse “winning ticket” —
a subnetwork that, when trained alone (starting from the original initialization), matches the full network’s performance.
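The recipe Frankle and Carbin used to find such tickets is iterative magnitude pruning with rewinding: train the full network, prune the smallest-magnitude weights, then reset the survivors to their original initialization. Below is a minimal NumPy sketch of that core step, under stated assumptions — the "trained" weights are a random stand-in, and the helper name `magnitude_prune_mask` is illustrative, not from the paper.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude weights.

    `sparsity` is the fraction of weights to remove (zero out).
    """
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))
    if k == 0:
        return np.ones_like(weights)
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
init_weights = rng.normal(size=(8, 8))                     # theta_0
trained_weights = init_weights + rng.normal(size=(8, 8))   # stand-in for trained theta

# Prune by magnitude AFTER training, but rewind the surviving
# weights to their ORIGINAL initialization — that's the ticket.
mask = magnitude_prune_mask(trained_weights, sparsity=0.8)
winning_ticket = mask * init_weights
```

The rewind is the whole point: a ticket trained from its original initialization matches the full network, but the same mask with freshly re-randomized weights typically does not.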
Why “Lottery Ticket”?
Imagine you randomly initialize a huge neural network.
Most weights are useless.
But hidden inside that random initialization is a small subnetwork — a "winning ticket" — whose initial weights happen to be well-suited to learning the task.