Some math facts feel like magic tricks. The claim that
$$1 + 2 + 3 + 4 + \cdots = -\frac{1}{12}$$
is one of the biggest. Your instincts should scream “impossible,” because the partial sums clearly blow up to infinity.
And yet $-\tfrac{1}{12}$ keeps showing up in serious places, especially in physics and analytic number theory. The punchline is that this is not a normal sum. It’s a regularized value assigned by a specific method (often connected to Ramanujan and to the Riemann zeta function).
What people mean (and what they don’t)
Let’s be brutally clear: in the usual sense of summing an infinite series,
- The series $1 + 2 + 3 + 4 + \cdots$ diverges.
- Saying it “equals” $-\tfrac{1}{12}$ is false if “equals” means “converges to.”
So why does anyone write it?
Because mathematicians (and physicists) sometimes do something subtler: they take a divergent series and apply a regularization scheme—a rule that:
- agrees with ordinary sums when a series actually converges, and
- extends the idea of “sum” to some divergent cases in a consistent, useful way.
That extended value is what’s being reported. It’s like saying: “This object is too wild to measure directly, so we measure it using a calibrated instrument that also works on well-behaved objects.”
The instrument here is often zeta regularization (and Ramanujan had related ideas about assigning finite parts).
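To make the “calibrated instrument” idea concrete, here is a minimal Python sketch (my own illustration, with arbitrary function names and truncation choices) of Abel smoothing: multiply the $n$-th term by $x^n$, sum, and let $x \to 1^-$. It reproduces the ordinary sum of a convergent geometric series and assigns the value $\tfrac{1}{2}$ to the divergent Grandi series $1 - 1 + 1 - 1 + \cdots$.

```python
# Abel-style smoothing: replace sum a_n by lim_{x -> 1^-} sum a_n * x**n.
def abel_value(coeff, x, terms=10_000):
    """Truncated evaluation of sum_{n>=0} coeff(n) * x**n."""
    return sum(coeff(n) * x**n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    geometric = abel_value(lambda n: 0.5**n, x)    # 1 + 1/2 + 1/4 + ... converges to 2
    grandi    = abel_value(lambda n: (-1)**n, x)   # 1 - 1 + 1 - 1 + ... diverges
    print(f"x = {x}: geometric ~ {geometric:.4f}, Grandi ~ {grandi:.4f}")
# As x -> 1^-, the geometric value approaches its ordinary sum 2,
# while Grandi's series is assigned the Abel value 1/2.
```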
Three “sums” hiding in one sentence
When someone writes “$1 + 2 + 3 + \cdots = -\tfrac{1}{12}$,” they’ve smashed together three different concepts:
- Ordinary sum (convergence): $1 + 2 + 3 + \cdots$ diverges → no finite value.
- Summation methods (like Cesàro, Abel, Ramanujan summation): Some divergent series can be assigned values by averaging or smoothing procedures. Not all methods apply to all series, and different methods can give different “assigned values.”
- Analytic continuation via the zeta function: The Riemann zeta function is initially defined (as a convergent series) for $\operatorname{Re}(s) > 1$ by $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$. Then it is extended to other values of $s$ by analytic continuation. This extension yields $\zeta(-1) = -\tfrac{1}{12}$. And since formally $\zeta(-1)$ corresponds to $1 + 2 + 3 + \cdots$, people write the dramatic slogan.
The slogan is a shorthand for:
“Under zeta-function regularization (and related finite-part ideas), the series $1 + 2 + 3 + \cdots$ is assigned the value $-\tfrac{1}{12}$.”
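If you want to see that analytically continued value numerically, here is a tiny sketch using the mpmath library (the library choice is mine; any tool that implements the continued zeta function would do):

```python
from mpmath import mp, zeta

mp.dps = 30  # work with 30 decimal digits of precision

# At s = 2 the defining series converges: zeta(2) = 1 + 1/4 + 1/9 + ... = pi**2/6.
print(zeta(2))    # ~1.6449340668...
# At s = -1 the series diverges; the value below comes from analytic continuation.
print(zeta(-1))   # ~-0.0833333333... = -1/12
```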
The clean derivation (a worked example you can follow)
Let’s derive $\zeta(-1) = -\tfrac{1}{12}$ in a way that doesn’t require heavy complex analysis machinery.
We’ll use the Dirichlet eta function, which behaves better:
$$\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = 1 - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots.$$
For $\operatorname{Re}(s) > 0$, this series converges (nice!), and it relates to $\zeta(s)$ by:
$$\eta(s) = \left(1 - 2^{1-s}\right)\zeta(s).$$
Now plug in $s = -1$. The relation becomes:
$$\eta(-1) = \left(1 - 2^{2}\right)\zeta(-1) = -3\,\zeta(-1).$$
So, if we can figure out $\eta(-1)$, we immediately get $\zeta(-1) = -\tfrac{1}{3}\,\eta(-1)$.
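As a quick numerical sanity check of that relation (a sketch using mpmath, whose altzeta function implements the Dirichlet eta; the tooling is my choice, not something the derivation depends on):

```python
from mpmath import mp, mpf, zeta, altzeta  # altzeta(s) is the Dirichlet eta function

mp.dps = 25

# Check eta(s) = (1 - 2**(1 - s)) * zeta(s) at points where both series converge.
for s in (2, 3, 4):
    lhs = altzeta(s)
    rhs = (1 - mpf(2)**(1 - s)) * zeta(s)
    print(s, lhs, rhs)   # the two values agree, e.g. eta(2) = pi**2/12 ~ 0.8224670334
```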
But what is $\eta(-1)$? By definition (formally setting $s = -1$ in the series):
$$\eta(-1) = 1 - 2 + 3 - 4 + \cdots$$
This is still not convergent in the usual sense, but it is Abel-summable (a very standard “smoothing” method). Here’s the trick.
Consider the power series, for $|x| < 1$:
$$f(x) = \sum_{n=1}^{\infty} (-1)^{n-1}\,n\,x^{n} = x - 2x^2 + 3x^3 - 4x^4 + \cdots$$
We can compute $f(x)$ exactly using a known geometric-series identity.
Start with:
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots \qquad (|x| < 1).$$
Differentiate both sides:
$$-\frac{1}{(1+x)^2} = -1 + 2x - 3x^2 + 4x^3 - \cdots$$
Multiply by $-x$:
$$\frac{x}{(1+x)^2} = x - 2x^2 + 3x^3 - 4x^4 + \cdots = f(x).$$
Great, so for $|x| < 1$,
$$f(x) = \frac{x}{(1+x)^2}.$$
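If you want a machine check of that manipulation, here is a short sympy sketch (my choice of tool) expanding $\frac{x}{(1+x)^2}$ and confirming the coefficients are $(-1)^{n-1} n$:

```python
import sympy as sp

x = sp.symbols('x')

# Taylor expansion of x/(1 + x)**2 around x = 0:
print(sp.series(x / (1 + x)**2, x, 0, 6))
# x - 2*x**2 + 3*x**3 - 4*x**4 + 5*x**5 + O(x**6)
```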
Now comes the “regularization” step: Abel summation asks what happens as $x \to 1^-$ (approach 1 from below, staying inside the radius of convergence). Then
$$\lim_{x \to 1^-} f(x) = \lim_{x \to 1^-} \frac{x}{(1+x)^2} = \frac{1}{(1+1)^2} = \frac{1}{4}.$$
So we take $\eta(-1) = \tfrac{1}{4}$ in this Abel-regularized sense. Plug into the earlier relation:
$$\zeta(-1) = -\frac{\eta(-1)}{3} = -\frac{1}{3}\cdot\frac{1}{4} = -\frac{1}{12}.$$
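And here is a purely numerical version of the same chain (a sketch with my own truncation choices): evaluate the partial sums of $x - 2x^2 + 3x^3 - \cdots$ for $x$ close to $1$, and feed the result through $\zeta(-1) = -\tfrac{1}{3}\,\eta(-1)$.

```python
def eta_minus_one_abel(x, terms=100_000):
    """Truncated evaluation of sum_{n>=1} (-1)**(n-1) * n * x**n."""
    return sum((-1)**(n - 1) * n * x**n for n in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    eta_val = eta_minus_one_abel(x)
    print(f"x = {x}: eta(-1) ~ {eta_val:.6f}, zeta(-1) ~ {-eta_val / 3:.6f}")
# As x -> 1^-, the first column tends to 0.25 and the second to -0.083333... = -1/12.
```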
That’s the famous number.
Important nuance: we did not prove the ordinary sum $1 + 2 + 3 + \cdots$ equals $-\tfrac{1}{12}$. We computed a consistent regularized value via a smoothing limit and a functional relation.
So where does Ramanujan come in?
Srinivasa Ramanujan worked with divergent series in a way that often aimed to extract a meaningful finite part from expressions that blow up. In modern language, what’s commonly marketed as “Ramanujan’s theorem” here is really a family of ideas around:
- assigning values to divergent series in a structured way,
- manipulating generating functions,
- and extracting constant terms / finite parts after removing divergences.
Different authors use “Ramanujan summation” with slightly different formal definitions, but the vibe is consistent: don’t pretend the divergence isn’t there; separate the infinite growth from the finite residue.
Zeta regularization is one particularly clean, widely used modern framework that lands on $-\tfrac{1}{12}$, matching what people often attribute to Ramanujan-style finite-part reasoning.
Why physics keeps bumping into -1/12
If you ever see $-\tfrac{1}{12}$ in physics, it’s usually because someone is summing (or “summing”) an infinite tower of modes, like vibrations of a string or quantum field fluctuations, and the raw sum is infinite.
The workflow is often:
- Write down an infinite sum that diverges.
- Use a regularization method (zeta regularization is a popular choice).
- Extract the finite part that influences measurable quantities.
This doesn’t mean nature is secretly adding $1 + 2 + 3 + \cdots$ on a calculator and getting $-\tfrac{1}{12}$. It means the finite remainder after a principled subtraction is what shows up in predictions.
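To see what “extract the finite part” can look like in practice, here is a small sympy sketch of one standard trick (a generic illustration, not the calculation from any specific physical model): damp each mode by $e^{-\epsilon n}$, expand for small $\epsilon$, and discard the divergent $1/\epsilon^2$ piece. The finite remainder is exactly $-\tfrac{1}{12}$.

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Damped version of 1 + 2 + 3 + ...: sum_{n>=1} n*exp(-eps*n) = exp(-eps)/(1 - exp(-eps))**2
regularized = sp.exp(-eps) / (1 - sp.exp(-eps))**2

# Laurent expansion around eps = 0: a divergent 1/eps**2 term, then a finite constant.
print(sp.series(regularized, eps, 0, 4))
# Expansion: 1/epsilon**2 - 1/12 + epsilon**2/240 + ...
# After subtracting the 1/eps**2 divergence, the surviving constant is -1/12.
```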
One famous appearance is in the mathematics behind the critical dimension in bosonic string theory, where a normal-ordering constant involves a regularized sum of positive integers. That constant is tied to $-\tfrac{1}{12}$. The details depend on the model, but the recurring theme is the same: “infinity management” done consistently.
How to talk about it without lying (or killing the fun)
Here’s a responsible way to keep the wonder and keep the truth:
Say:
- “The series $1 + 2 + 3 + \cdots$ diverges, but under zeta-function regularization it is assigned the value $-\tfrac{1}{12}$.”
- “$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$ matches the ordinary sum where it converges, and analytic continuation gives $\zeta(-1) = -\tfrac{1}{12}$.”
- “This value is useful in contexts where you extract finite parts from divergent expressions.”
Avoid:
- “The sum $1 + 2 + 3 + \cdots$ is literally $-\tfrac{1}{12}$ in the usual sense.”
- “Infinity equals a negative number!” (sounds cool, but it’s misleading.)
A good mental model: regularization is like extending the notion of “area” to shapes with infinite spikes by subtracting the spike in a standardized way. You’re not denying the spike; you’re defining what you mean by the finite remainder.
Key takeaways
- In the ordinary sense, $1 + 2 + 3 + 4 + \cdots$ diverges and does not equal a finite number.
- The value $-\tfrac{1}{12}$ comes from regularization, most famously via the analytically continued Riemann zeta function: $\zeta(-1) = -\tfrac{1}{12}$.
- A clean route is: Abel-sum $1 - 2 + 3 - 4 + \cdots$ to get $\eta(-1) = \tfrac{1}{4}$, then use $\eta(s) = (1 - 2^{1-s})\,\zeta(s)$ to get $\zeta(-1) = -\tfrac{1}{12}$.
- “Ramanujan summation” is part of a broader idea: extracting a finite part from divergent expressions in a structured way.
- The result is useful because many real problems require consistent handling of infinities, not because the usual sum is secretly negative.
