Sometimes some warm-ups can help people understand what is essential in a proof and what is extra. They can help the simplicity of an idea shine through.
So the first warm-up is a purely geometric proof that the golden ratio is irrational. A number of years ago, I saw such a proof... but the diagram that went with it was laid out on a single line, and it got bogged down in a bunch of notation, and I kind of got it but it certainly didn't excite me.
Then one day I was looking at my business card and I realized that the proof was right there. When I was designing my business card, I tried to figure out a good logo, and I eventually settled on a golden rectangle and golden spiral:
So here's how the proof works. Saying that the golden ratio is irrational is the same as saying that line segments whose ratio is "golden" (AB and BD in the figure above) are incommensurable. Two (or more) segments are "commensurable" (meaning "can be measured together") if there is some (typically very short) ruler segment such that each of the commensurable segments can be "measured by" the ruler segment, i.e. can be constructed by laying the ruler segment end-to-end some (positive integral) number of times (or, equivalently, if it is a union of segments congruent to the ruler, which overlap only in their endpoints). Two segments are commensurable if and only if the ratio of their lengths is rational; if that is not fairly obvious to you, I encourage you to spend some time figuring it out.
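If you'd like to poke at the commensurable-iff-rational claim computationally, here is a small sketch (my own illustration, not part of the original argument): for two segments of rational length, the longest common ruler is just the gcd of the two fractions, which is the gcd of the numerators over the lcm of the denominators.

```python
from fractions import Fraction
from math import gcd

def common_ruler(a: Fraction, b: Fraction) -> Fraction:
    """Longest segment that measures both a and b a whole number of times.

    gcd of two fractions: put both over a common denominator and take
    the gcd of the resulting numerators.
    """
    num = gcd(a.numerator * b.denominator, b.numerator * a.denominator)
    den = a.denominator * b.denominator
    return Fraction(num, den)  # Fraction auto-reduces to lowest terms

a, b = Fraction(3, 4), Fraction(5, 6)
r = common_ruler(a, b)
print(r, a / r, b / r)  # ruler 1/12 measures them 9 and 10 times
```

An irrational ratio is exactly the case where no such ruler exists, which is what the proof above exploits.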
So to prove our result, we are going to argue by contradiction, assuming that AB and BD are commensurable. Measure off AB by laying out m copies of the ruler segment, and measure off BD by laying out n copies of the ruler segment. It follows that the segment CD is covered by n–m copies of the ruler segment, so CD is commensurable with the two previous segments. Now look at the top segment: DF is congruent to AB, and DE is congruent to CD, so the same argument shows that EF can be measured with an integral number of ruler segments. The same argument applies to GH. So on the one hand these segments are getting arbitrarily small, and on the other hand they can't get shorter than a single ruler segment, which is the desired contradiction. Nice, huh? What could be simpler!
[OK, for anyone who is a little squiffy about the rigor of "arbitrarily small" in this context, you can look at the sequence of how many rulers it takes to measure off each successive segment. It is an infinite strictly decreasing sequence of positive integers, and good luck finding one of those...]
One major side excursion you could take here is to understand how this picture relates to the fact that the continued fraction expansion of the golden ratio is [1 1 1 ...]. But I'm not going there now.
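For anyone who does want a quick numerical peek at that connection, this sketch (my own, purely illustrative) computes continued fraction terms by repeatedly taking the integer part and flipping the remainder; applied to the golden ratio, every term comes out 1.

```python
from math import floor, sqrt

def continued_fraction(x: float, n_terms: int) -> list[int]:
    """First n_terms of the continued fraction expansion of x."""
    terms = []
    for _ in range(n_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:       # rational input: expansion terminates
            break
        x = 1.0 / frac
    return terms

phi = (1 + sqrt(5)) / 2
print(continued_fraction(phi, 10))  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

(Floating point is fine here for a handful of terms; the rounding error only matters deep into the expansion.)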
Something about having the proof laid out in two dimensions and using the spiral pattern of squares managed to make the argument much clearer. And if you're sitting with someone with the picture (or card) in front of you, it's even easier. You just point, and you don't need to label anything. So, after marveling over this for a while, I started to wonder about a purely geometric proof that the square root of two is irrational, or, put geometrically, that the diagonal of a square is incommensurable with its side.
Now I have to admit, this is a perverse question. The algebraic (or perhaps more accurately the number-theoretic) proof is certainly one of the crowning gems of mathematics. Utterly simple, and unimprovable. But the truth is that my mind can be perverse. This question rolled around in my brain for a number of years and for some reason, I got serious about it the other night and figured out a proof I am happy with.
But first, so you don't get freaked out by a mildly complicated picture, let's take another warm-up. Let's look at the circle below on diameter AC (well, semicircle, but you know what I mean):
For any point D on the circle, ADC is a right angle. (If, like me, you don't remember the proof of that, it's fun to reconstruct it.) If you drop the altitude to the point B, you get two additional right triangles, and all three are similar. In particular ABD is similar to DBC, and it follows that c is to b as b is to a (c:b :: b:a). That is, b is the geometric mean of a and c. So already you can turn this into a way to construct the geometric mean of two segments. But also, note that the length of BD is less than or equal to the length of EF, with equality only if a = c. On the other hand, EF is a radius of the circle, so its length is (a+c)/2, since the diameter of the circle is a+c. So in our little warm-up we've proved that the geometric mean is less than or equal to the arithmetic mean (average), with equality only when the two numbers are equal. OK, and here is the only little bit of algebra I'm going to use in this article: if you cross-multiply c/b = b/a, you get b^2 = ac, so b = sqrt(ac), the algebraic formula for the "geometric mean."
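As a quick sanity check of the inequality we just read off the picture, here is a short numerical sketch (my own, and purely illustrative): BD is the geometric mean sqrt(ac), EF is the radius (a+c)/2, and the first never exceeds the second.

```python
from math import sqrt, isclose

# BD = geometric mean sqrt(a*c); EF = radius (a+c)/2
for a, c in [(1.0, 9.0), (2.0, 8.0), (4.0, 4.0), (0.5, 7.5)]:
    gm, am = sqrt(a * c), (a + c) / 2
    assert gm <= am  # the chord BD can't exceed the radius EF
    print(a, c, gm, am, "equal" if isclose(gm, am) else "gm < am")
```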
So two more things before we get started: first, I'm not actually going to prove that sqrt(2) is irrational, I'm going to prove that sqrt(2)+1 is irrational, and let you take it the rest of the way. Second, I hope you will forgive this kind of crappy drawing, I'm sure there are great drawing programs that could produce this in a snap, but I couldn't find one and learn it in a snap. Make believe that the two horizontal-ish lines are actually horizontal (and therefore actually parallel).
So CDGH is a square with side of length s and diagonal of length d, as is CHIB. The (semi)circle has center C and radius d. Now we are going to focus on the big rectangle ADGJ, whose height is s and whose base is s+d. We will prove that the height and base of this rectangle are incommensurable. When s=1, d=sqrt(2), and we will have shown that 1+sqrt(2) is irrational as promised. As in the first warm-up, we assume that the height and base are commensurable and argue to a contradiction. As in the second warm-up, the heavy-dashed line triangles ADG and GDE are similar, so our big rectangle ADGJ is similar to the little rectangle GDEF. And the little rectangle GDEF is congruent to its mirror image on the left, JABI (or if you want to be pedantic, which I know many of you do, the mirror image is IBAJ). So if there is a common ruler segment that measures both AD and DG, it also measures AB, since both BC and CD are congruent to DG. In other words: take the big rectangle ADGJ, knock two squares off of it, and you end up with a similar, smaller rectangle JABI, whose sides are also measured by our ruler segment. Because IBAJ is similar to the big rectangle, knocking two squares off of it gives yet a smaller similar rectangle, whose sides are also measured by the ruler segment. Continuing the process leads to the same contradiction as before, with an arbitrarily small segment needing to be one or more of our fixed ruler segments. Done!
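Here is a little numerical sketch of that descent (my own illustration): start with the sides of the big rectangle, knock two squares off the long side at each step, and watch the sides shrink while the shape stays similar.

```python
from math import sqrt, isclose

s = 1.0
long_side, short_side = s + sqrt(2.0), s  # the big rectangle, ratio 1+sqrt(2)
for _ in range(6):
    # knock two squares off the long side; the leftover strip is the new
    # (similar) rectangle, whose long side is the old short side
    long_side, short_side = short_side, long_side - 2 * short_side
    assert isclose(long_side / short_side, 1 + sqrt(2.0))  # still similar
print(short_side)  # shrinks geometrically toward 0
```

A fixed ruler segment obviously can't measure sides that shrink below its own length, which is the contradiction in the proof.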
So I hope that was easy to follow. By the way, I didn't actually use any result that I proved in the second warm-up, I just used the picture and the idea. But hopefully that—and the heavy dashed lines—helped you focus on the essentials in what otherwise would be a moderately complicated figure. For those of you who are really disappointed to leave that warm-up behind, we have proved that s is the geometric mean of d+s and d-s.
Also, for those of you who dug into the continued fraction handwaving above, we have proved that the continued fraction expansion of sqrt(2)+1 is [2 2 2 ...]. So the continued fraction expansion of sqrt(2) is [1 2 2 ...].
Once I figured this out, I poked around a bit on the internet, since obviously I was not the first to give this kind of geometric proof. I have to say that I was not impressed by the clarity of what I found. I was aided by an offhand comment from a mathematician acquaintance who remarked that this is "anthyphairesis," which turns out to refer to this process of using a "divisor" to lop off segments, and using the remainder as a new divisor to lop segments off of the previous divisor. In our case, we are applying anthyphairesis to the two sides of our big rectangle ADGJ. Our "divisor" is the height DG, and we can lop off two segments of length s (the height) from the base AD, before the remainder, the segment AB, is shorter than s. We now reverse roles, and use AB as the divisor to lop off segments from the height. But the result we just proved—that ADGJ is similar to JABI—shows that we will again lop off two segments, and the process will continue ad infinitum. This is why the continued fraction expansion of sqrt(2)+1 is [2 2 2 ...]; and as a result, the continued fraction expansion of sqrt(2) is [1 2 2 ...].
It is not hard to see that anthyphairesis terminates if and only if the starting segments are commensurable. Applied to segments of integer length, anthyphairesis is Euclid's GCD algorithm; when applied to commensurable segments, it will actually give you the "longest common ruler." Looking at it this way clarifies the relationship between Euclid's GCD algorithm and continued fractions. It is always humbling to see what the ancient Greeks accomplished more than a thousand years before the invention of the equal sign.
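That relationship is easy to watch in code. Here is a sketch (my own) of subtractive anthyphairesis on two integers: the subtraction counts it records are exactly the continued fraction expansion of their ratio, and the last nonzero divisor is the gcd.

```python
def anthyphairesis(a: int, b: int) -> tuple[int, list[int]]:
    """Subtractive Euclid: repeatedly subtract the smaller segment from the
    larger, counting subtractions before the roles swap.
    Returns (gcd, counts); the counts are the continued fraction of a/b.
    """
    counts = []
    while b:
        q = 0
        while a >= b:
            a -= b        # lop one copy of the divisor off
            q += 1
        counts.append(q)  # how many copies fit
        a, b = b, a       # the remainder becomes the new divisor
    return a, counts

print(anthyphairesis(17, 12))  # (1, [1, 2, 2, 2]) — cf of 17/12, a sqrt(2) approximant
```

Run on commensurable segments this always terminates; the infinite [2 2 2 ...] pattern for sqrt(2)+1 is precisely what termination rules out.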
In general I don't like to complain in public about the work of others, especially if I haven't checked it out thoroughly. But my sophisticated search techniques (which you will see if you click on this link) brought me to page 189 of "From China to Paris: 2000 Years Transmission of Mathematical Ideas," by Yvonne Dold-Samplonius, where I found this quote:
Fowler has shown that it is historically misleading to interpret anthyphairesis in terms of continued fraction expansion, because the ancient Greeks saw anthyphairesis as a process of subtraction, whereas continued fractions are the result of a process of division [Fowler 1999: 30, 313 (n.13), 366].

Now I am fully attuned to the dangers of regarding anthyphairesis as the Greeks' failed attempt to do what we do correctly as continued fractions; it needs to be considered on its own merits, separate from how it may or may not map to modern concepts. And I have not read the Fowler 1999 reference, where he may say something that makes sense. That said... Hello?! What is division but repeated subtraction until you can subtract no more, leaving a remainder?!
On pp. 2-3 of a PDF on continued fractions, Paul Hewitt does fairly well, showing that if you start with a square with side s and diagonal d, then a smaller square with side s' = d-s has diagonal d' = 2s-d. If s and d are integral multiples of a common ruler, then d' and s' will be integral multiples of the same ruler, which is the result we need. His result starts with the Pythagorean theorem d^2 = 2s^2, and proceeds with a lot of algebra to the result; a key intermediary is the fact, noted above, that s is the geometric mean of d+s and d–s. Note that I have drawn (using a dotted line KL) the new square, and labeled s' and d' above in Figure 3. I put all that stuff in parentheses because we didn't use it in the main proof.
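Hewitt's recurrence makes the infinite-descent argument very concrete. In this sketch (my own, using the integer approximants s=12, d=17 rather than exact values), the recurrence grinds a supposed whole-number side and diagonal down to nothing in a few steps; if the ratio were exactly sqrt(2), the descent would never bottom out, so no such whole-number pair can exist.

```python
# If s and d were both whole multiples of a common ruler, the recurrence
# s' = d - s, d' = 2s - d would produce an endless strictly decreasing
# sequence of positive multiples — impossible. With the Pell-style
# approximants 17/12 the descent quickly bottoms out instead:
s, d = 12, 17
while s > 0 and d > s:
    s, d = d - s, 2 * s - d  # Hewitt's smaller square
    print(s, d)              # (5, 7), (2, 3), (1, 1)
```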
The third reference I found is a 1979 paper by (I assume the same) Fowler; on p. 819 (p. 13 of the PDF), there is some discussion, including a figure which is poorly labeled and explained. He does, however, quote Proclus as saying:
The Pythagoreans proposed this elegant theorem about the diameters and sides, that when the diameter receives the side of which it is the diameter [that is, d–s], it becomes a side [s'=d–s], while the side, added to itself [2s] and receiving the diameter [2s–d], becomes a diameter [d'=2s–d, as in the Hewitt paper].

I have added the bracketed text. Two things stand out for me from this passage. First, we tend to take the technology of algebraic notation for granted, in much the same way that we tend to think of pre-industrial agriculture as "bucolic" rather than "technological." But both were massive innovations. Without the bracketed notes, disentangling this paragraph is a significant cognitive load. Algebraic notation bears that load effortlessly. Second, once you appreciate the massive cognitive load the ancient Greeks were working against, their mathematical achievements are that much more awe-inspiring.
OK, let's get back to Proclus's "elegant theorem of the Pythagoreans," to close the loop. Going back to our big rectangle ADGJ, its short side is s and its long side is d+s. Proclus is talking about making a new square of side s' = d–s, which is the short side of the far-right rectangle GDEF. What is the diagonal d' of this square? By our similarity result, the long side of GDEF is d'+s', as pictured. It is also s. So d' = s–s' = s – (d–s) = 2s–d, as we were to have proved. OK, remember above where I said there was only that one little bit of algebra? I lied. No doubt there is a purely geometric way to demonstrate this relationship.
If you have read this far, I salute you! And I hope that you have also gotten some (perhaps perverse) pleasure on this little ramble of ours.
PS Thanks to Dylan Thurston who got me thinking about the second warm-up by posting it on his Facebook page.
PPS When I showed Dylan a draft of this post, he immediately wondered whether it is possible to give an alternate proof using what we might call an "A-rectangle," that is, a rectangle where the ratio of long side to short side is sqrt(2). It has the property that when you bisect its long axis, you get two A-rectangles, at a 90 degree angle from your original. It turns out that A-series European paper (like A4 etc.) has this aspect ratio, so that when you do side-by-side copying of two sheets, the double sheet has the same aspect ratio—a cool fact that I had not been aware of. (Numberphile video here, more history here; it turns out that the aspect ratio goes back to 1786 and specifics of the A-series go back to the early days of the metric system, with Lazare Carnot in 1798.)
He was absolutely right... starting with such a rectangle and subdividing continuously, the proof is immediate. But again, it does rely on a little bit of algebra. If we define an "A-rectangle" as one whose long side is equal to the diagonal of the square whose side is the short side, can you find a purely geometric proof that bisecting such a rectangle divides it into two A-rectangles? It is possible to give a short proof of that fact, and I'll let you have the fun of looking for it, if you want. But that's the thing about math... you work on a problem, come up with a solution you're happy with, and someone who is either smarter or has been exposed to different things (or both) comes up instantly with something simpler. But we get something out of having multiple approaches... in this case, some of the interesting side-notes on anthyphairesis and continued fractions.
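For what it's worth, here is a quick numerical check (my own sketch, not a substitute for the geometric proof you're invited to find) that bisecting an A-rectangle yields two A-rectangles:

```python
from math import sqrt, isclose

# An "A-rectangle": long/short = sqrt(2), the A-series paper aspect ratio.
long_side, short_side = sqrt(2.0), 1.0
for _ in range(6):
    # bisect the long side: each half is again an A-rectangle, rotated
    # 90 degrees (its long side is the old short side)
    long_side, short_side = short_side, long_side / 2
    assert isclose(long_side / short_side, sqrt(2.0))
print("halving preserves the sqrt(2) aspect ratio")
```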