Error control in mathematical thinking and practice

I’m going next week to have what I expect will be a very stimulating discussion with Dirk Schlimm and Valeria Giardino, and in preparation I’m thinking through some of Valeria’s arguments about mathematical proof and diagrams. I’ve realized, in doing so, how some of my perspectives on mathematical practice have changed. (I haven’t read enough of Valeria’s work to know how much of what I’ll say replicates her ideas, which in any event are well worth checking out.)

1) On the one hand: Symbols vs. Diagrams

One of the classic arguments in philosophy of mathematics is over the epistemic value of symbolic proofs vs. diagrammatic arguments. Many people in the classical period (say, the early 1900s) argued that symbolic proofs were primary, because they conceptualized mathematics in terms of absolute proof and certainty, and symbolic proofs seemed to them to provide a kind of certainty that diagrams could not; diagrams, after all, can easily mislead.

2) On the other hand: Cognitive Psychology and common sense

More recently, decades of research in cognitive science have established what should be very obvious: symbols can also mislead! Learners and experts alike can be misled by the surface form of symbols, and can make plentiful errors even when the symbols are helpfully configured. Indeed, keeping track of symbolic rules seems to be extremely difficult, even with training. My daughter and I capture this in a basic rule of doing mathematics: first you write down your solution, then you look at it to see what you got wrong (not if you got something wrong, but what).

3) Where does this leave us?

My perspective now is that error control mechanisms (the same kind that guarantee quality control in factories and form the intellectual foundation of null hypothesis testing) are massively important in mathematical practice, and under-appreciated in the philosophy of mathematics. Many of our actual bodily practices (such as carefully writing one line of a derivation below another) exist to facilitate comparison and minimize error risk; error risk is one of the key things that must be managed in turning oneself into a mathematical doer. I guess that when I last looked at the symbols-vs.-diagrams debate I had not really thought this through, and spent my time taking seriously arguments about in-principle differences between linguistic and visual persuasion, and so on.
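To make the error-control idea concrete, here is a minimal sketch (my own hypothetical illustration, not anything from the philosophy literature) of one such mechanism: spot-checking a line-by-line algebraic derivation by plugging random numbers into consecutive lines and seeing whether they still agree. The derivation shown, simplifying (x+1)² − (x−1)², is an invented example.

```python
import random

# A hypothetical derivation, each line written as a function of x.
# Consecutive lines should be equal for every x if no step introduced an error.
derivation = [
    lambda x: (x + 1)**2 - (x - 1)**2,
    lambda x: (x**2 + 2*x + 1) - (x**2 - 2*x + 1),
    lambda x: 4*x,
]

def check_derivation(steps, trials=20, tol=1e-6):
    """Spot-check that consecutive lines agree on random inputs.

    Returns the index of the first suspect step, or None if all
    checks pass. Like null hypothesis testing, passing does not
    *prove* correctness; it just fails to detect an error.
    """
    for i in range(len(steps) - 1):
        for _ in range(trials):
            x = random.uniform(-100, 100)
            if abs(steps[i](x) - steps[i + 1](x)) > tol:
                return i
    return None

print(check_derivation(derivation))  # a correct derivation passes: None

# A botched step (dropping the cross term) is caught almost surely:
botched = [lambda x: (x + 1)**2, lambda x: x**2 + 1]
print(check_derivation(botched))  # flags step 0
```

The point of the sketch is that the check is cheap and external to the derivation itself, which is exactly the factory-quality-control flavor of error management described above.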

4) Why did the ‘classical’ mathematicians mess this up?

On my reading now, the early twentieth-century debate about persuasion and certainty was, at heart, a practical, craft-based conversation about error management. The reason some people have the intuition that diagrams are less secure (more risky) than symbols is that, well, sometimes they are! But this is a difference in cognitive profile, not in philosophical type: diagrams sometimes involve angles that appear similar but are crucially different, or rely on exemplars (“take an arbitrary triangle…”) that may yield incorrect generalizations. On the other hand, though, symbols also provide error opportunities that may have been underappreciated: many symbols look very similar to one another, and specific complex transformations are hard to memorize without error. This is part of why kids hate algebra!
