1) On the one hand: Symbols vs. Diagrams

One of the classic arguments in philosophy of mathematics is over the epistemic value of symbolic proofs vs. diagrammatic arguments. Many people in the classical period (say, the early 1900s) argued that symbolic proofs were primary, because they conceptualized mathematics in terms of absolute proof and certainty, and symbolic proofs seemed to them to provide a kind of certainty that diagrams could not; diagrams, after all, can easily mislead.

2) On the other hand: Cognitive Psychology and common sense

More recently, decades of research in cognitive science have established what should be very obvious: symbols can also mislead! Learners and experts alike can be misled by the surface form of symbols, and can make plentiful errors even when the symbols are helpfully configured. Indeed, keeping track of symbolic rules seems to be extremely difficult, even with training. My daughter and I capture this in a basic rule of doing mathematics: first you write down your solution, then you look at it to see *what* you got wrong (not *if* you got something wrong, but *what*).

3) Where does this leave us?

My perspective now is that error control mechanisms (the same kind that guarantee quality control in factories and form the intellectual foundation of null hypothesis testing) are massively important in mathematical practice, and under-appreciated in philosophy of mathematics. Many of our actual bodily practices (such as writing one line of a derivation carefully below another) exist to facilitate comparison and minimize error risk; error risk is one of the key components that must be managed when turning oneself into a mathematical doer. I guess when I last looked at the symbols-diagrams debate I had not really thought this through, and spent my time taking seriously arguments about in-principle differences between linguistic and visual persuasion, and the like.

4) Why did the ‘classical’ mathematicians mess this up?

On my reading now, the early 20th century debate about persuasion and certainty was, at heart, a practical craft-based conversation about error-management. The reason some people have the intuition that diagrams are less secure (more risky) than symbols is that, well, sometimes they *are*! But this is a difference in cognitive profile, not philosophical type: e.g., diagrams sometimes involve angles which may appear similar but be crucially different, or involve exemplars (take an arbitrary triangle…) that may yield incorrect generalizations. On the other hand, though, symbols also provide error opportunities that may have been underappreciated: each symbol looks very similar to other symbols, and specific complex transformations are hard to memorize without error. This is part of why kids hate algebra!


My daughter is studying order of operations right now, and I made her a page of problems in Graspable Math to solve, and thought I’d share them publicly, in case someone finds them useful someday.

If you don’t know Graspable Math, the big thing you should know is that it’s dynamic software: it lets you choose the actions taken in an algebraic derivation, but you don’t have to write out the results yourself. The program was written and conceived by Erik Weitnauer, Erin Ottmar, myself, and some other folks. It allows you to do actions that aren’t strictly PEMDAS, as long as the action yields the PEMDAS-approved answer, so YMMV, depending on your personal pedagogical goals. Here’s a video demonstrating the basics of this particular page:

And here’s the page itself:

And here’s her actual homework; she’ll use this to check her answers:

I’ve been playing around with making representations and demos in Graspable Math, a research project and free teaching tool my post-doc Erik Weitnauer, Erin Ottmar, and I are making together (all beta disclaimers apply). I thought I’d make a quick proof of Viviani’s theorem, and I was pretty pleased with the result. If you want to play with it yourself, here’s a link to the canvas–but be forewarned, as of Dec 1, 2016, there’s some glitch with our saving and loading, which breaks some of the links. You’re better off deriving the proof yourself next to the proof that loads. I demo that here.

Viviani’s theorem has some funny implications. For instance, barycentric coordinate plots (the coolest way to plot three values constrained to a constant sum) couldn’t work without it.

These plots come up all the time in my work, because we often have tasks where subjects have to choose among three items. They are also the best way to think about how you pay your money to Humble Bundle: you gotta pay all your money, so the sum is fixed, but the individual values are ‘free’ to vary.
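Here’s a rough sketch (my own, assuming the standard equilateral-triangle layout, so the coordinates and function names are illustrative, not from any library) of how a barycentric plot works, and why Viviani’s theorem is what makes it readable in reverse: the three perpendicular distances from a plotted point back to the sides recover the three values.

```python
import math

# Equilateral triangle with side 1:
#   A = (0, 0) is the pure-'a' corner, B = (1, 0) pure-'b', C = (0.5, sqrt(3)/2) pure-'c'.

def barycentric_to_xy(a, b, c):
    """Map three nonnegative values with a fixed sum to a point inside the triangle."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total  # normalize so a + b + c = 1
    return b + 0.5 * c, (math.sqrt(3) / 2) * c

def distances_to_sides(x, y):
    """Perpendicular distances from an interior point (x, y) to sides AB, AC, BC."""
    d_ab = y
    d_ac = (math.sqrt(3) * x - y) / 2
    d_bc = (math.sqrt(3) * (1 - x) - y) / 2
    return d_ab, d_ac, d_bc

x, y = barycentric_to_xy(2, 3, 5)
# Viviani: the three distances always sum to the triangle's height, sqrt(3)/2,
# and each individual distance is proportional to one of the three plotted values.
assert math.isclose(sum(distances_to_sides(x, y)), math.sqrt(3) / 2)
```

The constant-sum constraint is exactly Viviani’s constant-distance-sum, which is why any point in the triangle encodes a legal (a, b, c) split.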


It’s pretty clear from counting triangles that the answer is 1/8, but I like algebra, so I made a quick solution in Graspable Math, and learned something from it, so I thought I’d share.

Here’s the basic idea:

So at first glance, this is one of those proofs that feels like drudgery (didn’t the geometric flipping thing make more sense?). But the great thing about algebra is that it gives you new insights you probably wouldn’t have had otherwise. Let me ask you this: *why* is it 1/8th? Why did *this* construction lead to *that* ratio? Counting provides little insight: it just is. Now look at the algebraic proof, and you get a quick hint: the 8 came from three factors of 2.

Two of these 2’s appeared from the size difference between the large and small squares, and one appeared because we used the Pythagorean theorem, essentially because we rotated the square while keeping it bounded in extent.

We can picture it like this: we started with the big square, then we shrunk it down, and rotated it in a particular way (keeping its total width fixed). The first of those operations took the area down by a factor of four, and the second by a factor of two.
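I don’t have the figure here, so treat this as a sketch under the construction described above (halve the big square, then inscribe a rotated square with its corners at the small square’s midpoints); exact rational arithmetic makes the three 2’s visible:

```python
from fractions import Fraction

big_side = Fraction(2)
big_area = big_side ** 2                 # 4

# Shrinking: halving the side divides the area by 4 (the first two factors of 2).
small_side = big_side / 2
small_area = small_side ** 2

# Rotating: a square whose corners sit at the midpoints of the small square's
# sides has, by the Pythagorean theorem, side^2 = (s/2)^2 + (s/2)^2 = s^2 / 2,
# i.e. half the area (the third factor of 2).
tilted_area = (small_side / 2) ** 2 + (small_side / 2) ** 2

print(tilted_area / big_area)            # 1/8
```

Using `Fraction` rather than floats keeps each factor of 2 exact, so the 1/8 falls out as a ratio rather than a rounded decimal.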

This immediately suggests, and tells us the answer to, two related problems, both of the "find the shaded area" type:

and

Some people say that algebra ‘proves things rigorously’. Maybe, but the real advantage of algebra is that it gives you the opportunity to see things in new ways that help you understand **why** things are the way they are. Of course there are lots of answers to the ‘why’ question, and indeed lots of ways to get to the three magical 2’s in this example. But the insights we get from algebra tend to be compelling, non-obvious, and powerful. And that’s what algebra is for (okay, that and a lot of other things too).

Look here:

As Snopes notes, 1.3 billion is 13×10^8, while 300 million is 3×10^8, so this is $4.33 per person, not $4.33 million (shameless plug: you can see this nicely rendered using our Graspable Math app). As you can see, people who buy this are doing the basic arithmetic just fine, but are off by a factor of 1 million: they just don’t know how to relate the scale words, even though they know how many zeros each has. Of course, people may really *not* believe it, but our data is consistent with the idea that many people would.
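A quick check of the arithmetic (the “$4.33 million each” figure is the viral meme’s claim; the rest is just division):

```python
total_dollars = 1.3e9       # $1.3 billion
people = 300e6              # 300 million Americans

per_person = total_dollars / people
print(per_person)           # about 4.33 -- dollars, not millions of dollars

# The digit arithmetic in the meme is fine (13 / 3 is about 4.33); the scale
# is not: billion / million = 10^9 / 10^6 = 10^3, so "$4.33 million each"
# overshoots by a factor of about a million.
claimed = 4.33e6
assert 0.99e6 < claimed / per_person < 1.01e6
```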

Apparently, this has come up before. Again from Snopes:


Of course, something can be a myth either because it is an unattainable ideal, or because it is fundamentally wrongheaded. Some people have suggested that sure, it might be that even abstract ideas still retain *some* concrete features, but have fewer of them, somehow. So ‘abstraction’ is, itself, an abstract principle which maybe never happens purely in practice, but is still the limit of real abstractions, in the same way that a Platonic circle is the limit of real circle-like shapes. In contrast, I’ve usually argued that the myth of ‘abstraction’ is the latter: fundamentally wrong-headed, at least as an explanation of interesting complex reasoning in supposedly abstract fields like math or physics. Maybe this feature-stripping thing *happens*, but it’s not what we call “abstraction” in mathematics.

On my account, abstraction in mathematical reasoning usually involves a transformation in which powerful intuitions about a situation are gained by *replacing* one set of features with another. Abstracting is what happens when you take a situation with lots of distracting features, and find a new way to express that situation. This new way still has lots of features, but either they are quite different (and so differently distracting) from the features of the first situation, or the features are arranged so as to trigger perceptions and cognitions that are helpful, rather than harmful, in dealing with the appropriate implications. So abstraction is not (as the myth would have it) about removing features to isolate relations; instead, it’s about managing features to get relational work done.

An example is probably in order, and fortunately I have one handy. This will look totally mundane to anyone who has ever done any recreational math, but maybe is still useful for purposes of illustration. My daughter and I have been on one of our regular Vi Hart kicks, and got into hexaflexagons. In short, these are very foldy pieces of paper, which loop through a number of dynamic states. We built a few (all trihexaflexagons, for those playing at home), and started playing with them to see what we could figure out.

The first question we tackled was simple: how many faces does it have? There are a few ways to figure this out. We went with a simple empirical ‘feature augmentation’ strategy. Initially all the faces look like blank paper: not very feature-rich, and not very useful for counting.

We colored each bare face a different way until, after six colors, we ran out of new faces. Of course, we didn’t *know* we had them all, but we did, so that was alright.

Initially, my daughter thought that all of the faces were basically identical. In order to explore this, and to understand how each face connects to each other face, we again used an empirical strategy: we built up a network graph of face connections. Basically, we used a blank piece of paper, and drew a circle in the color of the face we currently had up, and then rotated the flexagon to get a new face. If we had not yet drawn that face on our paper, we drew it, and connected the two faces (old and new) with a line. If we had, we just drew the line. After a while, we got something that looked like this:
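The procedure we followed on paper is a tiny graph-building algorithm; here is a hedged sketch of the same bookkeeping in code (the face colors and the sequence of flexes below are invented for illustration, not a record of a real flexagon):

```python
from collections import defaultdict

# Call flex(old, new) each time a flex carries you from one face to another:
# the node set and edge set accumulate, exactly like the circles and lines
# we drew on the blank sheet.
edges = set()
neighbors = defaultdict(set)

def flex(old_face, new_face):
    edges.add(frozenset((old_face, new_face)))   # undirected "line" on the paper
    neighbors[old_face].add(new_face)
    neighbors[new_face].add(old_face)

for a, b in [("red", "blue"), ("blue", "green"), ("green", "red")]:
    flex(a, b)

print(sorted(neighbors))        # faces seen so far
print(len(edges))               # distinct connections drawn
```

The interesting structure (corner states vs. midline states, cycle directions) then falls out of reading the accumulated `neighbors` map, just as it fell out of redrawing the paper graph.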

Then we put the flexagon down, and started playing with the graph. Of course, we didn’t ‘care’ about the lengths of the lines, or the arrangement of the nodes, in the sense that we could have drawn them any way and had the same mapping from flexagon to graph. In another sense, we *did care* about this arrangement. The current arrangement has ugly features that aren’t very useful. So we drew it about 4 times, each time trying to untangle and regularize the lines. (Notice carefully the main thread: ‘untangling’ is a very feature-based activity, and so is ‘regularizing’. It’s just that the features are those of circles and lines and the constraints of graph-games, not those of flexagons. This is the core point I’m after). At the end, we got this:

Cool! The flexagon is a triangle!! A triangle of…something. We didn’t quite know or care what, at least not at first.

When we first drew the triangle, we didn’t have the arrow heads. Our next activity was to try to draw the direction of the state transitions (we had to be careful at this point, because the directions reverse if you flip the flexagon over). When we did, it immediately became obvious that the outer loops revolved in the opposite direction from the inner loop. That is, the inner loop was special–no rearranging of the graph could make this cycle the same as the other 3 cycles of three elements. Could we have figured this out if we had remained locked in the feature-space of the flexagon? Surely not. But could we have figured it out without a wonderful visual system capable of processing loops and triangles and orientations, and a good graph? Again, it seems unlikely. **Abstraction is the construction of useful features, not at all the absence of features**.

Once you have the triangle representation, a new question arises: what makes the corner states (which connect to only two other states) different from the midline states (which have four connections)? Cecily took her flexagon to school the next day, and figured one interesting thing out (I should mention that most of this work started after midnight–so she was probably pretty tired. Sorry Ms. Myers!!). Cecily noticed that while the corner states have three mountain folds and three valley folds that define the triangles that make up the hexagon (they are really not valley folds, exactly–more like trench folds–but the image is about right), the edge states have two sets of valley/trench folds. This "explains" why the corner folds have fewer outgoing transformations than the edge folds: you can transform the flexagon along any set of three trench folds, but not along the mountain folds.

The cognitively interesting point is that abstractions are bidirectional: observing the features of the flexagon enticed us to make the graph; then observing features of the (triangular) graph enticed Cecily to notice and care about differences she’d originally ignored in the flexagon. Now we can say "two ways to fold" to generally mean "multiple options or ways to go from here".

Nothing. Almost any mathematics educator or mathematically inclined person would tell you that this is just how it works. Mathematical understanding involves concrete, simple observations on ever-shifting concrete representations. We call it "abstract understanding" when the features of the derived case (the triangle-shaped graph, in this case) aren’t obviously literally present in the original case. But we’re *never* reasoning very far from the surface form.

Just to point it out, if Cecily and I go on to explore hexaflexagons further, we’ll start doing it algebraically, using concepts from groups and permutations. This won’t change anything fundamental; we’ll just swap out the regularities of circles and lines for those of letters and visual patterns. We’ll still be swapping one set of features for another, because that’s just how it works.


So, recently some very smart folks at Chicago published a fascinating study on the effectiveness of *Bedtime Math*, a set of math books and apps that encourage you to replace some of your bedtime story reading time with bedtime math reading time. They find very encouraging results in an impressively large randomized field study (See Figure 1, reposted from the article).

Unfortunately, their study suffers from a severe oversight. The authors failed to report on fully half of the most relevant data regarding *Bedtime Math.* A fuller examination including both sets of data should lead us to question the overall value of the intervention. Clearly, I hate to accuse my colleagues (and in this case friends) of failing to report relevant data—a serious charge; I wouldn’t do so without extraordinarily good evidence. But I happen to have excellent evidence. You see, I purchased Bedtime Math several months ago, and read it with my children frequently. I can say from this *n=2* field study, that the Bedtime Math books clearly fail at their intended purpose.

It’s not that children don’t learn *math* from these books. I have no reason to doubt the results presented in the article. No, no: the dangerous lie of so-called ‘Bedtime Math’ is in the first half of its name, as a brief examination of Figure 2 (which does not appear in the final version of the article) will show.

You see, *kids just don’t go to bed* after this “Bedtime” activity. First, the problems are funny, clever, and delightfully (for a 6 year old) gross. Worse, math is an intrinsically active behavior. Sure, when you read a story, a kid may have questions, add new ideas, and so on—but they can also enjoy the story while being quiet and still, snuggling and listening, gently drifting off to sleep. Math isn’t like this. You can’t do math unless you think, count on your fingers, wonder about new interesting things, suggest new and different problems, interrogate the structure of the situation—nothing that is even *remotely like* falling asleep. Kids often love this kind of mathematical play, so it revs them right up, just at the time you’re ready to settle down for the night. If the authors had included data on sleep-deprivation in the exhausted parents of Bedtime Math readers, they would have come to *very* different conclusions.

It’s too late for me: my son demands every night that we either reread Bedtime Math, or Bruce Goldstone’s equally engaging books *Great Estimations* and *Greater Estimations*. Neither path leads to sleep. Please: save yourself. Don’t buy these books.

Obviously, I’m a fan of these books and this work. I *do* believe, though, that there is substance in this observation. We–both we the public and we the research community–often misunderstand the nature of mathematics, and think of it as something that is like reading, or like thinking. It’s more like painting. It’s an active, engaged process best done with a clear mind and on the move. In its best moments, math is artistic, clever and funny. I hope that beyond helping math anxious parents to help their children succeed in school calculations, books like these can help parents and kids access the delight that people who love math feel about it–whether in the morning or the evening.

*Update Notice: The original title of this post, “The Dangerous Lie of Bedtime Math”, was changed at the request of the Bedtime Math Foundation. I found their reasons compelling, and agreed to change it to something that would be a little less misleading out of context. They also mentioned that Laura Overdeck DID successfully put her own children to sleep for many years with her drafts.*

What’s weird to me is how many news outlets seem unable to reason for themselves about this, and to realize that this money just can’t be ‘lost’. All it takes is knowing the single-year budget for the Pentagon, roughly, and multiplying by, roughly, 20. I understand how intelligent readers can be misled, but how can so many headline and content writers not stop to evaluate or think about the numbers they are writing about?
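The back-of-the-envelope version (the 17-year span is my reading of the 1996 audit deadline against the article’s 2013 publication; the result is a rough implied annual budget, not a sourced figure):

```python
# Sanity check on the headline number: $8.5 trillion "unaccounted for" since
# 1996 is roughly the Pentagon's entire budget over that stretch, not a pile
# of mysteriously vanished cash.
unaudited_total = 8.5e12          # dollars, per the Reuters piece
years = 2013 - 1996               # assumed window: 1996 deadline to publication

implied_annual_budget = unaudited_total / years
print(f"${implied_annual_budget / 1e9:.0f} billion per year")
```

That lands right around the familiar ballpark for annual defense spending, which is the whole point: ‘unaudited’ here means ‘everything’, not ‘lost’.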

The Reuters article would like you to consider the entire Pentagon budget “unaccounted for”, which is true in the sense that the article details (the Pentagon’s myriad accounting practices are very shoddy), but it’s *nothing like* the common person’s understanding of the word ‘lost’.

For instance, from the original article:

**Q: How much taxpayer money has the Defense Department spent that has never been audited since the 1996 deadline?**

A: About $8.5 trillion.

True, but not exactly transparent. The article doesn’t give any estimate of plausible bounds on the total error, leaving the reader in a murk of uncertainty. Not giving any summary of the total error allows a particular sort of dishonest move, common to large-number conversations: we leap from tiny uncertainty to complete ignorance. The trouble is that, as a practical on-the-ground matter, accounting is always filled with small uncertainties, even post-auditing. This is just the truth of life, and by itself, isn’t problematic. Let’s compare a few statements that are comparable in terms of relative uncertainty:

- **What did the DOD spend its last 10 billion dollars on?** It isn’t known.
- **How old is your ten-year-old child, in minutes?** What’s that? You don’t know? Then your child’s age *isn’t known.*
- **Exactly how many words did Shakespeare write, including in his letters, journal, diary, and receipts?** Oh dear, the literary output of Shakespeare is *totally unknown; it’s all gone.*
- **What’s Newton’s constant of gravitation?** According to the National Institute of Standards and Technology, and the dishonest logic I’m mocking here, *we have absolutely no idea*. We lost it.

You see my point? We just can’t turn errors on the scale of 10^-5 into complete failures of data tracking. Both may be bad, but they aren’t the same.

To be fair, the accounting practices *do* appear to be pretty bad. Unfortunately, the authors of the Reuters article offer, as far as I can tell, no summary estimate of how bad the problem might be. The individual errors reported actually mostly seem very small, from the particular accounting-relevant perspective of tracking how much money is lost. For instance, the article notes: “In the Cleveland DFAS office where Woodford worked, for example, ‘unsupported adjustments’ to ‘make balances agree’ totaled $1.03 billion in 2010 alone, according to a December 2011 GAO report.” Well, that sounds bad—but DFAS handles, apparently, the full defense budget of $500 billion. This would be like finding that a moderately wealthy family could not document $150 of spending over a year, and concluding that they had squandered their whole $80k income. Caveat: the Cleveland office is just one office, so the total may be more like $1000 out of $80k, say. This is rather more than I spend in a year on coffee, but not much more.
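The family-budget analogy can be made explicit (both the $500 billion handled and the $80k household income are the rough round numbers from the text, not audited figures):

```python
unsupported = 1.03e9        # "unsupported adjustments" in 2010, per the GAO report
handled = 500e9             # rough total handled by DFAS

fraction = unsupported / handled
household_income = 80_000

# Scale the same proportion down to a family budget.
print(f"{fraction:.2%} of the money handled")
print(f"${fraction * household_income:.0f} of an $80k income")
```

The proportion comes out around a fifth of a percent: bad bookkeeping, certainly, but not remotely ‘the whole budget is gone’.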

The article is inflammatory, fascinating, important, hugely misleading, and I think largely right in its broader message. It’s important. Read it. What bothers me, though, is just that if anyone thought about the magnitudes of the numbers they were talking about, they would come to very different conclusions than anyone in fact seems to be coming to. Don’t come away thinking that the Pentagon spent half our national debt on corruption and graft: come away thinking that the Pentagon has standards of accounting that are fairly typical of your average small business, and that we probably want them to be higher than that (and the budget itself to be lower!!).

Just to be really specific:

The Watertown Daily Times says: “The Pentagon can’t account for $8.5 trillion it spent (Reuters), money that might better have covered the Department of Veterans Affairs’ $2 billion budget shortfall”. This is double-counting nonsense. The $8.5 trillion already includes whatever was spent on DoVA, and would no matter what. It’s the whole budget, as common sense would have told you.

Daily Kos asserts: “Combine ‘Known’ Pentagon waste (like the 1.5 Trillion dollar F35) with missing pentagon money and you have a good chunk of our entire national debt represented.” Only, you guessed it, you can’t combine any money paid for the F35 with the $8.5 trillion, because the $8.5 trillion already includes it. Actually, I guess this might be okay, because as far as I can tell, the cost referred to includes about $1 trillion in costs that would be paid over the next 30 years. Still, you can’t really include that in our national debt.

Daily Kos also says: “Oh really, you’re concerned about deficit spending and the debt? Fully 1/3 of the national debt is money we sent the Pentagon and they can’t tell us where it went. It’s just gone.” This is right, if by ‘just gone’ you mean ‘tracked only quite a lot more accurately than anyone I know tracks their own expenses’.

Sigh.

First, though, some quick impressions: It was exciting to see *so* much mathematical and numerical cognition going on. There were many fascinating posters, and also several great talks, a symposium, and a keynote focused on mathematics. People are saying exciting, new things, and it was fascinating to hear. I particularly enjoyed, as did many people, Kevin Mickey’s demonstration of the power of the unit circle as a central representation for trigonometry.

The unit circle is very useful among even expert reasoners working in trigonometry. For instance, the diagram below was spontaneously produced by a graduate student in mathematics solving a problem involving quite elementary trig identities, while participating in one of my studies this summer (more on that project another time).

It can be seen more purely here

For those who haven’t seen it, the unit circle is used to motivate the notion of sin, cos, and tangent, in the following way: the circle has a radius 1. For each point that lies on the circle, the sin is defined as the projection of the point to the y axis, and cos as the projection to the x axis. Zero degrees is defined as the point lying along the positive x axis, and positive angle goes counterclockwise around the circle. The image shows that this definition allows the meaningful embedding of right triangles (but not other shapes) into the image, indicating the relationship between right triangles and trig functions.

Kevin Mickey (along with his mentor, Jay McClelland) demonstrated that people who rely on the unit circle to infer formal^{1} rules, such as sin(-x) = -sin(x), do so very successfully, and, crucially, more successfully than those who rely on their memory for such rules. Furthermore, teaching people the unit circle interpretation helps them also become more successful. It’s a cool result, and certainly serves as a demonstration of the power of the unit circle. In time, the unit circle may join the number line, the Cartesian coordinate system, and the function in the pantheon of immensely potent mathematical visualizations.
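To make the picture concrete, here is a minimal sketch (mine, not Mickey and McClelland’s materials) of reading sin(-x) = -sin(x) straight off the unit-circle definition given above:

```python
import math

def unit_circle_point(theta):
    """cos is the x-projection of the point at angle theta; sin is the y-projection."""
    return (math.cos(theta), math.sin(theta))

# Negating the angle reflects the point across the x axis, which flips the
# y-projection (sin) and leaves the x-projection (cos) alone -- giving both
# sin(-x) = -sin(x) and cos(-x) = cos(x) from one picture.
for theta in [0.3, 1.0, 2.5]:
    x1, y1 = unit_circle_point(theta)
    x2, y2 = unit_circle_point(-theta)
    assert math.isclose(y2, -y1) and math.isclose(x2, x1)
```

The contrast with rote memory is the point: on the circle, the sign rules are one reflection, not two separate axioms to recall.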

But what *is* a visualization, anyway? What makes one powerful? Why are the unit circle and the number line so *good*, and other strategies, such as remembering and applying the axioms sin(-x) = -sin(x) and cos(-x)=cos(x) so *bad*? Why are these representations so successful in grounding other ways of thinking?

There’s a common story here, which I heard repeatedly at Cognitive Science this year. It’s a bad story, and I’m afraid the ways that it is bad slow advancement in mathematics education, especially in teasing apart concepts like embodiment, grounding, and concreteness. In what follows I’m putting together a number of different conversations I had over the conference, so I cannot attribute this argument to any one individual. For concreteness, I’ll attribute it to Frankenstein. Frankenstein’s story goes like this: Frankenstein thinks the number line and the unit circle are *meaningful*, while other representations, such as axioms or ‘formal notations’, are *meaningless*. In general, Frankenstein thinks, we want to ground meaningless symbols in meaningful ones. Trying to work with meaningless systems is more-or-less hopeless, because meaningful situations are the source of and license for moves made in meaningless systems.

Of course, Frankenstein must explain how some situations come to be meaningful, and others come to be meaningless. Sadly, Frankenstein has a ready answer in a simple neo-Fregean notion of a syntax/semantics distinction, and this is where Frankenstein goes wrong. On this account, which has come down to us through modern philosophers of language and mind such as Steve Pinker, John Searle, and Jerry Fodor, some chunks of the physical world have semantic content and others do not—that is, some *refer*, and some are *referred to*. For our purposes, we can call the referring objects ‘symbols’, even though sometimes we want to make finer distinctions. Real situations and objects have intrinsic or inherent meaning, in virtue of the relationships into which they enter. Dogs are *doggy*. The word ‘dog’, on the other hand, is arbitrary—it just acquires its meaning parasitically through its association with real dogs. A great and powerful thing about this story is that it can be used to explain mathematical symbols, verbal symbols, and the symbols of the mind, all using the same coherent notions.

Frankenstein says that the unit circle and the number line are good groundings because they are ‘real’, or anyway ‘closer’ to real, while symbolic forms just get their meaning through reference to real situations. Symbol manipulations just create a Chinese Room of dancing symbols, but can never reach out to real meaning.

The breakdown of the world into the symbolic and the symbolized has had a long run of popularity, but is currently contentious. Many clever folks have now realized that symbols such as ‘dog’ have strong internal structure, important properties, and engage in particular kinds of relations with other words, which gives them their own kind of intrinsic meaning—a logic laying the ground for complex statistical analyses of the distribution of words in rich languages. Furthermore, these days we question whether the mind really *has* something like pure referential symbols, often arguing instead for locking or coupling relationships between distinct intrinsically autonomous but richly interconnected physical systems. Weirdly, many of the people who happily espoused the neo-Fregean view of, say, the unit circle are the *very people* who are very skeptical of this distinction in other walks of (scientific) life. This is strange, because while personally I think that there *might* be symbols in the mind, and this might even be the only right way to think about language, the one place there pretty clearly *aren’t* referential symbols is in notational mathematics.

Despite its long heritage and good family, the notion that algebraic notations are meaningful in virtue of their reference to ‘real’ situations is misleading and unnecessary. Furthermore, it obscures the nature and value of forms like the unit circle, and hides the real value and importance of grounding. It also blinds us to the value of the formal algebraic notation.

**On the contrary:**

The real unit circle diagram:

- Is profoundly useful as a grounding for a trigonometric algebra, but
- Is not ‘close’ to perceptual experience, instead requiring substantial training of spatial systems to use correctly, in part because of how its use is constrained by mathematical truth.
- Bears both intra-systemic content and content derived from its connections to other systems, including formalizations, through modeling relationships.

Real algebraic formalizations:

- Do not refer. Instead, systems of them can bear modeling relationships to other systems.
- Are restrained visuo-spatial forms, with non-arbitrary structure and intrinsic meaningfulness.
- Are not ‘far’ from perceptual experience, instead being themselves spatially extended structures affording spatial transformations, but requiring substantial training of spatial systems to use correctly, in part because of how their use is constrained by mathematical truth.
- Bear both intra-systemic content and content derived from their connections to other systems, including formalizations.

As you might notice, those descriptions are very similar. That’s because there are NO qualitative differences between the unit circle and an algebra. There *are* quantitative differences, which we overlook at our peril, and which Frankenstein’s belief in the referentiality of symbols blinds him to. Both belong to a family of cultural artifacts, which we might call ‘mathematical systems’.

Mathematical systems are like this: they are imaginary structures, which are imagined using principally spatial reasoning systems and perceptual routines. That is, one imagines spatially extended mental objects, and reasons about them by making mental operations like affine transformations, mental visualization, marking off, zooming, shifting attention, and so on. Importantly, however, mathematical systems are not objects given by experience, and they do not match precisely, ever, the usual processes of the spatial reasoning systems involved in them. Rather, those systems must be trained^{2} to allow only certain extreme processes, and to disallow others. To give some obvious examples: in the unit circle, one may not ‘zoom in’ to the lines of the axes to use their spatial extension, nor the circle itself. One may not move the triangle to new orientations such as placing the right angle at the origin (though in the geometry of congruent forms such a transformation is actively encouraged). One does not consider the length of the chords cutting from the intersections of the coordinate axes and the circle (length root 2, of course). One certainly does not rotate the circle into three or four dimensions. If one does these things, one is no longer playing the trigonometry game—one has left the system. There are many other things one *may* do, but only carefully, and which are not part of the usual practice: considering the length between the x-axis projection of a line and the circle, for instance.

Formal notations are also mathematical systems: one imagines a ‘world’ made up of squiggles, and one allows only certain spatial transformations of those symbols. One then considers the inhabitants of that world. Certain spatial routines are allowed, others are not. Importantly, these often explicitly (and always implicitly) involve the specific written shapes. For instance, consider Cantor’s diagonalization proof^{3}: we explicitly consider the *digits of the real number*, and physically imagine *a book of written symbols.* At the least, one must agree that Cantor’s diagonalization—a proof so profound and fundamental it arguably plays the unit circle role for a broad swath of infinite cardinality theory and computational undecidability proofs—explicitly trades in the visual form of supposedly ‘arbitrary’ and ‘intrinsically meaningless’ symbols. Note also that *nothing* in Cantor’s proof reaches out to the supposed ‘meaning’ of the reals. We just don’t care whether these are conceptualized as points on a line, or magnitudes, or what. In fact, worse for the neo-Fregean than this, we *do care* that they are *not* any of these things—the decimals are a particular manner of constructing strings—they are the symbols on the page. (A minor tweak is required to actually mesh this proof with common models such as points on the line.)

Frankenstein wants to cry foul at this point. Frankenstein worries that Cantor’s proof is not axiomatic, but rather reasoning about the forms of the formalisms. Maybe, but this kind of reasoning is at least ubiquitous and foundational in work with formalisms, and is frankly probably more important in modern mathematics than axiomatic work. And the neo-Fregean ‘arbitrary symbol with derived meaning through referentiality’ really has no good account for this sort of thing. This is a problem, because *lots* of reasoning in abstract algebra, category theory, proof theory, and other pretty common branches of mathematics has just this sort of quality: one reasons—formally or informally—about the properties of jumbles of symbols under certain allowed transformations. This means that the neo-Fregean is unable to cope with most of what goes on in modern algebra.

Furthermore, when pressed, Frankenstein has no good account for how *actual humans* engage in symbolic transformations of the algebraic sort other than the one listed above: constrained perceptual-motor transformations over real or imagined symbolic forms. So the ‘mathematical systems’ account nicely explains both axiomatic and non-axiomatic approaches to formalisms, while the Frankenstein account explains neither.

Frankenstein is confused because in real life, especially in the very elementary mathematics most of our subjects trade in most of the time, symbol systems are usually embedded in particular mappings, or modeling relationships. We think of multiplication as area, or as repeated addition—we think of number strings as reflecting points on a line, or algebraic expressions as capturing relationships among collaborating painters or moving trains. And mappings really *are* very important in modern mathematics—indeed, one way of construing category theory is that it is the study of such mappings. It’s just that these are not taken by the advanced mathematician for meanings—multiplication does not ‘mean’ repeated addition, any more than the real numbers ‘mean’ the points on a line. Real numbers, like any other mathematical concept, have no meaning. Instead, they may have definitions *within* a mathematical system, and models *across* two systems.

In a mapping situation, one takes two mathematical systems, and finds ways to embed one into the other such that conclusions drawn in one system have a natural interpretation in the other. A simple example might suffice. I’ll take one in which, arguably, the visualizations over the unit circle are unfamiliar and idiosyncratic, but the symbolic transformations are familiar and reassuring:

Take the unit circle, with an arbitrary triangle inscribed.

Now, take that triangle, and flip it over. Then put the point that used to lie on the circle at the origin, and place the point that used to lie at the origin on the unit circle.

Now make two squares, both with one point at the origin, and both with one point at the point that has the *height* of one triangle, and the *width* of the other, like so:

Now, what’s the combined area of the two blue squares? You may be able to work this out through some geometric considerations. Here’s an easy algebraic way: the two squares have areas cos(x) * sin(90-x) (because the angle of the origin point of the old triangle is 90 minus the new angle), and sin(x) * cos(90-x). Then, because sin(90-x) = cos(x) and cos(90-x) = sin(x), the combined area is cos^2(x) + sin^2(x) = 1.

Isn’t that pretty? Adding squared cos and squared sin is (if you’ve done much trig lately) a very familiar and comforting proof. The things I was doing with squares and flipping triangles were weird. Be that as it may, I made a convincing correspondence between transformations in system 1 (the circle) and in system 2 (the symbols), such that one can import conclusions from one into the other.
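If you want to convince yourself numerically that the two-squares total really is constant, here is a minimal sketch. The function name is my own, and angles are taken in degrees:

```python
import math

def combined_square_area(x_deg):
    """Total area of the two squares: cos(x)*sin(90-x) + sin(x)*cos(90-x),
    with the angle x given in degrees."""
    x = math.radians(x_deg)
    y = math.radians(90.0 - x_deg)
    return math.cos(x) * math.sin(y) + math.sin(x) * math.cos(y)

# Since sin(90-x) = cos(x) and cos(90-x) = sin(x), this is
# cos^2(x) + sin^2(x) = 1 for any angle:
for angle in (10, 37, 62.5, 80):
    assert abs(combined_square_area(angle) - 1.0) < 1e-12
```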

Frankenstein is no fool: this looks a lot like a referential relationship. But it’s not: it’s a truth-preserving mapping between two autonomous (spatial/perceptual/dynamic) mathematical systems. Mappings like these go in lots of directions, and often never involve symbols. When symbols are involved, sometimes it’s through reasoning about axioms and doing lots of substitution, as in the above; sometimes it’s through reasoning about their constituents, as in Cantor’s proof. But this is not a syntax/semantics relationship—it’s a syntax/syntax *alignment*. Those tend to be useful, for a number of reasons I won’t go into here in detail, but two big ones are that error-prone transformations in system (a) tend to be robust in system (b), and vice versa, and that system (a) and system (b) are likely to carve a space differently, so that the alignment provides insight into their structure. This isn’t to say that there’s nothing but syntax going on, but rather that each system is autonomously ‘meaningful’–really, I mean to include something much like Miriam Bassok’s great ideas about semantic alignments, and certainly something like my own perceptual alignment account (ungated). However, to be rigorous the bindings between formal systems are usually required to be articulated syntactically.

Poincaré, in his famous fight with David Hilbert, actually got the lack-of-semantics issue more-or-less right: Poincaré argued that one cannot do axiomatic geometry, because geometric imaginings are *about* geometry, while axiomatic imaginings are *about* formal notations. He resisted the processes of alignment and mutual inference which became, like it or not, the core characteristic of 20th century mathematics. But he was right that the relationship of a formalism to a geometric or trigonometric structure is not one of reference.

This doesn’t mean that mathematical systems aren’t *meaningful*. They can be meaningful or meaningless in all kinds of ways, both through their internal structure, and in the kinds of modeling relationships they bear. It’s just that little of this meaning is particularly referential in character, and it’s shared by geometric and formal systems.

There’s one last promissory note I have to pay, then I’m done: Why is the unit circle so good, and formalisms so bad? In other words, we all agree that playing with symbols is often error-prone, confusing, and has a feeling of meaninglessness not shared by the unit circle and the number line. What gives, if not a nice syntax-semantics distinction?

**What gives, and what takes**

Once we agree that formalisms and mathematical diagrams are alike in *type*, we can still see that they are very different in *emphasis*. Here’s a characterization of what makes a mathematical structure good for grounding:

- It requires *minimal regimentation*—the training process required to play the appropriate math game pretty robustly is relatively lightweight. This is what Frankenstein *ought* to say instead of saying that the unit circle is a real-world object, or given by experience, or whatever.
- It is *stable in memory*. That is, not many things have to be remembered to get it right, and those things are unlikely to get mixed up.^{4} Relatively speaking, formalisms tend to involve lots of relatively confusable items, and therefore to be pretty bad groundings.
- It is *generative*: many truths are easily extracted from it, and those truths are important for the system that is to be grounded. In the unit circle, the things that can be easily inferred or ‘read off’ from the circle are just those identities most important for trigonometry. Ditto for number lines and everyday arithmetic and magnitude understanding.
- This one is probably actually incidental, but worth mentioning: it is richly interconnected with other knowledge. Rich interconnections can reduce errors, increase stability, and help with reasoning generatively. However, if you play with the unit circle for a while, and try wandering ‘off the beaten path’, you’ll quickly realize how many things you *don’t* know about it. It’s not that you have rich knowledge about that shape—it’s that you know which thoughts to think, and which to avoid.

This preliminary characterization is better than the ‘grounded in experience’ one, because it provides a clear operationalization, respects that *no* mathematical structure is grounded in real experience, and allows that the occasional formalism—y=mx+b, a^2+b^2=c^2, AB=BA, (a->b)^(b->c) -> (a->c), ~~a=a and so on—can itself carry that feeling of familiarity and concreteness we normally associate with ‘grounding’. Finally, it explains why common shapes tend to be better than formalisms for grounding: mathematical structures that resemble regular shapes are likely to be stable in memory and to require minimal regimentation.

**Conclusion**

Formalisms are not referential, and mathematical structures like the unit circle are not ‘experiential’. Mathematical structures all involve regimentation of perceptual systems to align them with rigorous, culturally constrained operations, and in this way all are alike. In each, ‘meaning’ is contained in the autonomous web of permitted transformations. There is no ‘semantics’ or ‘meaning’ in mathematical reasoning with formalisms, but rather a conceptually symmetric relationship of inference-preserving syntax-syntax alignments among intrinsically meaningful systems. Nevertheless, formalisms tend more often to be used to derive truths about other systems, especially at low levels of mathematical sophistication. Potent structures for mathematical grounding require minimal regimentation, are stable in memory, and are generative. These tend to be geometric, social, or graphical, rather than formal, because formal systems require extensive regimentation and impose high working memory load.

Steven Phillips wrote a few comments, and kindly agreed to let me post them here. He says:

*I think we share similar misgivings about the Fregean perspective of treating the meaning of a statement as derived from the meaning of its constituents (elements) and their inter-relations. I think the basic problem with this approach is that it depends too much on having an appropriate meaning for the constituents.*

*Category theory may help in this regard where the "semantics" are based on the relations (equations) between arrows between objects, not specific objects, nor even specific arrows. In this way, meaning is not required to be "grounded" by reference to specific objects or elements.*

*As a "concrete" example, the usual notion of a group is a set that has an identity element, an inverse for each element, and a binary operation, satisfying some relations among the operations and elements. From a Fregean perspective, the meaning of a group depends on the meaning of the elements of the set. The first step in abstracting away from specific elements is to recast each part as a function: the identity (unit) element u is the (nullary) function that picks out the element u, each inverse is obtained from a unary function, and the binary operation is a binary function. The next step is to capture the axioms of a group, e.g., u x a = a = a x u for every a in set S, as equations among the functions, effectively creating a "point free" version of the axioms, i.e., one which does not make explicit reference to the elements. At this point we have a group in the category Set (of sets and functions). Here, we note that the only properties of Set that are needed for this construction are finite (Cartesian) products (for reasons I didn’t explain). Thus, we can further abstract away from objects that are sets and arrows that are functions to any (abstract) category with finite products. In fact, we just need three abstract objects, which we can label 0, 1, and 2, and three abstract arrows, which we can label u (unit), i (inverse) and m ("multiplication"), that satisfy the relevant equations (commutativity diagrams). This category is our algebraic "theory" of groups, and the functors from this category to Set are the (set-valued) models of the theory, i.e., the (set-valued) functorial semantics of the theory. Other functors, such as those into Top (the category of topological spaces and continuous functions) provide for topological groups (topological semantics), and so on.*

*Regarding the grounding of mathematics generally, I’m a bit sceptical that it’s (solely) grounded in geometry/quantity, since that seems to leave out topology, which is a big chunk of mathematics. Topology is what you do when you don’t have a notion of distance, or quantity. Perhaps intuitions about geometry, etc., or their failures, motivated the invention of topology, but that is different from saying that topology is grounded in geometry.*

*In general, I would be concerned about overly relying on geometrical intuitions as a grounding of mathematical concepts. For example, the concept of a functor between categories can be thought of, geometrically, as a map from a circle enclosing a bunch of arrows connecting points (category) to another circle enclosing another bunch of arrows connecting points (category). The image of a functor can then be thought of as a smaller circle enclosing a subset of points and arrows within the larger circle. So, is the image of a functor a category? Geometric intuition suggests yes, since the smaller circle is just another arrow-enclosing circle. In fact, the image of a functor is not necessarily a category, which can be confirmed by trying to generate a counter-example that satisfies the axioms of a functor, but not the axioms of a category. (An exercise for the reader!) Here, although geometric intuition plays a (necessary) role, geometry is not sufficient, since you need to know what the axioms are, which are given symbolically.*

*So, why is the unit circle such a good model of trigonometry? I guess it’s because it has lots of extra structure to play around with that the symbolic model lacks. Why is it not a good model? I guess it’s because some of that extra structure is not in the right correspondence relationship. Note that by correspondence relationship, category theory is not restricted to isomorphism, which appears to be the concept that predominates in cognitive science. Rather, for category theorists, adjunctions are far more important.*

*In short, I suppose mathematics is grounded both geometrically and linguistically, and I guess that category theory should be well-placed to see the connection.*

I replied that my intuition is that we *do* rely on spatial/geometric reasoning as a *mechanism* implementing lots of mathematical thinking, but that geometric intuitions are not *determinative of truth*. In Steve’s example, we are misled precisely because geometric intuitions are our usual first-line mode of thinking. However, in category theory (as in lots of mathematics), we accept as true only things that conform to careful and precise coordinations among different intuition systems. In this case, there are two ways to get to the error. First, if you use the usual sorts of (spatial!) pattern-matching processes across axiomatic systems, you come to a conclusion that differs from the geometric intuition. This isn’t totally determinative either. You have now to align your arrows and your proof, to make sure everything ‘went right’. I’ll also note that geometrically, if you draw a circle inside the image category, you actually *can* draw a circle whose contents aren’t a category (by, say, cutting the circle through some of the arrows). So geometric ‘intuitions’ are trainable and tunable, and I think a lot of this goes on too.

The point is that we have to have multiple descriptions: one that captures processes or mechanisms of reasoning, and another that lets us in-principle agree on how truth is supposed to be assigned. I guess I bet these aren’t always that related.

- Because the word ‘symbol’ is over-subscribed in this context, I’m going to refer to good old fashioned symbols like sin(x), AxB=BxA, and y=mx+b as “formalizations”. This puts the cart before the horse, because, as the saying goes, you can’t say FORMalization without saying “FORM”. I’ll argue that these formalisms, like other mathematical structures, acquire intrinsic meaning through a combination of cultural training and their, you know, physical form. But for now, if you’d like to pretend that ‘form’ means something that doesn’t include the, you know, *form* of the symbol, you’ll be in good company. Most classical symbolists think so too. ↩
- Tyler Marghetis likes the word ‘regimented’, and I think it’s an excellent one. I’ll steal it without further attribution. ↩
- Here’s a quick reminder, which leaves out some important details. The goal of diagonalization is to prove that the set of real numbers—say those between 0 and 1—cannot be listed (this is the same as putting them in 1-1 correspondence with the naturals). The proof is a proof by contradiction. One imagines a book which lists, in some order, infinitely many reals. Can that book contain all the reals? It cannot. Let’s say the book started like this:
0 | 0 . 1 0 1 0 1 1 1 1 0 1 0 0 1 0
1 | 0 . 0 0 0 0 0 1 1 0 1 0 1 0 0 1
2 | 0 . 1 0 0 1 1 1 1 1 1 1 1 1 0 0
...

Where I’ve made things easy by doing it in binary. Now, we make—make with our minds’ hands, as a temporally extended act of visuo-spatial imagery, not to put too fine a point on it—a *new* real number which is not in our list. Call it X. What we do is this. Start with 0, followed by a decimal point. Then make the first digit different from the first digit of the first listed number:

X = 0 . 0 ?

Now, whatever ? is, X cannot be the first item in our book. Make the second digit mismatch the second item in the book:

X = 0 . 0 1 ?

Now X cannot be numbers 0 or 1. Do the same for the next item:

X = 0 . 0 1 1 ?

And X can’t be items 0, 1, or 2. Keep doing this *spatially extended visual operation on a supposedly formal representation* (subtle, aren’t I?) and you’ll *see* that X cannot appear anywhere in the book—it mismatches each item. ↩
- For instance, exact spatial positions are hard to keep in memory, while spatial relations are relatively easy. Familiar frequent things are easier to recall than unusual and complex ones. The unit circle just requires a circle, a cross, and a line—three of the most common and simplest shapes of mathematics. Then one must only remember where the line goes, where 0 degrees is, which is sine and which cosine, that tangent is rise over run, and that the degrees go counterclockwise. This may seem like a large number of items, but compare it to the number of things you have to remember to keep track of the dozen or so trigonometric relations embedded easily in the figure! ↩
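The diagonal construction above is mechanical enough to sketch as a short program. Here the ‘book’ is just a list of digit rows, and all names are illustrative:

```python
def diagonal_number(book, n_digits):
    """Build the first n_digits of a number X that differs from row i of the
    'book' at digit i, so X cannot appear anywhere in the list."""
    return [1 - book[i][i] for i in range(n_digits)]

# The three rows of binary digits from the example book above:
book = [
    [1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
]
x = diagonal_number(book, 3)
print(x)  # [0, 1, 1] -- i.e., X = 0.011..., mismatching each listed row in turn
assert all(x[i] != book[i][i] for i in range(3))
```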

In many ways, we scientists have done a spectacularly poor job explaining to taxpayers what we do. One way, which is perhaps not entirely our fault, is that we have done a poor job explaining just how cheap our research is. Here I tell you about a project my lab conducted, which suggests (a) that people vary dramatically in how they map the cost of objectively small budget items onto a number line, even when given numerical information about costs, and (b) that support for these budget items is elastic in terms of psychological relative cost—people who are better at mapping the true cost of the programs onto number lines view them more favorably than those who don’t.

It’s that time of year again. House Republicans have noticed that the National Science Foundation still exists, and have once again demanded that science research—and social science in particular—be cut substantially. It’s actually not as bad this time around as in some past years: social science is facing a 42% proposed cut; in past years, the starting proposal has been even higher. The proposal also puts heavy restrictions on climate change research.

And, once again, it’s time to face the fact that we scientists have done a spectacularly bad job explaining what we do, and why it is worth public investment. Some of the reason for our failure is perhaps that we scientists feel entitled to do our work; some is that, objectively, science is an amazingly good investment, and social science has arguably contributed to growth in GDP, as well as to outcomes for veterans, escape plans in the face of natural disasters, and educational practice.

Nevertheless, support for public research is relatively low, and funding for the public universities that are the major site for this research is under pressure. One problem is that there is a widespread misconception that professors spend the majority of their effort teaching in classrooms. Of course, teaching is an important part of our job, but classroom-related teaching is about 20-30% of most faculty members’ efforts. The bulk of our time goes to research—research that creates much of the new knowledge we go on to teach in our courses. As a result, students end up paying for research that benefits the entire tax base, and taxpayers don’t realize how this value is achieved.

But over the last few years my lab has been researching another likely cause of opposition to the NSF and other research budgets*. Budgets for NIH, NSF, IES, DARPA, and other large, famous federal research agencies are typically expressed as numbers. For instance, the NSF budget is about $7 billion annually. And **people don’t know how much that is. Worse, they work with those numbers incorrectly, and when they do, they tend to end up making predictably bad judgments that likely mismatch their real desires.**

I give a super-fast overview of our methods here. There are a lot more details in the published papers. **If what you want is less detail, here’s the one-sentence version: about 40% of people are biased on number lines such that they systematically and hugely overestimate the value of smaller ‘big’ numbers relative to much larger ones, when those numbers cross between millions, billions, and trillions.**

The major—but not the only—way we have examined large number use is with the *number to position* task. Here, we ask people to put a number on a number line. For instance, we might ask people to put 280 million on a line from 1 thousand (or 0) to 1 billion. There is quite a bit of complex structure in how people respond to this task, and I won’t explain it all in detail (but see our papers). The short version is that people divide the line up into ‘chunks’ based on the scale word used—for instance, a line from 1 thousand to 1 billion would be divided into a ‘thousands’ chunk and a ‘millions’ chunk**, like this:

The thing is, the way I just drew this, it’s very wrong. You see, there are 1,000 millions in a billion (that’s what a billion is, right? 1,000 million, at least here in the US (link)). But about 40% of our subjects do something quite like this, placing “million” somewhere between 20% and 50% of the way across the line. The others also seem to divide the line up, but they do it more or less at the right place, which is about here:

So about 40% of people not only get big numbers wrong, but get them *systematically grossly wrong*. Does that matter? It might—if these behaviors reflect something that is happening when we compare costs. Let’s look at how this might work in an example: the $11 million that was budgeted at one point in 2013 for political science research, out of the $7 billion total the NSF was getting that year.

If you’re one of the more accurate, linear responders, that looks (on the line!) like not that much money. But if you’re one of the non-linear people, then $11 million looks like *a lot*. I collected data from 50 mechanical turkers to verify this. First I asked them to place 11 million on a line from 0 to 7 billion (the NSF was not mentioned). Then I gave them 8 other number line judgments on our standard “thousand to billion” line. I used the latter 8 judgments to bin people into the two groups, which I’ll start calling *linear* (that’s the people who get it right) and *categorical*. The difference is large and right on track with our model predictions. Categorical responders rate $11 million as 20% of $7 billion, while linear responders are closer to 1%.
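To make the contrast between the two response patterns concrete, here is a toy sketch. The ‘categorical’ rule below is a deliberately crude caricature (equal chunks of the line per scale word), not the actual response model from our published papers:

```python
def linear_position(value, line_max):
    """Linear responder: position is simply the proportion of the line."""
    return value / line_max

def categorical_position(value, line_max):
    """Crude caricature of a 'categorical' responder: give each scale word
    (thousands, millions, billions) an equal chunk of the line, and place
    the value linearly within its chunk. Illustrative only."""
    chunks = [(1e3, 1e6), (1e6, 1e9), (1e9, 1e12)]  # thousands, millions, billions
    width = 1.0 / len(chunks)
    for i, (lo, hi) in enumerate(chunks):
        if lo <= value < hi:
            return width * (i + (value - lo) / (hi - lo))
    return linear_position(value, line_max)  # fall back outside the chunks

# Where does $11 million land on a line running up to $7 billion?
print(round(100 * linear_position(11e6, 7e9), 2))       # 0.16 -- a sliver of the line
print(round(100 * categorical_position(11e6, 7e9), 1))  # 33.7 -- deep into the 'millions' chunk
```

Under the caricature, a small number with a big scale word gets dragged far up the line, which is exactly the direction of the bias we observe.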

We don’t know yet whether number line judgments actually causally impact people’s political views. But we have some evidence that they might at least correlate with them***. Last summer Brian Guay conducted a research study in my lab, through Time-Sharing Experiments in Social Science (or *TESS*). TESS conducts nationally representative online surveys using standard polling methods on important topics for social scientists, and that’s just what they did for us. The survey sample consisted of about 2,100 adults.

Here’s what we did: First, we gave each person 4 number line judgments, and used those to divide them into two groups. Then we asked people to make 4 judgments about the federal budget****. In each, we gave people a total budget for an entity, and an amount allocated to some particular program in that budget. These were actual spending figures that had been recently reported in the media. Then we asked whether the agent should spend “a lot less”, “a little less”, “about the same”, “a little more” or “a lot more” on that particular program.

The four items were: spending on climate change research in the NSF ($133.53 million of a $5 billion NSF research budget); spending on weapons systems by the federal government ($114.9 billion of a $3.45 trillion federal budget); spending on unmanned drones by the U.S. Customs & Border Protection agency ($88.6 million of a $10.35 billion CBP budget); and US federal government foreign aid ($52 billion of a $3.45 trillion federal budget, and fairly notorious).

Obviously the details depend on exactly how you measure things. We had decided to add together***** the numerically coded ratings to get a ‘total support measure’, because that seemed simple, and also to analyze separate effects for each question, because that seemed interesting. We included only people who answered all the questions. The graphs present something slightly easier on the eyes, but basically tell the same story. What they indicate is that, overall, responding linearly on the number line task was associated with a shift toward supporting maintained or increased funding for these government programs—that is, toward giving a response of at least “about the same”. The total raw shift in support was about 4 percentage points, from 59% supporting these programs on average among linear responders to 55% among categorical responders (standard error around 0.9%).
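For concreteness, here is how the simple additive coding works. The responses below are invented for illustration, not drawn from the survey:

```python
# Invented responses for illustration; not data from the survey.
SCALE = {"a lot less": 1, "a little less": 2, "about the same": 3,
         "a little more": 4, "a lot more": 5}

def total_support(ratings):
    """Sum the numerically coded Likert ratings across a respondent's items."""
    return sum(SCALE[r] for r in ratings)

def share_supporting(all_ratings):
    """Share of individual answers at or above 'about the same'."""
    codes = [SCALE[r] for ratings in all_ratings for r in ratings]
    return sum(1 for c in codes if c >= SCALE["about the same"]) / len(codes)

respondents = [
    ["about the same", "a little more", "a lot less", "a little more"],
    ["a little less", "about the same", "about the same", "a lot more"],
]
print([total_support(r) for r in respondents])  # [12, 13]
print(share_supporting(respondents))            # 0.75
```

As footnote ***** notes, summing Likert codes treats an ordinal scale as metric; this sketch just shows the bookkeeping.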

Of course, some of that is explained by correlations between the groups: accurate number line responding was moderately correlated with income, education, and gender. However, even when these were included as covariates in a multiple regression, linearity continued to carry unique variance; perhaps more importantly, a preliminary SEM analysis suggests that linearity is affected by overall education level, but also mediates education’s effect on these judgments. There are lots of ways that education probably influences support for cheap government programs, of course; however, our mechanical turk studies suggest a possible causal intervention—training people on the number line affected their immediately subsequent number line judgments.

Nor does political affiliation easily explain the results: more linear people take a more liberal position in supporting increased NSF spending on climate research, but a more conservative one in approving more spending on drones to secure the US border.

If you want the full details, here’s the same graph as above, broken down by question. You can see that support for climate change research and spending on drones are much more sensitive to these phenomena than foreign aid and weapons—is that a real difference? I don’t know. It would be interesting to see how elastic support is for each of the programs, but until these patterns are better replicated****** we won’t really know for sure.

The main moral is this: giving people context to help them understand the significance of large numbers *may* lead a fairly large proportion of them to misinterpret the relative values involved in predictable ways. Practically, this matters, because contextualizing information is often used by the media to frame values, and often crosses scales in just this way. It’s important for people to realize that, even when it *doesn’t* shift their position all that much, it may have a larger impact on how they interpret these statements. Saying that the NSF spends $11 million of their $7 billion budget on political science******* may sound either of two very different ways, depending on how the reader interprets the numbers.

This failure to correctly deal with large numbers impacts our support for cheap programs, but as my friend John Opfer points out, it also plausibly impairs our ability to cut deficits appropriately. Politicians often propose budget cuts which are objectively tiny—but are probably accepted as moderate progress by a fairly large proportion of the population. Again, we think it’s important to carefully express numerical information in context in a way that avoids these typical misinterpretations.

The second moral is more fraught, but relates to the question of how we *should* present large numbers. We don’t know—we don’t have all the right data to determine what methods of presentation will be most effective. Here are some guesses, though, some of which are informed by data:

1) Present all your numbers using the same base. That is, **don’t** say “the proposal cuts $300 million in climate research, from $1.4 billion to $1.1 billion.” **Do** say “The proposal cuts $300 million in climate research, from $1,400 million to $1,100 million” (as in xkcd).
2) **Do** present linear visualizations of your quantities.
3) **Do** remind people of how the number system works, *every time*.
4) **Do** give percentages where meaningful and possible.

Fanny Chevalier has collected a large number of scale representations, and done some interesting analysis of the kinds of scales people use. You might find her analyses helpful too.
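The first guess, putting every figure in the same base, is easy to automate. A tiny sketch, with a helper name of my own invention:

```python
def in_millions(dollars):
    """Render a dollar amount in millions, so compared figures share one base."""
    return f"${dollars / 1e6:,.0f} million"

# "The proposal cuts $300 million in climate research,
#  from $1,400 million to $1,100 million."
print(in_millions(3e8), in_millions(1.4e9), in_millions(1.1e9))
# $300 million $1,400 million $1,100 million
```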

* This research was partially funded by the NSF. We weren’t NSF-funded (and I never have been yet), but TESS, the group who funded and conducted our survey, is funded by the NSF.

** Before you ask, it doesn’t seem to matter much whether numbers are printed as numerals (e.g., 280,000,000) or hybrid number words (e.g., 280 million).

*** Full disclosure: these data have not yet been published in a peer-reviewed journal, or even presented at a peer-reviewed conference. (For that, we’ll do structural equation modeling, so the analysis won’t even be the same.) You heard it here first. Lots of things you hear on the internet turn out to be wrong. Caveat emptor.

**** We were unable to counterbalance the order of the number line and political judgments in this experiment, though the internal scales were presented in random order. There is, of course, some possibility that mere exposure to the number lines changed people’s views. No study is final. As I said in ***, caveat emptor.

***** This treats the Likert scale as a fully metric scale, which is inappropriate. Better techniques exist, and our results generalize to most. But they are harder to describe, so here I’m sticking with the simple.

****** In the lab, as part of piloting these materials, we *have* replicated an effect of linearity on NSF results three times with mechanical turk populations during pilot work—that’s three out of three attempts. In each, we also included Foreign Aid spending, and as I recall in two there was a significant effect, but not the third. These are NOT preregistered trials, and mix those intended as exploratory and those intended as confirmatory. As I said, more care is needed.

******* Just to fully connect the dots here, *this research was itself funded partially by NSF funding to TESS, a social sciences project!* I wouldn’t call that a conflict of interest, necessarily (*I* am not on the TESS grant, nor have I received any federal dollars through NSF, for any project—though heaven knows I’ve tried!), and I’m not claiming that these data by themselves, say, demonstrate the intrinsic valuableness of public funding for research. However, if you are inclined to see self-interest in this research line, well, I can only state that that wasn’t my conscious motivation, and that I want to be clear and up-front with my readers about the concerns that they might have. Nobody is free from implicit biases, and I want you to be able to scour my behavior for it.
