I was just reminded in a talk that people (including me) often fail to apply modus tollens, i.e., they fail to infer that an antecedent is false given that the corresponding consequent is false. Here is an example:
If there is a circle, there is also a triangle. There is no triangle. Can you say anything about whether there is a circle?
According to the rules of propositional logic (modus tollens) you can infer the absence of the circle from the absence of the triangle. Many people, especially those who are not extensively trained in logic, tend to miss that. This effect has been known for a long time (see the Wason selection task for a different variant).
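The inference can be made mechanical. A minimal sketch in Python: enumerate all truth assignments for the two propositions and keep only those consistent with the premises (circle → triangle, and no triangle); the only surviving assignment has no circle.

```python
from itertools import product

# Keep every truth assignment for (circle, triangle) that is consistent
# with the premises: (circle -> triangle) and (not triangle).
# In propositional logic, "circle -> triangle" is "(not circle) or triangle".
consistent = [
    (circle, triangle)
    for circle, triangle in product([False, True], repeat=2)
    if ((not circle) or triangle) and (not triangle)
]

print(consistent)  # [(False, False)] — the circle must be absent
```

This is just modus tollens spelled out as a search over models: given the premises, no assignment with a circle remains.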
The example above made me think about what this effect means for how people reason about their environment. These experiments show that some people readily associate two things, but are wary of drawing conclusions from the absence of one of them. Those people therefore follow good statistical practice: absence of evidence is not evidence of absence. The point becomes clear with the help of Wikipedia’s example:
A baker never failed to put finished pies on her windowsill, so if there is no pie on the windowsill, then no finished pies exist?
We have learnt repeatedly in our lives that we cannot draw such a conclusion with certainty, because there may always be events which interfere with the process of putting pies on windowsills. For example, the baker may have had to leave the bakery due to an emergency after finishing the pies, but before putting them on the windowsill. It therefore seems to me that we are unconsciously aware that most associations we make are correlational and not causal. So we only apply modus tollens when we are sufficiently certain that the association we learnt is causal:
This glass is so fragile, if someone drops it, it will break. Later you see that the glass is not broken. Can you say something about whether someone dropped it?
Application of modus tollens is simple here (at least it is intuitive for me), because of our extensive experience with glasses and our acquired understanding that there is a causal relationship between dropping a glass and its breaking. I will now argue that the difference between applying modus tollens in the glass example and not applying it in the bakery example is due to an acquired preference to accept a relation as causal when the effect immediately follows its cause.
In my view, one of the biggest achievements of mankind is finding reliable relations between an increasing number of events and entities in the world. We use these relations to predict outcomes, and these predictions are the basis for choosing the actions by which we achieve our aims. We usually call finding these relations ‘learning’, and we typically learn by observing events that happen before us.
The most reliable relations we can learn about are causal relations. It is actually not easy to define causality formally, but intuitively one could say: an event A causes an event B iff B always follows A. The only difference from the if-then relationship (implication) of propositional logic is then the additional temporal aspect. However, it is tremendously hard to figure out whether a relation in the real world is truly causal, or whether it could be broken by an interfering event C, in which case B does not always follow A. It is pretty much impossible to verify that B really follows A under all (natural) conditions, so we are left with some uncertainty about whether A causes B. In particular, we know from experience that when there is a long time between events A and B, their relation tends to be brittle in the sense that many events could interfere with it (the bakery example). On the other hand, if B immediately follows A, there is very little time for other events to interfere, and we can be more certain about the relation between A and B (the glass example). I believe, like many other neuroscientists [1-4], that we (our brains) unconsciously represent the uncertainties over learnt relations, at least approximately.
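The effect of this uncertainty on modus tollens can be made quantitative with a small Bayesian sketch. Treat the implication as noisy: a reliability parameter gives the probability that the effect follows the cause, and all the numbers below (prior, base rate) are illustrative assumptions, not measured values. When the relation is deterministic, observing no effect rules out the cause exactly; as reliability drops (interfering events become possible), the same observation supports only a weak conclusion.

```python
def p_cause_given_no_effect(reliability, prior=0.5, base_rate=0.1):
    """P(A | not B) for a noisy implication A -> B, via Bayes' rule.

    reliability: P(B | A), how surely the effect follows the cause.
    prior:       P(A), prior belief that the cause occurred.
    base_rate:   P(B | not A), chance of the effect without the cause.
    All parameter values here are illustrative assumptions.
    """
    # Total probability of observing no effect.
    p_no_b = (1 - reliability) * prior + (1 - base_rate) * (1 - prior)
    # Posterior probability that the cause occurred anyway.
    return (1 - reliability) * prior / p_no_b

# Deterministic relation: no effect rules the cause out completely.
print(p_cause_given_no_effect(reliability=1.0))   # 0.0 — exact modus tollens
# Glass-like relation (nearly deterministic): almost rules it out.
print(p_cause_given_no_effect(reliability=0.99))  # ≈ 0.011
# Bakery-like relation (interfering events possible): much weaker conclusion.
print(p_cause_given_no_effect(reliability=0.7))   # 0.25
```

In this picture, classical modus tollens is the limiting case of reliability 1; the reluctance discussed above corresponds to assigning a reliability below 1 to the stated relation.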
Therefore, when we are reluctant to apply modus tollens, it is because we do not associate sufficient certainty with the suggested relation. This means that we are often quite sceptical (we believe that the relation is uncertain) when we are told:
If there is a circle, there is also a triangle.
In the sceptical interpretation of that sentence we apparently think of a single instance rather than of a material implication. Hence, could we make the statement stronger by making it more explicit that the relation is supposed to be universally true? Judge for yourself:
All circles occur together with triangles. There is no triangle. Can you say anything about whether there is a circle?
I argued that when people do not apply modus tollens, it is because they unconsciously understand the presented relation as a statistical relation which is uncertain. From my point of view, this makes a lot of sense, because the brain appears to routinely represent and process uncertain concepts [1-4]. I would also argue that people who do not apply modus tollens do not have a deficit in logical inference, because most of them will apply modus tollens when confronted with an appropriately phrased explanation and question. They just do not translate the language used into a representation of a certain relation. Regular students of maths and logic, on the other hand, have learnt to interpret the corresponding language as expressing a certain relation.
My argument here is based on an intuitive understanding of the example sentences driven by my own intuitions. Perhaps you can divide people into ‘statistically minded’ and ‘logically minded’ with respect to the effect in question, but I’m tempted to believe that the statistical mindset is the more common, natural one, because the brain has to cope with uncertainty on several levels anyway, and that the logical mindset is acquired on top of the statistical one. You may well have different intuitions about the above examples, and I really wonder whether the proportions of modus tollens application across people match mine. Somebody should do an experiment …
PS: It might be that I have just reformulated the ideas of Oaksford & Chater (see ch. 5 of  for a formalisation, experiments and discussion), but I have already spent too much time here to check this thoroughly.
[1] Knill, D. C. & Richards, W. (eds.) Perception as Bayesian Inference. Cambridge University Press, 1996. Google Books
[2] Doya, K.; Ishii, S.; Pouget, A. & Rao, R. P. N. (eds.) Bayesian Brain. MIT Press, 2006. Google Books
[3] Chater, N. & Oaksford, M. (eds.) The Probabilistic Mind: Prospects for Bayesian Cognitive Science. Oxford University Press, 2008. Google Books
[4] Trommershäuser, J.; Körding, K. & Landy, M. S. (eds.) Sensory Cue Integration. Oxford University Press, 2011. Google Books