Logical Fallacy #18: Special Pleading

When a faulty argument is noted and objected to, Special Pleading is the insertion of further fallacious arguments to shore up the failing argument, or the subtraction of counter-evidence. It goes by several different names – Ad-hoc Reasoning, Stacking the Deck, Ignoring the Counter-Evidence, Slanting and the One-Sided Assessment. It can also be seen in its reverse form – the God of the Gaps.

There are three variants of this logical fallacy:

Additive Special Pleading, or Ad-hoc Reasoning

Ad hoc literally translates from Latin as “to this”, but is used to mean “tack this on as well”. These additional explanations are tacked onto the original explanation as an afterthought, to fix the fault in the infrastructure of the argument. It fails.

Example –

“Clairvoyance has been demonstrated with large audiences, where the psychic was able to demonstrate knowing things they couldn’t possibly know”

‘Yet when we test them in the laboratory, they score about the same as a random guess’

“That is because the laboratory environment is not conducive to psychic phenomena.”

The last statement adds a spurious explanation as to why the counter-evidence is faulty. It generally has a flaw in its logic or evidence, or makes an untestable claim.

Subtractive Special Pleading

Stacking the Deck, Ignoring the Counter-Evidence, Slanting and the One-Sided Argument all refer to ignoring evidence that counters the argument made – dismissing vital evidence rather than addressing it. In essence, subtracting from the discussion.

Example 1 –

“All the apples in my orchard are red”

‘What about the green apple tree over there?’

“Ignore that tree, because all of the apples in my orchard are red”.

Example 2 –

“Homeopathy clearly works because of all the testimonies telling us it is so”

‘Yet all of the double blind scientific tests have shown no effect beyond that which can be explained by a placebo – which is to say it is the same as taking nothing’

“Those scientists have an agenda, but here, read these testimonies”

The last statement ignores the additional evidence, dismissing it out of hand, and refers back to the first statement as if the objection had never happened – subtracting a point that should be addressed but isn’t.

The God of the Gaps

This is an argument style that demands ever finer details be provided, such that there is no way to provide evidence for every gap in the pattern. I call this the reverse of Special Pleading because it asks the scientist to provide add-on evidence to improve the rejected argument – especially when the argument doesn’t need it. The God of the Gaps technique makes the argument look bad by demanding evidence that is not needed, then dismissing the argument when the evidence is not located, or when the scientist attempts to explain how the scientific method works.

Electromagnetic radiation is an excellent example of a spectrum. Look at the bit between red and orange light, and you find a frequency of light all on its own (orangey red). Halve the distance between that and your “red” light from earlier, and you will find a frequency between the two (reddish orange). Keep going and you will never stop until you reach the limit of your equipment – but never the limit of the spectrum.
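The halving game above can be sketched in a few lines of Python. The frequency values are rough, illustrative figures, not precise measurements:

```python
# Rough, illustrative frequencies in hertz - not precise physical values.
red = 430.0e12     # red light, roughly 430 THz
orange = 500.0e12  # orange light, roughly 500 THz

low, high = red, orange
for _ in range(10):
    mid = (low + high) / 2    # a frequency strictly between the two
    assert low < mid < high   # there is always another colour in between
    high = mid                # narrow the interval and repeat

print(high - low)  # the gap shrinks each time but never reaches zero
```

However many times you halve the interval, the assertion keeps holding: only the precision of your equipment, not the spectrum itself, ends the game.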

Scientific evidence, if I can use such a misleading term, is often not the discovery of a perfect spectrum like this. It is finding enough data points that a pattern can be inferred. This inferred pattern is then used to make predictions, which are tested; if confirmed, they add a level of validity to the inferred pattern, which can be represented as a line or spectrum. With enough confirmation, the pattern is assumed correct until sufficient evidence is found that indicates the pattern is faulty and the spectrum actually lies elsewhere. This mechanism has shifted our notions of cosmology from flat Earth, to geocentric, to heliocentric, to galactic, to the Big Bang; and from “stuff falls, ya know”, to Newtonian physics, to Einstein’s relativity – and who knows what is next. The new evidence doesn’t delete the previous knowledge, it adds to it, even when the old knowledge was specifically wrong but generally right.

The God of the Gaps is an attempt to force a scientist to create Special Pleading for the gaps in the evidence that complete the spectrum. If the scientist fails to do so, then the arguer says “aha – it can’t be what you describe, so it must be god/magic/aliens!”

Mostly what this indicates is that the arguer does not understand the principle being debated, or the methods of science. Unfortunately, people who use this fallacious technique tend not to want to learn the methods of science, or the complexity behind the principle; they just want to justify their ignorance and false belief.

A poorly created idea should be tested to see if it stands up. When an idea evolves into a Theory, or is commonly recognised as a de facto truth by scientists in different fields, it is no longer an idea that should be “tested” by this methodology, especially by the lay person. There is quite an arrogance to the person who says “I have read a bit on the internet about this, and I think thousands of scientists, specialists and researchers along with all of their tests are wrong”. Now if that person is a specialist in the field and finds an exception to the Theory and tests for it, publish away and raise a great hue and cry, for others will also want to test your findings, and if you are right, you may be the author who updates the entire field.

At some point, I will write more about this God of the Gaps and expand it.

Logical Fallacy #17: The Slippery Slope

Pierre-Simon Laplace – considered the father of deterministic science

When a chain of logic leads to an extreme scenario, it is referred to as a slippery slope – like slowly tipping an object off the edge of a plateau so that it slides to the bottom under its own momentum. Frequently the chain of logic is tenuous and the outcome undesirable, attaching fear to the initial step or steps such that they are not taken. When the chain of logic is not tenuous, it is a useful tool, but a tool that is rarely used.

When used erroneously, the slippery slope becomes a fallacy simply because the chain of logic from the first action to the last does not hold up. When used correctly, the slippery slope argument can legitimately demonstrate dire consequences, but only a few steps along. If the number of steps is too great (perhaps 5 or more), then the complexity of each interacting event becomes too great to give any reliability to the outcome. This is generally the error introduced by the slippery slope logical fallacy.
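The way reliability decays with chain length can be illustrated with a hypothetical sketch. The 0.8 per-step probability is an invented figure, and the steps are assumed independent:

```python
# Even if each step in a chained argument is individually likely,
# the probability of the whole chain drops quickly with its length.
def chain_probability(step_probability: float, steps: int) -> float:
    """Probability that every step in an independent chain of reasoning holds."""
    return step_probability ** steps

print(round(chain_probability(0.8, 2), 3))   # 0.64  - two steps: still likely
print(round(chain_probability(0.8, 5), 3))   # 0.328 - five steps: now unlikely
print(round(chain_probability(0.8, 10), 3))  # 0.107 - ten steps: improbable
```

Even generously likely steps compound into an improbable conclusion, which is why a five-or-more-step slope rarely supports the dire outcome it predicts.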

Additionally the outcome is generally exaggerated and dire. This evokes a fear or repulsion emotion in the recipient of the argument, distorting their value system when it comes to evaluating the validity of each step in the slippery slope.

There are two main forms of the slippery slope logical fallacy, implied and explicit. With the implied form, the steps between the initial event and the dire outcome are implied rather than explicitly stated. For example, drinking alcohol leads you to become an alcoholic and therefore you will kill yourself and your friends in a horrible drunken driving accident. In some instances this may even turn out to be correct, yet in the vast majority of situations, not only do people not become alcoholics (in the medical sense), but also relatively few drunk people are involved in horrific car accidents, especially ones that kill their friends. The dire consequences evoke a fear of the outcome, prompting you to overvalue the tenuous chain of logic leading to it.

The explicit version of this logical fallacy lists all of the necessary steps in the chain, and each step can seem feasible, even the outcome can seem feasible, but the slippery slope mechanism overstates the likelihood of the outcome. In a mechanistic universe a long chain of actions and reactions can indeed lead to a predictable outcome (Laplace’s Demon), but in a chaotic, probability-driven universe (like ours), the next steps become less and less predictable. An example of a highly predictable mechanistic model is a gravity chute releasing a billiard ball at a set angle and speed onto a billiard table such that the ball bounces off 3 walls and goes into a pocket. Works every time. Yet if we organise for 20 or 30 bounces, even this model begins to break down. More realistically, crumple a can and place it on a table. Slowly push that can off the table until it falls. Now mark the place where it ends up (stops moving). Put the can back on the table and slowly push it off again. It won’t land where the last one did. Heck, it won’t even tip off the table the same way.

When dealing with life forms, the can example above is far closer to reality than the billiard ball example. To translate choice into the billiard ball example, each step (each bounce of the ball on the table) now has a number of angles it can bounce off into depending on the chooser, changing each successive bounce such that the final pocket (if any) the ball ends up in differs from one run to the next.

If chains of events are so tenuous, how does science allow us to make predictions at all? Shouldn’t we just give up? Doesn’t that just debunk the whole point of everything?

No. This is why – the study of nature is far more probabilistic than not. In general, things happen roughly the same, yet each event is different. Consider our can example above. The exact tipping point depends on a number of factors – the orientation of the crumpled can, the speed at which it is pushed, wind currents, the temperature of the can, the surface, the air and so on. If we know this, we then need to contend with the air currents on the way down, the rotational spin imparted on the crumpled can as it falls off the edge and so on. Eventually it will strike a spot on the ground and bounce a few times. Where it strikes the ground, how it strikes the ground, velocity of the can, angular momentum of the can and the type of material of the ground will all play a part in defining where and how the can will bounce. Each successive bounce will have a similar set of calculations. If we know enough… we can actually mostly work out where the can will land, much like the billiard balls. Yet each push off the edge will be a new problem – a unique problem. The repeated problem, though, has predictable components. The can will slide on the table, it will fall, it will land and bounce, it will end up somewhere.

So let’s do this experiment 100 times. We will discover that the final location of the crumpled can has a high value close to the landing zone, petering out to a low value of locations far away from the landing zone (by value, we mean probability of landing). There will be a boundary of maximum landing that the crumpled can will not go past, but it will be sparsely populated with landings, while the closer-in section will have a higher population of landings. The initial impact zone will have a smaller version of the final landing zone. The fall location at the edge of the table will be smaller again. Each step from the push to the final resting location becomes less predictable, but still has a level of predictability. On average, the location can be predicted, the likely outcome known. If we changed the table and floor to a set of stairs, the complexity goes up, but we can still work out the area that the crumpled can will rest in rather than the specific location. That area is highly predictable, even though the stairs may be 100 steps long. Often in science, the area of final rest is the outcome we are after, rather than the specific location this time.
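The 100-drop experiment can be sketched as a toy Monte Carlo simulation. All the positions and kick sizes below are invented for illustration:

```python
import random

# Monte Carlo sketch of the crumpled-can experiment: each bounce adds a
# random horizontal kick, so individual runs differ but the distribution
# of resting places is predictable. All numbers here are made up.
random.seed(1)

def drop_can(bounces: int = 5) -> float:
    """Resting position: a nominal landing spot plus random bounce kicks."""
    position = 1.0                       # nominal first-impact point
    for _ in range(bounces):
        position += random.uniform(-0.3, 0.3)
    return position

rests = [drop_can() for _ in range(100)]
near = sum(1 for r in rests if abs(r - 1.0) < 0.4)
print(near)                     # most runs rest near the nominal spot...
print(min(rests), max(rests))   # ...and no run escapes a bounded region
```

No single run is predictable, yet the population of runs is: a dense cluster near the landing zone, thinning out towards a hard boundary the can never crosses.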

If we introduce people walking up the stairs, we can’t easily predict if a person will step on the can, kick it, pick it up and so on. The variables have made it too hard to predict.

Using the knowledge of Newtonian physics (which is superseded by Einstein’s relativity, but is still pretty much good enough for nearby astronomy), satellites are launched from Earth, slung around gravity wells (such as moons and planets) and whizzed off with high precision all over the solar system. The calculations for this are on the one hand monstrous, yet on the other hand elegant. Also, most satellites have a fudge factor built into them – a slight miss can be realigned en route and corrected for.

In your computer’s central processing unit (CPU) is error correction, so when the high-speed electrons misbehave and do some of that quantum stuff, the non-average result is detected and adjusted. This happens remarkably fast, all things considered, allowing you to read this from the web. If these errors weren’t corrected for, we would be back in the mechanical calculator days. We only need this error correction because of the precision we demand from our electronics. Simpler electronics don’t need error correction, because they rely less on the accuracy of the result.

One may think, at this point, that the slippery slope seems kind of reasonable. If the arguer took enough factors into consideration in predicting the result, it would be. The problem is the arguer generally hasn’t, so they are making value-laden decisions about each next step which are faulty. It is like predicting that the crumpled can will land at a particular extreme position. It may do so once, but the odds of a repeat are very low, yet the arguer suggests this will happen every time, or at least that the consequences are so dire that we can’t risk pushing the can off the edge of the table. That is the error – the odds are overstated because the feared consequences are so extreme.

Logical Fallacy #16: Reductio ad absurdum

This fallacy’s name translates from Latin as “reduction to absurdity”, and comes from the Greek “eis atopon apagoge”, meaning “reduction to the impossible”. If you reduce an argument too far, it becomes absurd, or blatantly wrong. Clearly any argument can be rendered in a similar light, so using this technique to demonstrate that an argument is true or false is illogical. If an argument relies on an absurd conclusion to prove its premise, the odds are the arguer has reduced the argument to absurdity.

There are three flavours of this logical fallacy.

– Undeniable:

Consider these two arguments:

* If the anvil had no weight, it would rise up and float away.

* The Bible is the word of God, so it cannot be corrupted by man.

The first argument seems reasonable, as a massless anvil would be buoyant and would potentially float away. As it does not do so, the assertion that the anvil has mass must be true.

The second argument also seems reasonable: if the Christian Bible is the word of the Christian God, then it cannot be corrupted by man. Yet closer examination shows that this statement is actually quite in error. Firstly, there is no evidence that the Bible is the word of God, especially given all of the varying versions, interpretations, errors and contradictions found in it. This does not stop the argument being used, though.

– Untenable result:

Consider these two arguments

* Without rules, society would become chaos

* Without religion, humans would have no morality

Both arguments seem similar and perhaps true. A society without rules would seem quite chaotic, yet when instances of this have occurred, the chaos is often brief before some kind of rule set is imposed by the people themselves. Even in the chaos, some sets of rules can be found. While not necessarily true in all cases, the first argument seems reasonable as a general rule of thumb.

The second argument appears to be basically the same thing, yet morality has been demonstrated to be irrelevant to religion and belief systems. Examples of theists and a-theists are available demonstrating both moral and amoral behaviour.

– Proof by contradiction:

Consider this statement

There is no smallest positive rational number, because if there were, it could be divided by two to get a smaller one (taken from the Wikipedia example)

This argument relies on the inability to contradict the premise. If you can contradict the argument, then the argument is false. The problem is that this relies on everything being true or false, and negates fuzzy logic and alternate measuring systems. When does a table become a stool, or a stool a table? Does it have to be one or the other, is hybridisation possible, or does function define the form?
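The halving argument from the Wikipedia example can be checked mechanically with exact rational arithmetic – a small illustrative sketch, not a formal proof:

```python
from fractions import Fraction

# Proof by contradiction, checked mechanically: whatever positive rational
# you propose as the "smallest", halving it produces a smaller positive
# rational, contradicting the proposal.
def smaller_positive_rational(candidate: Fraction) -> Fraction:
    """Return a positive rational strictly smaller than the candidate."""
    assert candidate > 0
    half = candidate / 2
    assert 0 < half < candidate   # the contradiction: something smaller exists
    return half

print(smaller_positive_rational(Fraction(1, 1_000_000)))  # 1/2000000
```

Whatever candidate you pass in, the inner assertion holds, which is exactly why no smallest positive rational can exist.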

Common Examples from Both Sides

Here are two common absurd examples created by over reduction.

Evolution

Evolution is quite a large and complex concept. It is frequently reduced to a digestible level to give the basic idea to the lay person. Pretty much every lay person gets that basic idea:

A while ago humans came from apes, a long time ago we came from slime. Life changes. Changes that add strength survive, the others die out

It’s a nice, tidy concept. Of course, the above is ridiculously simplified. People spend their entire careers studying and refining this concept in many different fields of study.

The fallacy comes in when this reduced concept is then used to attack and defend the complex idea. At this point, the reduced concept becomes an absurd argument.

Climate Change or Global Warming

Like Evolution, Global Warming (which is a more accurate description) is a big and complicated concept that is simplified for the lay person. Most lay people get this basic description:

The gradual additional energy the Earth is retaining due to additional anthropogenic (man caused) and natural phenomena increasing the so called “Green House Gases” (blanket gases), often simplified to CO2 (Carbon Dioxide).

Again, it is a nice and tidy concept. It falls down when this reduced concept is used to attack and defend the full complex concept developed by thousands of scientists using millions, possibly billions, of data points to explain the never-before-done experiment of what happens when we humans change the ratio of energy into our planet from the sun to energy out. At this point, the reduced concept becomes an absurd argument.

Logical Fallacy #15: Post-hoc ergo propter hoc

Post-hoc ergo propter hoc (often shortened to post-hoc) is an error in causality. Causality is a relationship of two or more events with a directional component, where one event causes the following event. This fallacy can be bidirectional, mistakenly treating avoidance of the first event as a valid way to avoid the second. It is reinforced by a poor grasp of which events are causal and which aren’t.

In a causal relationship, one event causes the next event. There is no wriggle room here: event A must be followed by event B so long as an external factor does not interfere with the usual occurrence. The two events are separated in time, so cannot be simultaneous. A good example of a causal relationship is jumping and landing anywhere on Earth. If you jump (defined as lifting the entirety of your own body off the ground), then you will land, so long as this is done anywhere on Earth and nothing prevents your landing (such as a cable, someone catching you, upwards-blowing air, etc). The events must be sequential and proximal (connected in some way).

Post-hoc ergo propter hoc mistakes two sequential events that have no relationship for a causal relationship like the one demonstrated above.

Two very similar sequences can demonstrate the difference between the two:

* I have diabetes. I take the prescribed dosage of insulin and my blood sugar level stabilises.

* I have a headache. I take a homeopathic remedy and my headache goes away.

On the surface these two sequences look the same, Event A was followed by Event B, which ended with result R.

A + B -> R

If the first example did not have event B, then result R would not occur (absent another specific intervention). In the second example, denying event B (the homeopathic remedy) still ends in result R.

Eg 1 : A -/> R (Event A does not lead to result R with the absence of B)

Eg 2: A -> R (Event A still leads to result R with the absence of B)
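The difference between the two sequences can be sketched with a toy simulation. The 90% recovery rate is invented; the point is only that result R arrives at the same rate with or without event B:

```python
import random

# Sketch: headaches usually resolve on their own, so "remedy then relief"
# sequences appear often even if the remedy does nothing (rates invented).
random.seed(0)

def headache_goes_away(took_remedy: bool) -> bool:
    """Recovery is 90% likely either way - the remedy argument is ignored."""
    return random.random() < 0.9

with_remedy = sum(headache_goes_away(True) for _ in range(1000))
without = sum(headache_goes_away(False) for _ in range(1000))
print(with_remedy, without)  # similar counts: B did not cause R
```

When B makes no difference to the rate of R, the A → R sequence alone tells you nothing about causation – which is exactly the post-hoc trap.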

Reversing post-hoc ergo propter hoc can lead to some strange thinking, such as having to turn the door handle three times when locking it to ensure the house is not broken into. After all, in all the time I have been doing that, the house has not been broken into… so it must work, right? Here the mistake is to erroneously link Event A to Result R, thus believing that removing A will avoid R.

Turning the lock once was followed by a break-in, thus turning it once causes a break-in. Since then I have turned the lock three times and I have never been broken into, so it must work, right? Wrong – there is no relationship between event A and result R, merely one of coincidence.

To avoid the post-hoc ergo propter hoc error, check to see whether any research has concretely linked the two phenomena, or demonstrated that they are not linked. If your only evidence is “fringe”, then it is probably wrong – follow the mainstream evidence (where fringe is defined as “scientist defies/‘speaks out against’ the mainstream”, or “X with no scientific education is turning science on its head”). Remember folks, extraordinary claims require extraordinary amounts of evidence to support them.

Logical Fallacy #14: Non-Sequitur

Literally this means “it does not follow”. This fallacy has a conclusion that is not connected to the premise. There are several types of disconnect – common, undistributed middle, affirming the consequent, denying the antecedent, affirming a disjunct and denying a conjunct. The commonality of all of these is that the argument A is not properly related to the conclusion B, thus B is not valid, or assuming B cannot give validity to A.

– Common Non-Sequitur

This is simply where one thing has nothing to do with the other.

“The apple in the fridge is red, so the bee cannot pollinate the flower.” This can commonly be found being spouted by ‘gurus’ who are pretending to have depth, or by people who really do not understand how things are connected and attempt to make a connection that clearly should not be made.

– Affirming the Consequent

This non-sequitur assumes a bi-directionality of consequences – what goes one way must go the other. On the surface this seems reasonable, yet when you consider how it looks in a Venn diagram, where all of one circle is inside another, you can clearly see that it is not true.

Venn diagram used to understand non-sequiturs

Here we have two categories, Red A and Green B. Let us say that A is animals with vertebrae, and B is humans. All humans have vertebrae (that is, all of B is a member of A). Daisy has vertebrae, therefore Daisy is a human. This seems true and accurate, yet when we realise that Daisy is a cow, we see the mistake. Not all creatures with vertebrae are human.
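Daisy’s case can be restated with sets – a minimal sketch in which the set members are, of course, made up:

```python
# Humans are a subset of vertebrates, but membership in the larger set
# does not imply membership in the smaller one.
vertebrates = {"Daisy", "Alice", "Bob"}   # Red A: animals with vertebrae
humans = {"Alice", "Bob"}                 # Green B: humans

assert humans <= vertebrates              # all of B sits inside A
assert "Daisy" in vertebrates             # Daisy has vertebrae...
assert "Daisy" not in humans              # ...but is not human (she is a cow)
print("affirming the consequent fails")
```

The subset relation only runs one way, which is exactly what the Venn diagram shows: being in A says nothing about being in B.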

 

– Denying the Antecedent

Another bi-directional error is denying the antecedent (first part) because the consequent is false.

Consider our diagram above – Green B is now people who have brown skin and Red A is people who have brown eyes.

The argument is this “If I have brown skin, then I have brown eyes” – this may be quite true. The non-sequitur is to then say “I do not have brown skin, therefore I do not have brown eyes”. This does not follow, because having brown eyes does not require you to have brown skin.

Another way to look at this is mathematically: If Alpha is true, then Beta is true. Beta is not true, therefore Alpha is not true. The second statement was not defined by the first statement, so it is not necessarily true – Beta can be false and Alpha can still be true.
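This can be verified exhaustively, since there are only four truth-value combinations to check:

```python
from itertools import product

# Exhaustive check: "Alpha implies Beta" does not license
# "not Alpha, therefore not Beta". We look for a counterexample row.
def implies(p: bool, q: bool) -> bool:
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

counterexamples = [
    (alpha, beta)
    for alpha, beta in product([True, False], repeat=2)
    if implies(alpha, beta) and not alpha and beta
]
print(counterexamples)  # [(False, True)] - Alpha false, Beta still true
```

One row survives the filter: the implication holds, Alpha is false, and Beta is true anyway – the exact situation that denying the antecedent wrongly rules out.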

– Affirming a Disjunct

This is an error in understanding the meaning of the word “or”. This error is usually cleared up in programming by the use of “or” which is inclusive, or “xor” which is exclusive.

Let us have two items A and B. If the statement is “A or B is true” and we are inclusive, then:

* A is true and B is false = True

* A is false and B is true = True

* A is true and B is true = True

* A is false and B is false = False

Whereas if the “or” is exclusive, then:

* A is true and B is false = True

* A is false and B is true = True

* A is true and B is true = FALSE

* A is false and B is false = False

Note the difference in the third row. By affirming the disjunct, one is using an exclusive form of the “or” when an inclusive version is expected. This can also be seen as a false dichotomy, trying to create only one true answer.
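In Python, for example, `or` is the inclusive form, while `!=` (or `^`) on two booleans behaves as an exclusive or, so both tables above can be printed directly:

```python
# Python's "or" is inclusive; "!=" on booleans is exclusive (xor).
rows = [(True, True), (True, False), (False, True), (False, False)]

for a, b in rows:
    inclusive = a or b
    exclusive = a != b    # xor: true only when exactly one operand is true
    print(a, b, inclusive, exclusive)

# The two tables differ only in the "both true" row:
assert (True or True) is True     # inclusive: still true
assert (True != True) is False    # exclusive: now false
```

Affirming the disjunct amounts to reading the `or` column as if it were the `!=` column.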

So let us get rid of the maths and go with English.

“I am nice or male – I am male therefore I am not nice” – the ‘or’ here is inclusive, but the argument is using an exclusive version.

vs

“I am either male or I am female  – I am male therefore I am not female” – the ‘either’ is exclusive, so the statement is valid. I know that some people are defined as neither or both, yet on government forms in Australia, the logic is you are either male or female, and you must tick one, not both nor neither.

– Denying a Conjunct.

In this case, the statement is that A and B cannot both be true – a “not and” (NAND) rather than an exclusive or. The follow-up statement “A is false, therefore B must be true” is in error, because the NAND only rules out the case where both are true: B can still be false.

An example of this could be “It is not the case that I am in a lake and at home”. This can be useful. The follow-up statement “I am not in a lake, therefore I must be at home” is in error, because I may be in an aeroplane, or driving. The statement is only useful when one of its parts is true: if I am in a lake, I am definitely not at home, and if I am at home, I am definitely not in a lake. I cannot, however, conclude that because I am not in a lake I must be at home.
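In logic terms, “A and B cannot both be true” is a NAND, and a two-line check shows why denying A proves nothing about B:

```python
def nand(a: bool, b: bool) -> bool:
    """True whenever A and B are not both true."""
    return not (a and b)

# A = "I am in a lake", B = "I am at home".
# Suppose A is false (not in a lake). The NAND statement still holds
# even when B is also false (not at home - perhaps in an aeroplane):
assert nand(False, False) is True
# So "A is false, therefore B is true" is not a valid inference.
print("denying a conjunct fails")
```

The only row a NAND forbids is both-true; every other combination, including both-false, is consistent with the statement.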

Scepticism vs Rejection – The use of critical thinking

To the lay person, scepticism and rejection can look very similar: you hear an idea that doesn’t agree with your world view and you reject it. The difference lies in the reasoning, justification and method of rejecting the idea. This difference is found in the application of critical thinking.

When something doesn’t align with our desires, the instant human reaction is rejection. This can be due to surprise or disgust, two of the universal basic emotions. Surprise is due to unexpected events creating a defensive reaction which may or may not be justified, such as a domestic cat racing out of the hedge in front of you, or a tiger doing the same. Disgust is the reaction due to a stimulus that crosses a taboo boundary – such as off food or a social negative such as passionately kissing a sibling. Either of these reactions generates an instant “no” rejection reaction in us.

This rejection is an un-contemplated stance. We just don’t like it. When pressed, we can create all sorts of reasons to justify our stance, especially when the stimulus is not as simple as a feline jumping out of a hedge, or off food in the microwave. When we come across an idea that engenders these reactions, our justifications can become quite elaborate. The problem with this justification is that it cherry-picks to confirm our reaction, rather than questioning whether our reaction is justified. If the surprise is a domestic cat, it is quite easy to laugh off our startle reaction. If it is a tiger, it is quite easy to justify the fear reaction. If it is a mid-sized dog… it could go either way. If the idea that startles you or creates a disgust reaction is akin to the ambiguous dog, then we tend to look for reasons that justify our fear, even if the ambiguous-dog idea turns out to be placid. We are looking for evidence to support our stance, not for evidence to test our stance.

The methodology of critical thinking is a sceptic’s toolkit for checking whether the rejection reaction was justified in the first place. If the evidence supports the idea that we want to reject, then the sceptic should accept that the reaction was wrong and the idea correct, or vice versa. This is not done blindly, but uses a series of tools and concepts known as “critical thinking”.

Here are some basic concepts of critical thinking:

* Start with the null hypothesis – There is no relationship between the two phenomena without good supportive evidence

* Everything can be wrong, even well documented and supported ideas – but the better developed and supported with good evidence an idea is, the greater the burden of proof is to overturn the established concept

* If it seems like an easy or pervasive solution, it probably isn’t

* Suspect words and concepts:

– If the thing is not referring to subatomic particles, then the inclusion of the word “quantum” in the description is suspect.

– “Natural” or “Organic” implying greater health. There is no “unnatural” in this universe – if it is here, it is natural. “Natural” often means less processed, and in some cases that is good, in others bad. While “Organic” in farming refers to a particular method of farming, there is no significant evidence supporting any improvement in health for the end user due to this method.

– “Scientifically proven” – The scientific method does not prove. It only disproves. It can certainly test a correlation for causative effect, and there may even be a paper written about it. Yet if there is no reference, then there is no value. Even if there is a reference, if you don’t check the paper and see what was actually written, then there is little value.

– “You won’t believe” – Quite right. I am not going to believe an advert that positions me to automatically defend my ability to believe what you are going to say next.

– “Scientists can’t explain” – means that either there is no evidence linking the two things together, or scientists don’t consider the connection important enough to prioritise funding away from other things, like cancer research, climate change and so on, to investigate if the connection is real. Remember, the default position is that the things are not connected without some kind of tangible evidence to suggest that they are.

– “Alternate medicine” – why is there alternate medicine? If it works, it is called medicine. I drove on a bridge the other day created using alternate engineering, and I drank water from a tap delivered using alternate plumbing… um, no.

– “Energies” – Which energy? There are specific types of energy that are known and measurable. If it isn’t one of these, feel free to substitute “runs on Unicorn vibes” for equal validity. I’d love it to run on unicorn vibes, but I would need some evidence that unicorns exist for me to put much trust in it.

– “X don’t want you to know” – why not? If what you were doing actually worked, X would be selling it. This is an appeal to distrust of large organisations, corporations and governments. Yet the people of every country are quick to point out how many blunders and errors their government makes – and that same government can nonetheless pull off a perfect conspiracy to block you from knowing one specific thing, just so these guys can spill their guts and make money? Um… no.

– “I’m a celebrity and I think X” – means there is insufficient evidence to back up the claim, so they are appealing to authority, even when that authority has no actual knowledge of the field. I asked a physicist the other day how I should best brush my teeth, and he said rotational brushing was optimal – yet isn’t the best expert to ask a dentist? Even if a dentist endorses a dental product, that doesn’t mean the product is, in fact, any better than any other product, or even works. It just means that professional was paid to say lines in an advert.

There are many more, but this is a quick introduction to critical thinking, right?

Here is the summary: anyone can reject an idea for whatever reason, but a sceptic doesn’t just reject every idea – they test it. The sceptic may even reject the rejection if the evidence supports the initial idea despite the initial reaction – because critical thinking examines the idea rather than assuming the reaction is correct.