Logical Fallacies, Informal

A formal Logical Fallacy is an error in the structure of the logic itself, and an argument containing one is always invalid. Claims such as “the mathematical constant Pi is 3” or “domestic cats are bigger than fully grown cows” are plainly wrong, and formal Logical Fallacies are generally just as easy to spot. What is much harder to spot are Informal Logical Fallacies, where the reasoning behind the logic is flawed, leading to erroneous conclusions. They are harder to catch because their conclusions are sometimes right and sometimes wrong.

When we are trying to determine what is true, we use a system of discussion in which facts, evidence and logic form the basis for reasons why something is true or false. It is easy to use poor reasoning to justify a position in such a discussion (formally, an argument: an exchange in which, in good faith, we try to determine what is true and what is false).

Some people are very prone to arguing in bad faith to further their goals. Such people often use informal logical fallacies to make their argument seem reasonable and logical, even though it is flawed.

We have gathered here the most commonly used Informal Logical Fallacies found in bad faith arguments: how to spot that you are using one, how to understand the flaw in the reasoning, and how to address or manage the bad faith argument.

Ad Hominem

When an argument is met not by revealing factual errors, revealing logical errors or offering workable alternative explanations, but instead with an attack against the person, this is an ad hominem (literally “to the person”).

Our Australian political debates are full of ad hominem fallacies. Instead of addressing the topic, the politician attacks their opponent.

A common example is this:

Argument: “Climate change is no longer a scientific question, it is a scientific fact due to the overwhelming evidence that supports it”.

Ad hominem fallacy: “You must be stupid to believe such trash”.

The intelligence of the person has nothing to do with the validity of the argument. Instead of attacking the argument, the respondent is attacking some aspect of the person. It makes just as much sense as saying “your point is invalid because you have blue eyes”.

Ad Ignorantiam

President Trump didn’t understand the complexity involved, thus no one knew – Logical Fallacy

From the Latin argumentum ad ignorantiam, literally “argument from ignorance”, this fallacy assumes the truth of a statement because there is no direct evidence against it. For example: the Flying Spaghetti Monster is real, because science can’t disprove it.

A better stance to start from is a version of the null hypothesis. That is, if there is no evidence for it, then assume it doesn’t exist until you find some evidence that maybe it does. Please note that evidence means more than “look – it moved”, but rather a repeatable experiment with conclusive data that is tested by multiple people under proper conditions.

If you feel that your statement might border on this logical fallacy, insert the Flying Spaghetti Monster or a Rainbow Pooping Unicorn and see if the same logic you used proves that Flying Spaghetti Monsters or Rainbow Pooping Unicorns exist.

Appeal to hypocrisy – Tu quoque

Tu Quoque is Latin for “you too” or “you also” and is an appeal to hypocrisy. There are two flavours of this logical fallacy: appealing to an error common to both sides of the discussion to excuse a mistake, and discrediting another’s position because they fail to act consistently with it.

The Pot Calling the Kettle Black

This is tantamount to a child in the playground excusing their bad behaviour with “X was doing it too!” Regardless of who else makes the error, it was still bad behaviour.

When a member of a discussion is caught using faulty logic or poor evidence, the perpetrator can either plead guilty, or may attempt to justify the error by accusing the accuser of making the same or a similar error. This is a specific ad hominem attack that attempts to shift the focus away from one’s own mistakes to target the mistakes of another. In one flavour of this, the other party has indeed made the error; in the other flavour, they have not. Either way, the fallacy is the same: instead of addressing the noted error, the defendant diverts attention to some other error, real or not. If you are called out on an error, the solution is to address the error. If you are on the receiving end of this fallacy, keep the focus on the error noted first, then address the accusation afterwards.

Dismissing the Position

In this form of the logical fallacy, the position is ignored because of a self-referential error in the argument. This is a special case of the Fallacy Fallacy, in that the fallacious argument becomes the focus instead of the position, which is dismissed. This is best demonstrated in the following example:

Person A makes criticism C.

Person A is also guilty of C.

Therefore, C is dismissed.

Person 1: Drinking is bad for your health

Person 2: But you’re drinking!

In this case, the premise is that drinking is bad for one’s health, which the second person dismisses because the arguer is currently drinking. But being guilty of participating in the activity or idea described does not make the claim about it wrong. I can say that bashing my head against a wall is bad for my head while bashing my head against a wall. That doesn’t mean my head is undamaged by the bashing.

Argument from Authority

An argument from authority has two forms. The most common is to state that because the arguer has a credential of some kind, the statement they make must be true. The alternative is to dismiss a statement as false because the person making it lacks a credential.

The truth of a statement does not depend on the credential of the person who makes it. For example, if a professor of physics asks their two-year-old to repeat a statement about gravity, neither the professor nor the two-year-old is necessarily telling a truth or a falsehood because of who they are. The statement itself must be judged true or false according to current scientific evidence.

In the above example, the professor of physics is far more likely to give an accurate statement about physics than about the current state of political support for science. It is tempting to laud his credentials when he makes a statement about science funding, as if he is an expert on that too. He isn’t.

When an expert witness is called, they must still be able to reference the source of their expert knowledge. This isn’t to say “I have a PhD in Physics”; it is to say “this fact is backed up by this evidence, found in these experiments, performed by these scientists”. An exception would be a commonly accepted idea in physics, such as the theory of gravity. Even so, they should be able to explain the methods of the experiments done to test the theory and be able to point to peer reviewed publications on the topic.

Frequently an expert in one branch of science is asked by the media to comment on a different branch. This is poor form: they aren’t qualified to be an expert in that field, and assuming their statements are accurate because of their qualification is illogical. It is like asking a meteorologist which tyres to put on a four wheel drive. After all, they are an expert, right? Wrong. A meteorologist may also happen to be a four wheel drive enthusiast, able to reference where to find the information, why these tyres are good or bad, and so forth. But you still wouldn’t say “John Smith, a meteorologist, recommends these tyres”.

By the same token, in the example above, the two-year-old’s statement isn’t necessarily wrong because it came from a two-year-old. The speaker of the information does not confer validity; the science behind the statement, which is independent of the person, does. That is, if there is no scientific evidence supporting the statement, then it has no credibility regardless of who makes it; and if there is scientific evidence supporting it, then it has credibility regardless of who makes it.

One hopes that a relevant expert in the field would know the topic better and not leap to faulty conclusions. Unfortunately that isn’t always true. Frequently the media will misunderstand the science involved in an experiment, not understanding the need for good methodology, peer review, the effect of sample size on the validity of results, or even the difference between reporting the results and speculating on what they might mean or where to go from here.

A good rule of thumb when reading a mass media write up of something in science (or an infographic) is to check for references to a well known scientific journal. If there isn’t one, take the article/picture with a pinch of salt. If there is a reference, check out the reference and see if what was written reflects the abstract, and then the actual article itself.

Argument from Final Consequences – The Teleological Argument

Teleology is the philosophical idea that events lead to an ultimate end and finality. To be final requires some form of destiny. To have destiny implies a plan, which requires the will of a designer or planner, generally seen as a creator or god. The error is to see the end result as requiring the initial conditions, rather than recognising that many different initial conditions can end in a similar result – that is, this philosophy has the events-and-consequence equation back to front. The philosophy has a number of other issues, but when it comes to the Argument from Final Consequences, this is the bit we are interested in.

If I bump the cup off the table, it will fall. Unimpeded, the cup will fall to the ground and smash. Each event has a logical and predictable next event, which results in a final consequence. Yet it doesn’t. There is no final consequence, because the smashed cup will now be swept up by someone and put in the bin, the bin will be taken out to the street curb, the waste disposal company will pick it up, and eventually someone may find the broken pieces of the cup in landfill many years later. And then something else will happen…

Final Consequences suggests that from the broken cup on the ground, one can determine that it was knocked off the table. That seems logical, and it may be true. Or someone may have misjudged the table and let the cup go at its side when putting it down. More importantly, teleology suggests that if the final destination of the broken pieces is the landfill, then when the broken pieces are found, the examiner can work out which table the cup was knocked off, because that is the only explanation that fits the destiny of the cup. Clearly that is farcical.

Teleological destiny precludes alternative chains of events leading to the specific circumstances that have resulted. Destiny requires a fixed outcome, which not only removes free will, but also implies that the whole universe is fixed in some way – that what happens next is not only the only way it could have gone, but can be predicted by some being with sufficient resources. We refer to such beings as gods. The implication is generally that the god chose the outcome and knew it was coming. A single fixed chain of events requires a fixed universe, since any non-fixed chain could interfere with the fixed chain, which creates a paradox.

Even if a god able to predict the future were real – and there is no evidence to support this – it is extremely arrogant for a human to assume they have the ability to understand the subtle causal chain of events that only a god could govern, and to backtrack the exact and specific events that led to this outcome.

Examples of Arguments from Final Consequences:

  • Humans exist on this world, so the world was created to support humans, which means the universe was created to support this world and thus also support humans.
  • Any woman who is raped was asking for it
  • My winning the lottery happened because I had a miserable life

In all of these, the outcome is being used to justify the course of actions that the arguer believes to be true, regardless of any presence or absence of evidence.

Argument from Personal Incredulity

This logical fallacy is based on the arguer’s ignorance or limitations leading to a fantastical solution. The phrase “I cannot conceive” or “I can’t imagine how” leads to a conclusion of “therefore [non-evidence-based solution] must be it” or “so no one can know”. This can also include denial of the evidence-supported explanation because the concept does not fit within the arguer’s acceptable paradigm, primarily due to their inability to understand the logic behind the standard scientific explanation.

Generally the people using this form of logical fallacy display significant arrogance in thinking that any solution beyond their grasp must be beyond the grasp of any human or group of humans, and thus must have a fantastical solution such as magic, aliens or mystical beings. A common example is the argument that there is no simple explanation for how humans came to be on this planet, that evolution is too complicated and doesn’t seem right, and that therefore we were seeded by aliens or gods, or created by some all powerful force. There are many scientists and lay people who do understand evolution and the mountains of evidence supporting this theory. The individual arguer’s inability to understand this well known phenomenon does not preclude others’ ability. Yet somehow that incredulity supports their own belief in a simple solution with zero evidence behind it.

There are many concepts that I have trouble grasping. Lately I’m working through the difference between my high school understanding of “matter” – having “mass” and “volume” and some kind of “solid boundary” between the inside of the particle and the rest of the universe – and what seems to be the reality of subatomic particles – where “mass” is merely a means of interaction, “volume” is a proximity of interactions of various forces, and there is no “inside”, so no “boundary”. I have the option of denying the work of thousands of physicists and their mountains of evidence simply because I can’t understand it, and going back to my immature high school beliefs; or I can accept that I don’t understand this, but someone does.

The logical fallacy version of this would be to state that simply because I can’t understand it, no one can, and those who claim they do are wrong because my simpler answer without evidence trumps it.

If I were to say this, I would be demonstrating significant arrogance.

Begging the Question

“Begging the Question”, as a logical fallacy, describes polluting the question with a built-in partial answer or a repeat of the initial statement. It comes from the Latin petitio principii – literally “assuming the initial point”. This assumption of the initial point bypasses error checking on that particular point, so possible answers to the question are as false as the assumption built into it.

There are two main types of Begging the Question. The first has a conclusion which depends on a faulty assumption. The classic question used to highlight this fallacy is “When did you stop beating your wife?” The implication is that wife beating has occurred, reinforced by asking when it stopped. Assuming the initial point – wife beating – any answer to this question relies on that assumption being right. If it is wrong, then any answer embeds an error, describing when occurrences stopped that never started. If the assumption is correct, then the answer might be correct.

I’ll use a bit of logical math at this point. A + Q = C. ‘A’ is an assumption, ‘Q’ is the question, ‘C’ is the conclusion created based on the assumption ‘A’. If ‘A’ is faulty, then ‘C’ is faulty.
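The A + Q = C relationship can be sketched in a few lines of Python. This is purely my own illustration (the function and its names are not from the text): it models the point that the answer to a loaded question cannot repair the assumption built into it.

```python
# Sketch of A + Q = C: the conclusion C can be no sounder than the
# assumption A baked into the question, whatever answer Q receives.
def conclusion_is_sound(assumption_is_true: bool, answer: str) -> bool:
    """Any answer to a loaded question inherits the question's assumption."""
    # The answer itself cannot repair a faulty assumption.
    return assumption_is_true

# "When did you stop beating your wife?" with assumption A = False:
print(conclusion_is_sound(False, "last year"))  # False: faulty A, faulty C
print(conclusion_is_sound(True, "last year"))   # True: C may now be correct
```

Whatever date is given as the answer, the soundness of the conclusion tracks the assumption alone.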

The second version of Begging the Question can be far more subtle. This is where the statement includes a redefinition of itself. “Sleep medicines are all those which induce a soporific effect.” Soporific is another word for “sleep”. So “Sleep medicines are defined as medicines which cause sleep.” No kidding.

Going back to the math: D = D, therefore D, where ‘D’ is a definition. This doesn’t actually tell us anything new beyond ‘D’, which we already knew. Yet this logical fallacy can often be used to imply a greater level of knowledge than actually exists in the statement.

These fallacies are frequently used in arguments to imply that the defendant is in a minority – for example, that the defendant is holding out, or is recognised as wrong. Dr Steven Novella was once asked by Dr Oz, “what are alternative medicine sceptics (termed ‘holdouts’) afraid of?” This suggests that people sceptical of alternative medicine are holding out against the truth of alternative medicines, implying great success with them. The assumption ‘A’ in this case is that alternative medicines work, and that the sceptical professionals are a minority of holdouts. This ignores the complete lack of any actual evidence of efficacy for the alternative medicine – that is, all double blind trials show no greater effect than chance or placebo.

By “holding out” and being “afraid”, the question begs an inadequacy in those asking for evidence that these ‘medicines’ have any measurable effect. Certainly if I were to pay money for a product I would want to know that it works, or at least know the range of its effectiveness. For example, if I purchase a car I would like to know that it turns on, moves forward, is in good mechanical repair, how fast it goes, how many people it can carry, how much fuel it uses and so on. I could ask a car salesman for this information and they would give me the same answer as the salesman in the next car yard for the same model – because it is known. If I were in doubt, they could show me the manual, specifications and so forth. If the car salesman were instead to ask me why I was so afraid of the efficacy of his car, what I was holding out for, and could I please purchase the car now… I wouldn’t pay a cent.

In the case of alternative medicines, two different ‘professionals’ will likely give you two different answers, because there is no data on these aspects. Even if they gave you the same answer, you couldn’t check where they got it from, because other than the un-tested claims of the manufacturer, there is no data on effectiveness.

A pharmaceutical medication – that is, a medication prescribed by a doctor – will have certification, testing and trials indicating its effect, side effects, effectiveness and so forth. Without these trials, the medication cannot be prescribed. Even if the trials are faked (and sometimes they are), follow-up trials, or a lack of efficacy in the field prompting re-trials, will result in these medications being taken off the shelves. This allows you to have a high degree of confidence that the medication prescribed will work as expected; and if you are an exception to the rule, the prescribing doctor will note the lack of effectiveness and put you on another drug. None of these steps generally happen in the ‘alternative’ medicine industry.

All of this gives you an idea of why Dr Oz’s question – assuming alternative medications to be effective, and thus Dr Novella to be ‘afraid’ or a ‘holdout’ – gives completely the wrong impression without putting up any evidence at all.

Mistaking Correlation with Causation

A causal relationship between two events is one where the first event causes the second to occur. A correlative relationship is not a causal relationship at all – it is two events coinciding in a way that can be mistaken for causal. People frequently mistake a noted correlation between two events for a causative relationship between them.

Relationships between two events come in four flavours:

  • No relationship at all = randomness / coincidence
      – For example, my eyes are hazel and the stranger I just passed is eating a sandwich
  • An apparent relationship, with no causation = correlation
      – For example, a survey of contents from stomach pumps at the local justice centre found the presence of carrots in 100% of inmates
  • A complex causal relationship, where causation is established but the exact mechanism is not = contributing factor
      – Levels of bowel cancer and the presence or absence of roughage in the diet
      – Increased levels of CO2 in the atmosphere and the increased temperature of the world
  • A standard causal relationship, where causation is not only established but the mechanism is known and understood = causal relationship
      – Placing a transparent vessel containing both hydrogen and oxygen in ultraviolet light forms water and heat… explosively
      – If I lose 6 litres of blood from my body, I will die

We humans like to see patterns in things. Patterns are the first part of predicting future events, which can allow us to change our behaviour now to alter the predicted outcome later. An accurate pattern is a useful tool. However if the pattern is false, changing our behaviour now will not have the desired outcome, so we should recognise this error and let the pattern go.

The temptation is to mistake a perceived pattern for a tool that can accurately predict the future with a deficit of evidence, or to hold onto such a mistaken tool in the face of suitable contradictory evidence.

In the example I gave regarding the presence of carrots in people incarcerated in correctional facilities, it would be a mistake to conclude that carrots lead to criminal activity, or that criminal activity leads to the presence of carrots in the stomach. Another example is to compare statistics on the beliefs criminals hold with criminal behaviour. In the USA, the vast majority of inmates state that they are Christian, while almost none state that they are Atheist. One could conclude from these two bits of data that Christianity is a contributing factor in criminal behaviour, or that being Atheist minimises the likelihood of criminal behaviour. This conclusion is clearly in error: criminal behaviour has its own causes, and your belief system, or lack of one, is irrelevant.

Another common error is to invent a mechanism for a correlation in order to support causation. For example, you may correlate your behaviour with the fullness of the moon. You feel odd or bizarre, and you noticed this also happened the last time the moon was full, just as it is this time. This seems to be a pattern correlating the two events. To explain the pattern you create a causative link between the moon and your behaviour, and explaining that causation requires a mechanism – so you suggest that the ionisation of the atmosphere increases due to the reflected light of the moon, or that the increased gravity during the full and new moon affects the water in your body, or some other mechanism. Perhaps one of these is correct, but probably not.

The error is not in speculating about the mechanism, but in placing confidence in the speculation without testing it – and in the correlation in the first place. The ionisation of the atmosphere due to reflected moonlight is negligible in the face of other factors such as solar storms; if you reacted to ionisation shifts, the change in levels due to sunlight would create massive mood swings compared to any shift from the reflection of light off the moon. As for gravity, you get a stronger effect from being near a mountain than from the combination of the sun and moon, so your mood would shift every time you got near a mountain – yet it doesn’t. Also, step back a bit – is the correlation between your mood and the moon real? Chart your mood for a year, then compare it to the calendar and see if there is a definite correlation with the phase of the moon. The odds are against such a correlation existing. If it doesn’t exist, your observed pattern is false. If it does, then perhaps there is a pattern to test – but not the whimsical causal chain you created.
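The mood-diary check above can be sketched in Python. This is a toy illustration: the mood data is invented and random by construction, so any correlation it finds is noise, and the 29-day full-moon flag is only a rough stand-in for the lunar cycle.

```python
import random
from statistics import mean, pstdev

# Log a daily mood score for a year, flag the (roughly) full-moon days,
# and measure the correlation between the two series.
random.seed(42)
days = 365
mood = [random.gauss(5, 1.5) for _ in range(days)]               # daily mood score
full_moon = [1 if day % 29 == 14 else 0 for day in range(days)]  # ~monthly flag

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

r = pearson(mood, full_moon)
print(f"correlation between mood and full moon: {r:+.3f}")  # typically near zero
```

With 365 independent data points, a genuine correlation would need to clear roughly ±0.1 before it is distinguishable from chance – which is exactly the kind of check a whimsical causal chain never receives.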

Of course another consideration is that perhaps your mood causes the full moon… yet your mood changes frequently while the timing of the full moon does not. And let us not get stuck in the various definitions of what constitutes a “full moon”.

The Logical Fallacy of mistaking correlation with causation is using a correlation or random coincidence to substantiate a conclusion in the mistaken belief that there is a causal relationship between the first event and the second.

Confusing Unexplained with Unexplainable

On the surface this logical fallacy seems very simple. The faulty proposition is that since we can’t currently explain the mechanism of an observed phenomenon, it is unexplainable – and since we know it happens, it must have a miraculous explanation. This also tends to rely on the assumption that the described phenomenon and explanation are correct rather than mistaken. The proposition denies the ability to learn more about an event, denying future scientists and explorers the chance to discover the mechanism.

As with previous logical fallacies, always check whether the basic assumptions are true. There is no point trying to find the mechanism of a phenomenon if the phenomenon is not as described. For example, the mechanism by which crystals help someone levitate is unknown… because crystals have never been demonstrated to help anyone levitate. Or the mechanism by which the quartz crystal of Guru Davi turned water into wine is unknown… because it wasn’t the crystal, nor was it water, nor was it wine. There is no point looking for the mechanism of a phenomenon that does not exist, or one that is misreported.

For those who are interested, the trick with the water into wine is to put phenolphthalein into the water first. Phenolphthalein is an indicator chemical that turns red/pink in the presence of basic solutions. Water is neutral, so at first there is no colour. The quartz crystal is coated in sodium carbonate; when it is put in the water, the sodium carbonate dissolves and turns the water into an alkaline (basic) solution, which turns the phenolphthalein red. Don’t drink it – it isn’t wine. Trying to figure out how the water was turned into wine assumes the liquid started as pure water and ended as wine. Both assumptions are faulty.

The second part of the fallacy is to assume that simply because an explanation has not yet been found for a currently observed phenomenon (one that is reported and measured accurately), it can never be explained – that it is unexplainable, ever. And since the phenomenon occurs, it must be an act of mystic beings, magic or aliens.

This gross assumption of unexplainability is foolish. Every time you learn a new skill, you are able to do things you could not do before. Every time you learn a new fact, you know something that you did not know before. Some of you reading this are just now learning about this logical fallacy. If you were incapable of change, of learning, you would still be as helpless and ignorant as a newborn baby. Clearly you can learn, grow and develop… and so can humanity and human knowledge. The things we know today about physics, chemistry, engineering and so on would absolutely awe an individual from a mere 100 years in the past, while they will be amusing to someone 100 years from now. To deny that humanity can learn more in the future, and one day find the mechanism for a phenomenon, is very limiting.

It is not impossible that a god, or magic, or aliens have had a direct part in any particular phenomenon. It just isn’t probable. Consider aliens. The logistics of travelling from another star system to here, using the science we know now, make it a significant venture. If they did so, why would they leave something odd behind for us to mull over instead of a clear statement of “we were here”? If aliens are able to travel between star systems instantly by super-scientific means, why aren’t we seeing more of them, instead of some vague, weird phenomena? The simpler explanation is that this is a perfectly ordinary occurrence that we just haven’t figured out yet – like children watching a stage magician.

It is very appealing to look for a fantastic explanation rather than accept that at this point we are simply ignorant. Personally I prefer to accept my ignorance, because then I can learn how it was done. If it is a mystic being, or aliens, or magic, then I probably can’t.

Errors in Analogy

There are two main ways to explain something: comparing and contrasting. You compare it to something that you know and note the similarities, or contrast it with something that you know and note the differences. An analogy is a mechanism for comparing to something that you do know; the word comes from the Greek analogia, meaning “proportion” or “similarity”. This logical fallacy corrupts the useful tool of the analogy and extends it down three possible faulty paths.

The left fork in the analogy path is taking the analogy too far. The right is to mistake the analogy for the event. The middle path leads to analogies that have nothing to do with the thing you are trying to discuss, primarily due to ignorance.

Taking the Analogy Too Far: An analogy begins as a useful tool for understanding an unfamiliar concept or idea by placing it in a familiar setting, which initially boosts the speed at which the idea can be absorbed. A good example is comparing gravity to a distortion in a rubber sheet. A large body (like a bowling ball) gives the two-dimensional sheet a third dimension, deviating the path of a marble rolling along the sheet into an orbit around the large body. Very quickly you see how a similar distortion in space caused by our star would bend an object travelling through the solar system away from a straight line and curve it into an orbit. Taking the analogy too far is to ask what happens when the object is so massive that it rips a hole in the rubber sheet, or to place all of the objects in our solar system in this model and watch them all sink into the central sun – which clearly doesn’t happen.

Mistaking the Analogy for the Event: The analogy is chosen for a conceptual similarity to the process you are trying to describe. Take the rubber sheet above: it shares the concept of a distortion of a surface (space). The error lies in thinking that space is a rubber sheet – giving space the properties of rubber, believing that large objects on the sheet of space protrude into some other space, or that we will find rubber-like material if we look at space the right way, and so forth. The analogous concept was the way objects in motion are deflected by the stretching of space – the rest is not the same.

False Analogy: Creating an analogy that is not actually similar to the process you are describing, because you don’t understand that process. A common one is to suggest that, as far as evolution is concerned, an organism evolving into a human by chance is the same as a tornado ripping through a junkyard and assembling a 747. The faulty understanding is to consider evolution complete chance, rather than an accumulation of working features mutated by chance, where the working features continue and the faulty ones fail, and where “working” isn’t the same as “wanted”. A better analogy is how water carves a gorge out of a mountain: the path is not predetermined, but strengths and weaknesses in the rock resist or succumb to the water, creating a beautiful work of art without the requirement for an artist.

Remember that the correct use of an analogy is a powerful teaching tool. The misuse of the tool can create problems.

Fallacy Fallacy

This logical fallacy is the dismissal of a correct outcome because the evidence or logic presented to support it was poorly constructed or ignorant. Frequently this fallacy is mistaken for using a logical fallacy incorrectly or too rigorously, but it has more to do with throwing out the baby with the bathwater. The baby (the result) is still correct and useful, even if it is surrounded by dirty water (poor evidence, poor logic, or a presenter who is not an expert in the topic). It is important to recognise the outcome despite these factors.

I may perform a series of poorly conceived experiments to provide evidence of the strength of gravity. I may even get an accurate result despite the poor construction of my experiments. The correct result should not be dismissed just because of my poor experiments; instead, my experiment should be dismissed and treated as lending no support to the accurate result. The Earth’s gravity is 9.8 m/s^2 at the surface, regardless of how poorly I experimented to get it. My poor experiment could also have ended up with 5 m/s^2 at the Earth’s surface, which is wrong and can be dismissed, or the experiment may have shown that I only get 9.8 m/s^2 if I stand on my left foot (the left-foot part can be dismissed). If this experiment were the only one testing the Earth’s gravity, then the accurate result of 9.8 m/s^2 could be questioned because it stands in isolation – the 9.8 and the 5 would have equal weighting. Experiments on Earth’s gravity are not uncommon though, so it can be quickly identified that 9.8 m/s^2 is accurate despite the poor experiment. The Logical Fallacy Fallacy comes in when I attempt to say that clearly Earth’s gravity is not 9.8 m/s^2 due to this poor experiment, despite the mountains of other experiments that demonstrate it is accurate.
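To make the baby-versus-bathwater point concrete, here is a small sketch (all measurement values are invented for illustration) showing that one poor experiment does not drag down the consensus of many good ones:

```python
# Hypothetical sketch: a single poor experiment should not override the
# consensus of many independent measurements. The numbers are invented.

def consensus(measurements):
    """Return the median of a list of measurements - robust to one bad result."""
    ordered = sorted(measurements)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Many well-run experiments cluster near 9.8 m/s^2; one poor one says 5.
results = [9.79, 9.81, 9.80, 9.82, 9.78, 5.0]

# The fallacy: "the 5.0 experiment was bad, therefore 9.8 is wrong."
# The correct move: discard the poor experiment, keep the consensus.
print(consensus(results))  # stays close to 9.8
```

The median is used here simply because it shrugs off a single outlier; the poor experiment neither supports nor undermines the well-established value.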

Similar to the poor experiment version, I may attempt to use logic to demonstrate that the sky is blue. The current leader of this country is an idiot, therefore the sky is blue in the daytime on a clear day. In this case, the logical fallacy used is a non-sequitur, attempting to link the idiocy of the current country’s leader to the colour of the sky. Whether the country’s leader is an idiot or not, whether the leadership were to change to someone else, or whether the country had no leadership at all has absolutely no bearing on the colour of the sky. Yet the sky on a clear day is blue. Dismissing this as false because the supporting premise was faulty is not accurate and is the second example of a Logical Fallacy Fallacy.

I am not an expert on climate science. Nor am I an expert on biology, geology and a host of other sciences used to determine the current theory of Global Warming as part of Climate Change. I have a slightly above average layman’s knowledge of this field, enough to know that the threat is real and that we should act. I can generally spot bogus anti-climate-change claims. Not being able to spell out the specific mechanisms and evidence supporting the conclusion of the vast majority of the world’s experts on this topic does not mean that they are wrong; it merely means that I am not an expert. This form of the Logical Fallacy Fallacy is again attempting to dump the baby (the mountains of evidence supporting global warming) because of the dirty water (my lack of specific expertise). It is a common tactic of the non-believer to find someone who is not an expert and dismiss the findings of the experts because the non-expert cannot adequately explain to them the science behind the evidence and conclusion.

False Continuum and False Dichotomy

I am going to lump two different Logical Fallacies together because they both have to do with how we class objects.

We humans like to categorise things. We like to parcel ideas up and put labels on them, and then we like to parcel labels up into categories and so forth. In reality every item in this universe is a discrete entity which shares some characteristics with at least one other thing, but will always have some kind of difference from those other things.

That is pretty heavy going, philosophically, but try to stick with me here. We get a range of items from a tree which we call apples. Each apple from the tree is different to each other apple, yet we call them apples because they are basically the same. We can compare these objects with similar items from two other trees. One of these similar items is close to the same, but has red skin instead of green. Another has orange skin and different insides. We class the first two as “apples”, but separate them based on their colour (or even their specific species, while still classing them as apples). The orange-coloured things get a separate name as well, and since we are simple we will just call them Oranges. All three fall into a category called “fruit” due to a similarity in form, where they are found and some other properties. Yet when comparing two seemingly equal things, they are still different, even if that difference can only be detected because they are in different places at the same time. It makes sense to call a collection of very similar items a single name such as “apples”, and a looser collection of things a looser name such as “fruit”.

The colours of the rainbow are on a spectrum of frequencies, from red up to violet. In between there is a range of other colours that we humans detect – red, orange, yellow, green, blue, indigo and violet. In this spectrum, which is a continuation from red to violet – a continuum – the in-between frequencies are also part of the “visible spectrum”. It doesn’t jump from one frequency, miss a few, and then go to the next set. The more we look between each band of light, the more bands of light we find. That is, there is no empty bit where there is nothing. So visible light is an excellent example of a continuum.

Fruit, on the other hand, is not. There is a gap between where you class a fruit as an apple and a fruit as an orange. Maybe in the past this was not so, but now there certainly is. When we try to look at a different fruit, such as a pear, and compare that to the apple, we find it hard to define exactly why one is a pear and one is an apple, yet we do. There is a fruit or two we could define as in-between, but not a continual spectrum of fruit. Fruit exists as discrete units on a spectrum, rather than a continuum on that spectrum.

The first Logical Fallacy, the False Continuum, has to do with attempting to class apples as pears, because the edge where one ends and the other begins is fuzzy and hard to define. Similarly one could try to state that the colour red is actually the colour yellow, because the point where red becomes orange is impossible to distinguish, and the point where orange becomes yellow is also impossible to distinguish. So red is orange, orange is yellow, therefore red is yellow. Clearly that is not the case – it is false. A classic example of this in mundane life is to look at cults and religions. It is hard to define the difference, so one could mistake the two for being the same thing. Yet they aren’t.

The other Logical Fallacy is to insert a gap in the spectrum that doesn’t exist. Going back to our light spectrum, the colour red is at one extreme and violet is at the other. The False Dichotomy implies that there is no colour in between. You can choose a colour – but there is only really red or violet, so pick either of these. I like to sometimes mess with people’s heads and add a third option that is equally distant from the other two, changing my one-dimensional spectrum into a two-dimensional triangular spectrum and giving them a false trichotomy. Maybe I’m just mean. (Feel free to add a fourth and go for the quadchotomy :-D)

The False Dichotomy asks you to choose between only two alternatives when there is a spectrum of choices to make. A subtle side trap of the false dichotomy is that it is very human to draw a direct line between the two options offered and only select along that spectrum, when really there are other options too. Perhaps you want a yellow and green striped colour selection. Neither of those colours was on offer, nor was combining them in an interesting way, and violet falls well outside the offered spectrum of choice. The False Dichotomy resists this selection, trying to push you into the binary choices offered. You either worship god or you fear god. Perhaps neither of these is true for me, yet the Logical Fallacy erroneously pushes me to select one of them.

In summary, the False Continuum and False Dichotomy Logical Fallacies have to do with misusing our classification systems. The point of categorisation is to simplify data processing, but that can be a trap if we oversimplify or misrepresent the information or the choices as a consequence of that classification.

Genetic or Origin Fallacy

The Genetic or Origin Fallacy is the false idea that original definitions constrain current definitions. This denies the evolution of an idea, constraining it to its infant definitions. Ideas grow and evolve, so referring back to the source of an idea to undermine the latest research and ideas is a false use of the concept.

This is a frequent logical fallacy that I have spotted historians attempting to use when looking at the psychology of historical figures. In history, the older a source is, the more accurate it is considered – hence the word “source”: its origin. That is great when trying to work out what happened many years ago, but not when trying to analyse something using the first conception of an idea. For example, attempting to understand the psychology of why people did what they did should use the most up-to-date concepts of psychology rather than the original Freudian psychoanalysis. Going back to the source of modern ideas does not give you a more accurate concept; it gives you a more ‘primitive’ concept that is now outdated and has since received many corrections.

Another attempt to use the Genetic or Origin Fallacy is to suggest that the modern conception of evolution is faulty because Charles Darwin made some errors in his original theories. Of course there were some errors, which have since been discovered and corrected for. If Darwin had conceived of the entirety of the concept and written an immutable copy of it in his book, “On the Origin of Species”, then the scientists who have been researching and refining the concept over the last hundred or so years have wasted their time. Just go back and read the book instead of continuing to research and refine the idea! This isn’t the case – so much has been learned from the launching pad of “On the Origin of Species”. The same is true in all schools of scientific knowledge.

Words also evolve. If you look at the etymology of words, you will find that many no longer mean what they used to. Decimate is one of my favourites. Etymologically speaking, decimate means “to kill 1 in 10”. So when your forces were decimated, you lost 1 out of every 10 soldiers. Nowadays the word is generally used to mean something closer to “massacre” – deliberate and brutal killing – implying complete or near-complete devastation. When people use “decimate” in a modern context, one must consider that they probably mean “massacre” rather than “1 in 10”. To hold a modern use of the word “decimate” to be in error because it used to mean something else is to wrongly constrain the modern word to its original – genetic – meaning.

Inconsistent Rules – Goose and the Gander

A moving decision boundary will create unpredictable results and poor decisions. A fixed decision boundary can be created by having a set of criteria defining when a decision falls on this side or that side of the boundary; these criteria define the consistency of the decision, and the quality of the criteria defines the quality of the decision. The fallacy is to have a poorly defined boundary, allowing favouritism and cherry picking, leading to a poor argument.

Let’s unpack that a little. As you approach the traffic lights, there comes a point where if the light were to turn amber, attempting to stop becomes dangerous. A practised driver learns where that boundary is and will consistently know when they have reached it. Outside influences can change that boundary – the weight of the vehicle, the speed of the vehicle, whether it is raining or dry, the density of traffic and so on. However given the same circumstances the decision point is the same – on this side of the line it is safe to stop, on the other it is safer to keep going. For most people, the boundary is intuitive – we learn through experience when we have hit that point.

Science is a philosophy that attempts to take intuition out of the equation. If you cannot explain how you know where the decision point is, then you cannot teach it, or test whether it is actually accurate. After all, you may have a very unsafe decision point for stopping or continuing… it would be nice to know where that is before finding out the tragic way. As such, the philosophy behind science requires questions to be asked and ideas to be tested, creating rules that define the boundary, and then testing whether those rules create a good boundary or a poor one. One could say that if the traffic light turns amber (from green on the way to red) and your vehicle is more than 50 metres before the solid white line, it is a good place to brake; closer than 50 metres, it is not. This rule can work very well for certain loads, road conditions and speeds, but very poorly for others. The rule is consistent – but not necessarily creating a good quality outcome. Thus the rule needs to be refined: if your weight is less than 2,000 kilograms and your speed is less than 80 kilometres per hour, then 50 metres may be the best spot for the decision boundary; however, for every additional 500 kilograms add another 10 metres, and for every additional 10 kilometres per hour add another 10 metres… and so on. Testing these parameters is quite easy, and following the set of rules leads to a consistently good quality outcome, which then allows the initial rule to be modified based on the test results.
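The refined stopping rule can be sketched as a simple function. The numbers (2,000 kg, 80 km/h, 50 m and the 10 m increments) are the illustrative figures from the paragraph above, not real road-safety data:

```python
# Sketch of the refined stopping rule. All thresholds are the article's
# illustrative numbers, not genuine road-safety figures.

def stopping_boundary(weight_kg, speed_kmh):
    """Distance (m) before the white line at which braking is still safe."""
    boundary = 50.0
    if weight_kg > 2000:
        boundary += 10.0 * ((weight_kg - 2000) / 500)   # +10 m per extra 500 kg
    if speed_kmh > 80:
        boundary += 10.0 * ((speed_kmh - 80) / 10)      # +10 m per extra 10 km/h
    return boundary

def should_brake(distance_to_line_m, weight_kg, speed_kmh):
    # Consistent rule: the same inputs always give the same decision.
    return distance_to_line_m >= stopping_boundary(weight_kg, speed_kmh)

print(should_brake(60, 1500, 70))   # light and slow, 60 m back: True (brake)
print(should_brake(60, 3000, 100))  # heavy and fast: False (keep going)
```

The point is not the particular numbers but the shape of the rule: it is explicit, so it can be taught, tested against real stopping distances, and refined when the tests show it is wrong.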

This example seems like a simple and quite obvious one to analyse. Another obvious set of rules is around medication. A medication should have a primary desired effect at a given dose for a certain percentage of the population with minimal negative side effects, otherwise it should not be prescribed by a medical practitioner. The creation of the pharmaceuticals should meet certain standards to ensure that the effect of the medication is what is predicted by the clinical trials, rather than being biased by some random element introduced to pack out the active ingredient to make the drug easier to administer. Seems sensible – you know what you are taking, you know what it does and you know how to measure its effectiveness. The rules governing effectiveness and manufacture are consistent, which should give a consistent outcome. These rules have been refined over time to ensure a high quality result. Combined, the vast majority of prescription medication is both consistent and good quality. The factors that interfere with this consistent quality stem far more from variations in humans than in the medications.

People take chemicals for medical conditions that are not prescribed. These often come in the form of supplements or “alternative medicines”. When medical professionals or governments ask for these to be controlled by the same set of rules, the alternative practitioners are up in arms, citing that this is unfair or oppressive. Independent tests of supplements have found that a concerning proportion do not contain the active ingredient listed on the container, or do not contain the dosage listed (either too high or too low). There is no clinical evidence that the alternative medication or supplement has any therapeutic effect, and no control on manufacturing methods, which has led to the inclusion of allergen fillers. If the rules used to govern these supplements and alternative medicines were applied to prescribed medications, people would die, professionals would be sued and there would be a government inquiry. So why are the rules for the goose not being applied to the gander?

As an aside, to quote Tim Minchin in Storm – “Do you know what they call alternative medicine that works?… Medicine.”

Back to the point in all this. Any time an argument is made that uses inconsistent rules, that argument has an inherent logical fallacy. Keep in mind that while consistent rules will give a consistent outcome for both the goose and the gander, if those rules are terrible, the result will also be terrible.

No True Scotsman

By adding the word “True” to the definition, any examples brought forth to refute it are discounted because they are not examples of the true definition. This means the definition cannot be tested, which negates the ability to discuss it.

The title comes from the classic example of “All Scotsmen are brave”, ‘X is a Scotsman and isn’t brave’, “Then X is not a True Scotsman”. Even if X was born and bred from a long traditional Scottish line, X is now defined as not a True Scotsman because they don’t fit the definition. Another common example is “Schizophrenia is a chronic disease”, ‘Y was diagnosed with schizophrenia but then got over it’, “then Y either went into remission or never had schizophrenia to begin with”. The counter example is discounted because it doesn’t fit the definition of “chronic” (pervasive and life long).

If I define gravity as an attractive force between any two masses, then that is what it is. The falsifiable test for this idea is to try to locate a mass that does not respond with attraction to another mass. If I find matter that does this, then clearly my definition of gravity is in error and needs to be modified, or the definition of “mass” is in error and needs to be modified. One does not simply exclude the counter example because it isn’t “True Gravity”.

In other words, if all counter examples are excluded, then the definition has no real meaning. “All True Beds have a monster under them”, ‘I found no monster under my bed’, “Well then, it isn’t a True Bed, is it?” This assertion can never be disproved, while the assertion that all beds have monsters under them can be. By framing it in the True Scotsman variant, the burden of proving the statement is removed because the evidence is discounted.

Moving the Goalposts / Shifting Sands

Originally from the British phrase, where a goalpost in a football-based sport is moved to advantage one side and disadvantage the other, this logical fallacy denotes the situation where, in the process of making an argument, the goal is shifted to make the argument easier or harder. The fallacy resides in moving the goal while the process is taking place, rather than defining the goal first and then following the arguments to the target – either to support it or to discredit it.

You may recall in the animated movie Robin Hood, by Walt Disney Productions, where Robin Hood (the fox) is disguised as a bird and attempting to shoot targets in a competition to win a kiss from Maid Marian (if not, go back and watch the movie – it’s awesome). His opponent, the Sheriff of Nottingham, shoots, and the target jumps up and gets in the way of the arrow, making a bullseye. This shifted goal post allowed a less competent archer to get a successful outcome. In this example, the person moving the target is incapacitated so that Robin must shoot with skill alone; had the person not been incapacitated, they could have shifted the goal out of the way of a successful shot, creating a miss.

As a logical fallacy, the target can be moved closer so that a poorly constructed experiment or argument succeeds where it shouldn’t (as for the Sheriff of Nottingham), or the target can be moved further away to make a normally successful experiment or argument fail. This tactic is often used by proponents of pseudoscience, allowing their science to seem legitimate by using close goal posts (that is, poor criteria), while creating overly strict and stringent criteria for legitimate science to make it look faulty. Properly applied science uses the same criteria for all rather than shifting the rules. Take a look at the Logical Fallacy: Inconsistent Rules – Goose and the Gander.

An example of this from the “Supplement Industry” is defining a product as a dietary supplement rather than a medicine, even though very medicine-like claims are made for it [http://archinte.jamanetwork.com/article.aspx?articleid=647749]. The advertising targets legitimate medication as having nasty side effects and being full of chemicals, while the supplement itself may not contain the advertised ingredient (1/3 in the USA), may contain contaminants (1/5 in the USA), and no one really knows what its side effects are, because there is no requirement to test them (according to the FDA in the USA) [http://theness.com/neurologicablog/index.php/whats-in-your-herbal-supplement]. The two are compared as if they followed the same criteria, yet the goal posts are shifted: medication must meet criteria such as testing, field trials, tracking and random checks, while supplements require none, so they can claim whatever they like without the requirement or burden of proof.

Non-Sequitur

Literally this means “it does not follow” – the conclusion is not connected to the premise. There are several types of disconnect: common, undistributed middle, affirming the consequent, denying the antecedent, affirming a disjunct and denying a conjunct. The commonality of all of these is that the argument A is not properly related to the conclusion C, thus C is not valid, or that assuming C cannot give validity to A.

Common Non-Sequitur

This is simply where one thing has nothing to do with the other.

“The apple in the fridge is red, so the bee cannot pollinate the flower.” This can commonly be found being spouted by ‘gurus’ who are pretending to have depth, or by people who really do not understand how things are connected and attempt to make a connection that should clearly not be made.

Affirming the Consequent

This non-sequitur assumes a bi-directionality of the consequences – what goes one way must go the other. On the surface this seems reasonable, yet when you consider how this looks in a Venn Diagram, where all of one circle is inside another, you can clearly see that this is not true.

Venn Diagram used to understand non-sequiturs

Here we have two categories, Red A and Green B. Let us say that A is animals with vertebrae (a backbone/spine), and that B is humans. All humans have vertebrae (that is, all of B is a member of A). Daisy has vertebrae, therefore Daisy is a human. This seems true and accurate, yet when we realise that Daisy is a cow, we see the mistake. Not all creatures with vertebrae are human.
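A tiny sketch of the Venn diagram idea, with illustrative sets standing in for the two circles:

```python
# Sketch of the Venn diagram: B (humans) sits entirely inside A (vertebrates),
# but membership in A does not imply membership in B. The sets are illustrative.

vertebrates = {"human", "cow", "dog", "eagle"}  # circle A
humans = {"human"}                              # circle B, inside A

assert humans <= vertebrates  # all of B is a member of A

daisy = "cow"
# Valid: Daisy has vertebrae, so Daisy is in A.
print(daisy in vertebrates)  # True
# The fallacy: "Daisy has vertebrae, therefore Daisy is human."
print(daisy in humans)       # False
```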

Denying the Antecedent

Another bi-directional error is denying the antecedent (the first part) and concluding that the consequent must therefore be false as well.

Consider our diagram above – Green B is now people who have brown skin and Red A is people who have brown eyes.

The argument is this “If I have brown skin, then I have brown eyes” – this may be quite true. The non-sequitur is to then say “I do not have brown skin, therefore I do not have brown eyes”. This does not follow, because having brown eyes does not require you to have brown skin, even though (in this example) having brown skin does require you to have brown eyes.

Another way to look at this is mathematically: if Alpha is true, then Beta is true. Alpha is not true, therefore Beta is not true. The second inference was never licensed by the first statement, so it is not necessarily true – Alpha can be false and Beta can still be true.
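This can be checked by enumerating the truth table of “if Alpha then Beta”. The implication only forbids the row where Alpha is true and Beta is false, so a false Alpha tells us nothing about Beta:

```python
# Enumerate the truth table for "if Alpha then Beta" (Alpha -> Beta).
# The implication forbids only the (Alpha=True, Beta=False) row; it says
# nothing about Beta when Alpha is false.

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

for alpha in (True, False):
    for beta in (True, False):
        print(alpha, beta, implies(alpha, beta))

# The (Alpha=False, Beta=True) row is allowed, so "not Alpha, therefore
# not Beta" does not follow.
assert implies(False, True)
```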

Affirming a Disjunct

This is an error in understanding the meaning of the word “or”. This error is usually cleared up in programming by the use of “or” which is inclusive, or “xor” which is exclusive.

Let us have two items A and B. If the statement is “A or B is true” and we are inclusive, then:

  • A is true and B is false = True
  • A is false and B is true = True
  • A is true and B is true = TRUE
  • A is false and B is false = False

Where as, if the “or” is exclusive (or Xor), then

  • A is true and B is false = True
  • A is false and B is true = True
  • A is true and B is true = FALSE
  • A is false and B is false = False

Note the difference in the third row. Affirming the Disjunct uses an exclusive form of “or” when an inclusive version is expected. This can also be seen as a false dichotomy, trying to force only one true answer.
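The two tables map directly onto code. In Python, `or` is the inclusive version and `^` (on booleans) is the exclusive version:

```python
# Inclusive "or" versus exclusive "xor", mirroring the two tables above.
# In Python, `or` is inclusive; `^` on booleans is exclusive.

for a in (True, False):
    for b in (True, False):
        inclusive = a or b
        exclusive = a ^ b
        print(a, b, inclusive, exclusive)

# The tables differ only on the both-true row:
assert (True or True) is True   # inclusive keeps it
assert (True ^ True) is False   # exclusive rejects it
```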

So let us get rid of the maths and go with English.

“I am nice or male – I am male therefore I am not nice” – the or is ‘inclusive’, but the argument is using an exclusive version.

vs

“I am either male or I am female – I am male therefore I am not female” – the ‘either’ is exclusive, so the statement is valid. I know that some people are defined as neither or both, yet on government forms in Australia, the logic is you are either male or female, and you must tick one, not both nor neither.

Denying a Conjunct

In this case, the premise is that statements A and B cannot both be true (not-both, rather than a strict either/or). The follow-up statement “A is false, therefore B must be true” is in error, because B can still be false – “not both true” still allows both to be false.

An example of this could be “It is not the case that I am in a lake and at home”. This can be useful. The follow-up statement “I am not in a lake, therefore I must be at home” is in error, because I may be in an aeroplane, or driving. The premise only works in one direction: if I am in a lake, I am definitely not at home, and if I am at home, I am definitely not in a lake. I cannot, however, conclude that because I am not in a lake I must be at home.
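A quick sketch of why the inference fails: “not (A and B)” only rules out the both-true case, so A being false leaves B free to be false too:

```python
# "Not (A and B)" rules out only the both-true case. A being false does
# not force B to be true - both can be false (not in a lake AND not at home).

def not_both(a, b):
    return not (a and b)

in_lake, at_home = False, False     # e.g. in an aeroplane
print(not_both(in_lake, at_home))   # the premise still holds: True
print(at_home)                      # ...yet I am not at home: False
```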

Post hoc and Post hoc ergo propter hoc

Post-hoc ergo propter hoc (Latin: literally “after this, therefore because of this”), often shortened to post-hoc, is an error in causality. Causality is a relationship between two or more events with a directional component, where one event causes the following event. This fallacy can run in reverse too, mistaking avoiding the first event for a valid way to avoid the second. It is reinforced by a poor grasp of which events are causal and which aren’t.

In a causal relationship, one event causes the next event. There is no wriggle room here: event A must be followed by event B, so long as an external factor does not interfere with the usual occurrence. The two events are separated in time, so cannot be simultaneous. A good example of a causal relationship is jumping and landing anywhere on Earth. If you jump (defined as lifting the entirety of your own body off the ground), then you will land, so long as this is done anywhere on Earth and nothing prevents your landing (such as a cable, someone catching you, upward-blowing air, etc.). The events must be sequential and proximal (connected in some way).

Post-hoc ergo propter hoc mistakes two sequential events that have no relationship for a causal relationship like the one demonstrated above.

Two very similar sequences can demonstrate the difference between the two:

  • I have diabetes. I take the prescribed dosage of insulin and my blood sugar level stabilises.
  • I have a headache. I take a homeopathic remedy and my headache goes away.

On the surface these two sequences look the same, Event A was followed by Event B, which ended with result R.

A + B -> R

If the first example did not have event B, then result R would not occur (aside from another specific intervention). In the second example, denying event B (the homoeopathic remedy) still ends in result R.

Eg 1 : A -/> R (Event A does not lead to result R with the absence of B)

Eg 2: A -> R (Event A still leads to result R with the absence of B)

Reversing post-hoc ergo propter hoc can lead to some strange thinking, such as having to turn the door handle three times when locking it to ensure the house is not broken into. After all, in all the time I have been doing that, the house has not been broken into… so it must work, right? Here the mistake is to erroneously link Event A to Result R, thus believing that removing A will avoid R.

Turning the lock once was followed by a break-in, thus turning it once causes a break-in. Since then I have turned the lock three times and I have never been broken into, so it must work, right? Wrong. There is no relationship between Event A and Result R, merely one of coincidence.

To avoid the post-hoc ergo propter hoc error, check whether any research has been done to concretely link the two phenomena, or to demonstrate that they are not linked. Be wary if the only evidence is “fringe” (where fringe looks like “scientist defies/’speaks out against’ the mainstream”, or “X with no scientific education is turning science on its head”), if it sits outside peer-reviewed articles, or if it relies on testimonials and anecdotes – follow the mainstream evidence instead. Remember folks, extraordinary claims require extraordinary amounts of evidence to support them.

Reductio ad absurdum

This fallacy translates from the Latin as “reduction to absurdity”, and comes from the Greek “eis atopon apagoge”, meaning “reduction to the impossible”. If you reduce an argument too far, it becomes absurd, or blatantly wrong. Clearly any argument can be rendered in a similar light, so using this technique to demonstrate that an argument is true or false is illogical. If an argument needs an absurd conclusion to prove its premise, the odds are the arguer has reduced the argument to absurdity.

There are three flavours of this logical fallacy.

Undeniable:

Consider these two arguments:

  • If the anvil had no weight, it would rise up and float away.
  • The Bible is the word of God, so it cannot be corrupted by man.

The first argument seems reasonable, as a massless anvil would be buoyant and would potentially float away. As it does not do so, the assertion that the anvil has mass must be true.

The second argument also seems reasonable: if the Christian Bible is the word of the Christian God, then it cannot be corrupted by man. Yet closer examination shows that this statement is actually quite in error. Firstly, there is no evidence that the Bible is the word of God, especially given all of the varying versions, interpretations, errors and contradictions found within it. This does not stop the argument being used, though.

Untenable result:

Consider these two arguments

  • Without rules, society would become chaos
  • Without religion, humans would have no morality

Both arguments seem similar and perhaps true. A society without rules would seem quite chaotic, yet when instances of this have occurred, the chaos is often brief before some kind of rule set is imposed by the people themselves. Even in the chaos, some sets of rules can be found. While not necessarily true in all cases, the first argument seems reasonable as a general rule of thumb.

The second argument appears to be basically the same thing, yet morality has been demonstrated to be independent of religion and belief systems. Examples of theists and atheists are available demonstrating both moral and immoral behaviour.

Proof by contradiction:

Consider this statement

  • There is no smallest positive rational number, because if there were, it could be divided by two to get a smaller one (taken from the Wikipedia example)

This argument relies on the inability to contradict the premise: if you can derive a contradiction, then the premise is false. The problem is that this relies on everything being either true or false, and ignores fuzzy logic and alternative measuring systems. When does a table become a stool, or a stool a table? Does it have to be one or the other, is hybridisation possible, or does function define the form?
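The Wikipedia example above can be written out as a short formal sketch (standard mathematical notation, not from the original source):

```latex
% Proof by contradiction: no smallest positive rational number exists
\begin{enumerate}
  \item Assume, for contradiction, that $r$ is the smallest positive rational number.
  \item Then $r/2$ is also rational, and $0 < r/2 < r$.
  \item So $r/2$ is a positive rational smaller than $r$, contradicting the assumption.
  \item Therefore no smallest positive rational number exists. $\blacksquare$
\end{enumerate}
```

Note that this form of reasoning is valid precisely because numbers are a domain where every statement of this kind really is either true or false.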

Common Examples from Both Sides:

Here are two common absurd examples created by over-reduction.

Evolution

Evolution is quite a large and complex concept. It is frequently reduced to a digestible level to give the basic idea to the lay person. Pretty much every lay person gets that basic idea:

A while ago humans came from primates, a long time ago we came from slime. Life changes. Changes that add strength survive, the others die out.

It’s a nice, tidy concept. Of course, the above is ridiculously simplified. People spend their entire careers studying and refining this concept in many different fields of study.

The fallacy comes in when this reduced concept is then used to attack and defend the complex idea. At this point, the reduced concept becomes an absurd argument.

Climate Change or Global Warming

Like Evolution, Global Warming (which is a more accurate description) is a big and complicated concept that is simplified for the lay person. Most lay people get this basic description:

The Earth is gradually retaining additional energy because anthropogenic (human-caused) and natural phenomena are increasing the so called “Greenhouse Gases” (blanket gases), often simplified to CO2 (Carbon Dioxide).

Again, it is a nice and tidy concept. It falls down when this reduced concept is used to attack and defend the full complex concept developed by thousands of scientists using millions, possibly billions, of data points to explain the never-before-run experiment of what happens when we humans change the ratio of energy coming into our planet from the sun to energy going back out, via pollutants. At this point, the reduced concept becomes an absurd argument.

Slippery Slope

When a chain of logic leads to an extreme scenario, it is referred to as a slippery slope: like slowly tipping an object over the edge of a slope, after which it slides all the way to the bottom under its own momentum. Frequently the chain of logic is tenuous and the outcome undesirable, leading to fear of the initial step or steps, with the result that no steps are taken. When the chain of logic is not tenuous it is a useful tool, but a tool that is rarely used.

When used erroneously, the slippery slope logic becomes a fallacy simply because the chain of logic from the first action to the last does not hold up. When used correctly, the slippery slope argument can demonstrate reasonably dire consequences, but only a few steps along. If the number of steps is too great (perhaps 5 or more), then the complexity of each interacting event becomes too great to give any reliability to the outcome. This is generally the error introduced by the slippery slope logical fallacy.

Additionally, the outcome is generally exaggerated and dire. This evokes fear or revulsion in the recipient of the argument, distorting their value system when it comes to evaluating the validity of each step in the slippery slope.

There are two main forms of the slippery slope logical fallacy: implied and explicit. With the implied form, the steps between the initial event and the dire outcome are implied rather than specifically identified. For example: drinking alcohol leads you to become an alcoholic, and therefore you will kill yourself and your friends in a horrible drunken driving accident, so don’t take that first drink! In some instances this may even turn out to be correct, yet in the vast majority of situations, not only do people not become alcoholics (in the medical sense), but also relatively few drunk people are involved in horrific car accidents, especially ones that kill their friends. The dire consequences evoke a fear of the outcome, prompting you to overvalue the tenuous chain of logic leading to it. Many people drink responsibly, invalidating this fear-driven slippery slope argument.

The explicit version of this logical fallacy lists all of the necessary steps in the chain, and each step can seem feasible, even the outcome can seem feasible, but the slippery slope mechanism overstates the likelihood of the outcome. In a mechanistic universe a long chain of actions and reactions can indeed lead to a predictable outcome (Laplace’s Demon), but in a chaotic, probability-driven universe (like ours), each next step becomes less and less predictable. An example of a highly predictable mechanistic model is a gravity chute releasing a billiard ball at a set angle and speed onto a billiard table such that the ball bounces off 3 walls and goes into a pocket. Works every time. Yet if we organise for 20 or 30 bounces, even this model begins to break down. More realistically speaking, crumple a soda can and place it on a table. Slowly push that soda can off the table until it falls. Now mark the place where it ends up (stops moving). Put the soda can back on the table and slowly push it off again. It won’t land where the last one did. Heck, it won’t even tip off the table the same way.

When dealing with life forms, the soda can example above is far more like reality than the billiard ball example. To translate choice into the billiard ball example, each step (each bounce of the ball on the billiard table) now has a number of angles it can bounce off into, depending on the chooser, which changes each subsequent bounce, such that the final pocket (if any) the billiard ball ends up in is far different from one run to the next.
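The billiard-ball breakdown can be sketched numerically. The toy model below (every number in it is invented for illustration) uses the standard “unfolding” trick for a ball bouncing inside a unit square: a tenth of a degree of launch error barely matters before the first bounce, but after dozens of bounces it shifts the ball by a noticeable fraction of the table.

```python
import math

def fold(p):
    """Fold an unfolded straight-line coordinate back onto a unit-wide
    table; each wrap past 1.0 represents a bounce off a wall."""
    p = p % 2.0
    return 2.0 - p if p > 1.0 else p

def position(angle, t, speed=1.0):
    # Straight-line motion in the unfolded plane, folded into the square.
    return (fold(speed * t * math.cos(angle)),
            fold(speed * t * math.sin(angle)))

angle = math.radians(45.0)
error = math.radians(0.1)          # a tenth of a degree of launch error

# Before the first bounce the two trajectories are nearly identical...
x1, y1 = position(angle, 0.5)
x2, y2 = position(angle + error, 0.5)
early_gap = math.hypot(x2 - x1, y2 - y1)

# ...but after ~30 table-lengths of travel (dozens of bounces) they diverge.
x1, y1 = position(angle, 30.0)
x2, y2 = position(angle + error, 30.0)
late_gap = math.hypot(x2 - x1, y2 - y1)
```

This is still an idealised, frictionless model; a real ball, or a chooser at each bounce, makes the divergence far worse.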

If chains of events are so tenuous, how does science allow us to make predictions at all? Shouldn’t we just give up? Doesn’t that just debunk the whole point of everything?

No. This is why – the study of nature is far more probabilistic than not. In general, things happen roughly the same, yet each event is different. Consider our soda can example above. The exact tipping point depends on a number of factors – the orientation of the crumpled soda can, the speed at which it is pushed, wind currents, the temperature of the soda can, the surface, the air and so on. If we know this, we then need to contend with the air currents on the way down, the rotational spin imparted on the crumpled soda can as it falls off the edge and so on. Eventually it will strike a spot on the ground and bounce a few times. Where it strikes the ground, how it strikes the ground, velocity of the soda can, angular momentum of the soda can and the type of material of the ground will all play a part in defining where and how the soda can will bounce. Each successive bounce will have a similar set of calculations. If we know enough… we can actually mostly work out where the soda can will land, much like the billiard balls. Yet each push off the edge will be a new problem – a unique problem. The repeated problem, though, has predictable components. The soda can will slide on the table, it will fall, it will land and bounce, it will end up somewhere.

So let’s do this experiment 100 times. We will discover that the final location of the crumpled soda can has a high value close to the landing zone, petering out to a low value far away from the landing zone (by value, we mean probability of landing there). There will be a maximum boundary that the crumpled soda can will not land past, sparsely populated with landings, while the closer-in section will have a higher population of landings. The initial impact zone will be a smaller version of the final landing zone, and the fall location at the edge of the table smaller again. Each step from the push to the final resting location becomes less predictable, but still has a level of predictability. On average, the location can be predicted, the likely outcome known. If we changed the table and floor to a set of stairs, the complexity goes up, but we can still work out the area that the crumpled soda can will rest in rather than the specific location. That area is highly predictable, even though the stairs may be 100 steps long. Often in science, the area of final rest is the outcome we are after, rather than the specific location this time.
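The repeated-drop experiment is easy to simulate. The sketch below is a deliberately crude model (the mean and spread per bounce are invented numbers, not measurements): each bounce nudges the can by a random amount, and over 100 drops the landing spots cluster near an average, with outliers becoming ever sparser further out.

```python
import random
import statistics

random.seed(42)  # fixed seed so the "experiment" is repeatable

def drop_can(bounces=5):
    """One drop: each bounce adds an independent random nudge to the
    can's horizontal travel (invented mean and spread per bounce)."""
    x = 0.0
    for _ in range(bounces):
        x += random.gauss(0.30, 0.15)
    return x

landings = [drop_can() for _ in range(100)]
centre = statistics.mean(landings)
spread = statistics.pstdev(landings)

# Most drops land near the centre; far landings are rare.
near_centre = sum(abs(x - centre) < 2 * spread for x in landings)
```

No single drop is predictable, yet the centre and spread of the 100 drops are: exactly the “area of final rest” described above.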

If we introduce people walking up the stairs, we can’t easily predict if a person will step on the can, kick it, pick it up and so on. The variables have made it too hard to predict.

Using the knowledge of Newtonian Physics (which is superseded by Einstein’s relativity, but is still pretty much good enough for nearby astronomy), satellites are launched from Earth, spun around gravity wells (such as moons and planets) and whizzed off with high precision all over the solar system. The calculations for this are on the one hand monstrous, yet on the other hand elegant. Most satellites also have a fudge factor built into them – a slight miss can be realigned en route and corrected for.

In your computer’s central processing unit (CPU) there is error correction, so when the high speed electrons misbehave and do some of that quantum stuff, the non-average result is detected and adjusted. This happens remarkably fast, all things considered, allowing you to read this from the web. If these errors weren’t corrected for, we would be back in the mechanical calculator days. We only need this error correction because of the precision we are demanding from our electronics. Simpler electronics don’t need error correction, because they rely less on the accuracy of the result.
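As a sketch of how that kind of correction works in principle, here is the classic textbook Hamming(7,4) scheme (an illustration only, not the actual circuitry of any particular CPU): three parity bits protect four data bits, and when any single bit flips, the recomputed parities point straight at the culprit.

```python
def encode(d1, d2, d3, d4):
    """Hamming(7,4): add three parity bits to four data bits.
    Codeword positions (1-7): p1 p2 d1 p3 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(word):
    """Recompute the parities; a non-zero syndrome is the (1-based)
    position of the flipped bit. Returns the corrected data bits."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                 # one bit "misbehaved": flip it back
        w[syndrome - 1] ^= 1
    return [w[2], w[4], w[5], w[6]]

codeword = encode(1, 0, 1, 1)
codeword[4] ^= 1                 # simulate one corrupted bit in transit
recovered = decode(codeword)     # the error is detected and adjusted
```

The point for the slippery slope discussion: precision systems don’t deny that individual steps misbehave, they budget for it.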

One may think, at this point, that the slippery slope kind of seems reasonable. If the arguer were to take enough factors into consideration in their prediction of results, then it would be. The problem is the arguer generally hasn’t, so they are making value-laden decisions about the next step which are faulty. It is like predicting that the crumpled soda can will land at a particular extreme position. It may do so once, but the odds of a repeat are very low, yet the arguer is suggesting that this will happen every time, or at least that the consequences are so high that we can’t risk pushing the can off the edge of the table. That is the error – the odds are overstated because the feared consequences are so extreme.

Special Pleading

When a faulty argument is noted and objected to, the insertion of fallacious arguments to shore up the failing argument, or the subtraction of counter evidence, is known as Special Pleading. It goes by several different names – Ad-hoc Reasoning, Stacking the Deck, Ignoring the Counter Evidence, Slanting and the One Sided Assessment. It can also be seen in its reverse – the God of the Gaps.

There are three variants of this logical fallacy.

Additive Special Pleading, or Ad-hoc Reasoning

Ad hoc literally translates from Latin as “for this”, and is used here in the sense of “tack this on as well”. These additional explanations are tacked on to the original explanation as afterthought add-ons to fix the fault in the original infrastructure of the argument. It fails.

For Example:

“Clairvoyance has been demonstrated with large audiences, where the psychic was able to demonstrate knowing things they couldn’t possibly know”

‘Yet when we test them in the laboratory, they score about the same as a random guess’

“That is because the laboratory environment is not conducive to psychic phenomena.”

The last statement adds a spurious explanation as to why the counter evidence is faulty. It generally has a flaw in the logic or evidence, or makes an untestable claim.

Subtractive Special Pleading

Stacking the Deck, Ignoring the Counter Evidence, Slanting and the One Sided Argument quite literally refers to ignoring evidence that counters the argument made, dismissing vital evidence rather than addressing it. In essence, subtracting from the discussion.

Example 1 –

“All the apples in my orchard are red”

‘What about the green apple tree over there?’

“Ignore that tree, because all of the apples in my orchard are red”.

Example 2 –

“Homeopathy clearly works because of all the testimonies telling us it is so”

‘Yet all of the double blind scientific tests have shown no effect beyond that which can be explained by a placebo – which is to say it is the same as taking nothing’

“Those scientists have an agenda, but here, read these testimonies”

The last statement ignores the supplied counter evidence, dismissing it out of hand and refers back to the first statement as if the objection had never happened, subtracting a point that should be addressed, but isn’t.

God of the Gaps

This is an argument style that demands further and further details be provided, such that there is no way to provide evidence for every gap in the pattern. I call this a reverse of Special Pleading because it asks the scientist to provide add-on evidence to make the rejected argument better – especially when the argument doesn’t need it. The God of the Gaps technique is to make the argument look bad by asking for evidence that is not needed, and then dismissing the argument when the evidence is not located, or when the scientist instead attempts to educate the individual on how the scientific method works.

Electromagnetic radiation is an excellent example of a spectrum. Look at the bit between red and orange light, and you find a frequency of light all on its own (orangey red). Halve the distance between that and your “red” light from earlier, and you will find a frequency between the two (reddish orange). Keep going and you will never stop until you reach the limit of your equipment – but never the limit of the spectrum.

Scientific evidence, if I can use such a misleading term, is often not about discovering a perfect spectrum like this. It is about finding enough data points that a pattern can be inferred. This inferred pattern is then used to make predictions, which are tested for and, if confirmed, add a level of validity to the inferred pattern, which can be represented as a line or spectrum. Enough confirmation and the pattern is assumed correct until sufficient evidence is found that indicates the pattern is faulty and the spectrum is actually somewhere else. This mechanism has shifted our notions of cosmology from flat Earth, to Terra Centric, to Helio Centric, to Galactic to the Big Bang, and from Stuff Falls ya know, to Newtonian Physics, to Einstein’s Relativity and who knows what is next. The new evidence doesn’t delete the previous knowledge, it adds to it, even when the old knowledge was specifically wrong, but generally right.

The God of the Gaps is an attempt to force a scientist to create Special Pleading for the gaps in the evidence that complete the spectrum. If the scientist fails to do so, then the arguer says “aha – it can’t be what you describe, so it must be mystic beings/magic/aliens!”

Mostly what this indicates is that the arguer does not understand the principle being debated, or the methods of science. Unfortunately people who tend to use this fallacious technique do not want to learn the methods of science, or the complexity behind the principle, they just want to justify their ignorance and false belief.

A poorly created idea should be tested to see if it stands up. When an idea evolves into a Theory, or is commonly recognised as a de facto truth by scientists in different fields, it is no longer an idea that should be “tested” by this methodology, especially by the lay person. There is quite an arrogance to the person who says “I have read a bit on the internet about this, and I think thousands of scientists, specialists and researchers along with all of their tests are wrong”. Now if that person is a specialist in the field and finds an exception to the Theory and tests for it, publish away, raise a great hue and cry, for others will also want to test your findings, and if you are right, you may be the author that updates the entire field. Nobel prizes are given to those who manage this. Science updates all of the time when new findings change how we perceive things.

Straw Man

This fallacy is more of an informal fallacy rather than a logical fallacy per se. It travels under several different names beyond the Straw Man: Man [Men] of Straw, Aunt Sally and the Scare Crow tactic. The basic idea is to replace the argument you cannot defeat with a distorted but similar one that you can, then defeat that and imply that the original argument is similarly defeated. To combat this, bring the discussion back to the original undistorted point made in context.

The fallacy uses this format:

Person A makes point P

Person B does not address point P, but substitutes it with substitute S

Person B defeats substitute S, thus arguing or implying that point P is similarly flawed.

Several forms of this fallacy exist. Each has to do with the manner of substitute S above.

Misrepresenting the Opponent’s Position

This is a straight substitution of the point P with a similar but flawed substitute S. If the substitute does not fit into the below categories, it is this version.

Alex asserts “When ‘acupuncture’ has been tested in the laboratory, it has been demonstrated to have no greater healing benefit compared to random needle stabbing.”

Bruce counters “Scientists undervalue the ability of Eastern Medicine because it doesn’t fit within the Western scientific understanding.”

Alex has made a statement, which Bruce has not responded to; instead, Bruce has made an alternate version of Alex’s statement which he feels more comfortable addressing. The implication is that Bruce has addressed Alex’s point, but actually he has not. Bruce is also begging the question, implying that Eastern Medicine is valid without proof.

Misquoting or Quoting Out of Context

This version uses the substitute based on the original argument from Person A, but takes it out of context, or subverts its meaning, creating an easier target (Straw Target) that supports the argument of Person B.

Amanda states “Consider the idea that the Christian Bible is a historical record: it states there was a massive world wide flood, yet geological records and palaeontological records do not find any evidence for a world wide flood.”

Bettina responds “The bible accurately describes the kings and Roman emperor at the time of Christ, clearly demonstrating that it is an accurate historical document.”

Here Amanda has asserted an inaccuracy in the Christian Bible versus geological and palaeontological evidence, clearly invalidating the claim that the entire Bible can be seen as an accurate historical record. Bettina has taken part of this assertion – historical accuracy – substituted her own version and proven that to her own satisfaction, which does not actually address the original point: the world wide flood has no evidence at all, thus the entire Christian Bible is not reliable as a historical record. Bettina has only demonstrated that some parts of the Christian Bible align with history, and thus implies that the rest of the text does too.

Poor Defender

In this version of the Straw Man, Person B uses an external person who failed to prove a point as the defender of Person A’s argument. Since this external person, the Straw Man, was unable to prove the idea, Person B asserts the idea is flawed and thus should be ignored. It is considered a Straw Man argument because Person B is not arguing the merits of Person A’s point, but is attacking a weakened person as substitute S instead.

Adam asserts “Information cannot be created or destroyed in this universe, it can only be transformed, thus black holes cannot consume matter and sequester it forever.”

Bianca responds with “Stephen Hawking stated that matter could not escape the event horizon of a black hole, and then later said that black holes don’t really exist. If he can’t work it out, then clearly the idea is faulty”.

Here Bianca is responding to Adam’s statement by appealing to Authority (Stephen Hawking), who changed his mind on the specific effect of a black hole on matter. By attacking Hawking, Bianca has not actually addressed the assertion from Adam.

Interestingly, Stephen Hawking didn’t say that black holes don’t exist; he said that the information (matter) going into a black hole is not deleted or sequestered as he originally proposed, but rather is scrambled and ejected in an alternate format from its original state. This was then misquoted by mass media to suggest that Hawking didn’t think black holes existed. So her attack on the Straw Man (Hawking) in this case demonstrates her ignorance rather than being a successful attack. Even if she were right, she still didn’t address the original assertion from Adam, but substituted her own preferred argument to defeat.

Misrepresenting the Person (Ad Hominem)

By misrepresenting Person A as part of a group, Person B is able to attack the group identity, thus attempting to discredit Person A without actually attacking their point. This is a variant of the Ad Hominem logical fallacy – an attack against the person instead of the point.

Alan asserts “A person’s gender has nothing to do with their chromosomes or sexuality, it has to do with a sense of identity and representation.”

Barbara responds with “Here you are, a middle class, middle aged, white male talking about gender and sexuality. Your entire group have no idea what it is like to be a minority.”

In this example, Barbara has defined Alan as part of a group and then attacked the group. Alan may be male, may be white, may be middle class and even middle aged, or Alan may actually be a cross gendered person who was originally female, or come from a Middle Eastern European background, may have minority group involvement and so on. It really doesn’t matter. The point is that Barbara didn’t respond to Alan’s point; she instead attacked the group that she purported Alan was a member of in order to dismiss what Alan said.

Over Simplification to the Point of Fallacy (Reductio ad absurdum)

Some concepts have a huge amount of complexity in them. A reduced version may be used to make a point, however a reduced version can be abused to make an erroneous point too. The Straw Man version of this is to ignore the point made by Person A and substitute it with a distorted reduced version of the concept and attack that instead, implying a fault in the original point.

Abigail states “Evolution is a continual process of variation, some successful, some not. If the variation has a net gain in adaptation, it will be the winning formula that will fill the niche, those variations that fail to give net gain will decrease in population and thus be replaced by those with more successful variations.”

Barney counters with “Evolution promotes the survival of the fittest, so if white people are fitter to survive than black people, then it isn’t genocide, it is evolution.”

On the surface, Barney’s point mimics Abigail’s, yet has a horrid conclusion. Closer examination shows that Barney has further reduced Abigail’s already reduced but useful concept of evolution, allowing him to make the concept seem implausible, yet he hasn’t actually addressed the point that Abigail makes regarding variation within the species and the mechanism of adaptation.

Straw Man arguments are very popular amongst politicians who are dodging insightful questions and arguments. Most journalists try to negate this by asking the original question again and trying to pin the politician down to answering that point. Of course, the politician is aware that the journalist has a time limit and can’t continue forever, so must inevitably concede the issue and move on. Similarly, if you find yourself victim to this, bring the discussion back to the main point and resist moving on until either the person addresses it, or you have established that they can’t, in which case your point may have legitimacy.

Conclusion

Take a look at the argument being made by Person B and compare it to the original point – is Person B actually addressing Person A’s point at all, or something similar but different? If it is a Straw Man fallacy, bring the discussion back to Person A’s point.

Tautology

The term “tautology” comes from the Greek “tauto”, meaning ‘same’, and “logos”, meaning ‘idea’. That is, it is the same idea. It is used in formal logic as a structure to demonstrate that one thing is the same as another thing by another name. That is, A = B. In rhetoric, a tautology is used to intimate that a definition is being supported when actually the same statement is made twice, just in a different combination, as if one can prove the other. This can be very handy when solving sums, but in an argument nothing is gained when a tautology is used to prove itself.

If our statement A is a formula such as 25+6x and statement B is a formula such as x+30, and we assert that A equals B, then we can manipulate the components and discover that x = 1. That is, x is 1 and 1 is x, and indeed from the beginning 25+6x is indeed x+30. The benefit of these seemingly circular statements is that we gain clarity in the definition of x, which was previously undefined.
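Worked through explicitly, the “manipulation of components” is just ordinary algebra:

```latex
\begin{align*}
25 + 6x &= x + 30 \\
6x - x  &= 30 - 25 \\
5x      &= 5 \\
x       &= 1
\end{align*}
```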

If we misuse this, then we would start with “25+6x is true because it is x+30, thus we can conclude that 25+6x is true”. We have established no new information, we have just repeated statements. Even though 25+6x looks different to x+30, because we know that one is the other, we have gained nothing.

Here is the paragraph above using sentences instead. “Jack has a fever, and his temperature is raised. Therefore Jack has a fever.” While ‘Jack has a fever’ and ‘his temperature is raised’ are written differently, the information is the same, so this tautology gives no further insight into Jack or his fever. To then go back and say ‘therefore Jack has a fever’ is to more or less say ‘the apple is red, and the apple is red, thus the apple is red’. Nothing gained.

Another error that can occur here is that because A = B, and B is A, we can erroneously assume that A must be true – after all, B is equal to it. That is like saying “Magic is a mysterious force that cannot be found by science, magic hasn’t been found by science, so it must be real”, or “My dog flies when no one looks at it, and so long as people don’t look at the dog it is able to fly, therefore my dog flies.” An error equalling the same error does not make the error true.

A tautology is related to a circular argument, in that there is a repetition of ideas instead of a progression of ideas. However, they are two very distinct logical fallacies.