Neuro-Linguistic Programming (NLP) is a metadiscipline for modeling excellent human behavior elegantly. In fields such as education and health, NLP practitioners have developed extensive programs integrating NLP principles with field-specific best practices. One field, however, is conspicuously missing: science. This is easy to understand, as NLP principles and the principles of science are hard to combine. The most obvious difference concerns cause and effect: almost all scientists want science to be about cause and effect, whereas NLP practitioners consider cause and effect to be a distortion of reality.
Given this background, we took it upon ourselves to see whether it would be possible to apply NLP principles to science. This endeavour turned out to be more beneficial than we had hoped. For, as it turns out, statistics, the core of science, is better off being replaced with something better. Something that, as it happens, falls more into line with NLP. The current school of statistical thought is so dominant that most scientists don't even realize that what they consider to be statistics is in reality the frequentist school of statistics.
There is so much wrong with frequentism that almost all philosophers agree that the alternative way of doing statistics, Bayesianism, is the way to go. Philosophers hardly ever agree on anything, so this near-unanimity about statistics is remarkable. Unfortunately, frequentism is so deeply rooted in the scientific community that almost no scientist abandons it in favor of Bayesianism, often complaining that Bayesianism lacks the software tools for easy use. Which, when you think about it, doesn't sound all that scientific.
There are three major mistakes in frequentism:
- The definition of probability is circular: frequentism defines probability as the frequency of equally probable events. You immediately see the use of "probable" in the definition of probability, which makes the definition viciously circular.
- Frequentism treats probability as a natural kind, something that scientists discover in nature. This amounts to thinking that nature has mathematical properties, instead of properties that we can describe mathematically.
- Frequency often differs from what we intuitively take to be the right probability. Take coin flipping, for example: the observed frequency is almost never exactly 50%. Frequentism tries to solve this major error by claiming that the true frequency is found in the limit of an infinite number of events. But this argument fails too, because infinity is so vast that, in the case of coin flipping for instance, an infinite number of heads could come up before the infinite number of tails. The limiting frequency of tails would in that case be 0%, far removed from our intuition that the probability is 50%.
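The first part of that third point is easy to check for yourself. The sketch below (an illustrative simulation, not taken from the original text) flips a simulated fair coin and reports the observed frequency of heads at several sample sizes; it hovers around 50% but is essentially never exactly 50%:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def observed_frequency(n):
    """Flip a fair coin n times and return the observed frequency of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

for n in (10, 100, 1000, 10000):
    freq = observed_frequency(n)
    print(f"{n:>6} flips: observed frequency of heads = {freq:.4f}")
```

The frequency drifts closer to 0.5 as the sample grows, but at no finite sample size is there any guarantee it equals 0.5 exactly, which is what pushes frequentism toward its appeal to infinity.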
For these and other reasons, frequentism fails badly. Luckily, the alternative, Bayesianism, has none of these failures. In Bayesianism, probability is defined as the measure of your uncertainty. Probability doesn't exist in nature; it is only a measure of the bets we are willing to make about unknown events. Given that the information we have changes over time, our probability estimates also change over time.
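A minimal sketch of what "estimates change with information" means in practice, using the standard Beta-Binomial updating rule (the particular prior and observation counts here are illustrative, not from the original text):

```python
# A subjective degree of belief about a coin's bias, encoded as a Beta
# distribution and revised as new flips are observed.

def update_beta(alpha, beta, heads, tails):
    """Return the posterior Beta parameters after observing new flips."""
    return alpha + heads, beta + tails

def mean(alpha, beta):
    """Posterior mean: the current probability estimate for heads."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1): maximal uncertainty about the coin.
a, b = 1, 1
print(f"prior estimate: {mean(a, b):.3f}")   # 0.500

# Observe 7 heads and 3 tails; the estimate shifts toward the data.
a, b = update_beta(a, b, heads=7, tails=3)
print(f"after 7H/3T:    {mean(a, b):.3f}")   # 0.667

# Further evidence keeps revising the estimate; beliefs track information.
a, b = update_beta(a, b, heads=40, tails=50)
print(f"after 40H/50T:  {mean(a, b):.3f}")   # 0.471
```

Nothing here claims the coin "has" a probability; the numbers only summarize what bets the holder of this belief should be willing to make given the evidence so far.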
There are many forms of Bayesianism, but all of them have problems of their own except one: subjective Bayesianism. The only drawback of subjective Bayesianism is that it infuriates scientists. The reason for this is that it does away with the notion of science as objective knowledge and replaces that picture with the idea that there is only the scientific opinion of individual scientists. Philosophically speaking, it is the best form of statistics there is; almost all philosophers agree. Scientifically, it is making huge strides. The Stanford Encyclopedia of Philosophy says:
The combination of its precise formal apparatus and its novel pragmatic self-defeat test for justification makes Bayesian epistemology one of the most important developments in epistemology in the 20th century, and one of the most promising avenues for further progress in epistemology in the 21st century.
(http://plato.stanford.edu/entries/epistemology-bayesian/)
Subjective Bayesianism and NLP form a match made in heaven. Like NLP, subjective Bayesianism is subjective. Its core value is freedom, and there is no cause and effect in subjective Bayesianism. There are many other links between NLP and subjective Bayesianism that we explore in a book we are currently working on. Although subjectivism, freedom and the absence of cause and effect may not sound scientific, it is important to note that the freedom one has within subjective Bayesianism allows one to take into account all the statistical data that frequentism has produced so far. Yet it also allows one to deviate from those statistical results for whatever reason there may be. One may find, for instance, that the errors of frequentism limit the reliability of frequentist data sets.
For all these reasons, the proto-scientific research that we are doing uses subjective Bayesianism. As this offers a completely different perspective from frequentism, it would be an advantage to read up on how to interpret subjective Bayesian data sets.