A Trade-off Between Simplicity & Reality

Although it originated in medieval philosophy, the law of parsimony, famously known as Ockham’s Razor, remains practical in the modern age of AI and the pursuit of artificial general intelligence. Ockham’s Razor asks us to cut away everything unnecessary when trying to understand a system, so that complexity is reduced. This idea is part of what makes ML algorithms efficient. But the tool of parsimony has its limitations too, and these limitations can create a seemingly objective yet false picture of reality, and can be used to twist the facts.
People often miss the point of parsimony, which is to make a realistic attempt to check how and why our understanding of things differs from their real nature, and how we can close the gap between what we theorize, what we can test and what actually exists.
Context thus plays a very important role in every pursuit of knowledge, even in knowledge of the self. It is important to understand the boundary conditions of our knowledge. One should know where their beliefs (even if true) can be limited, challenged or difficult to prove. That is why parsimony, in any pursuit of knowledge, needs to be handled with utmost care while studying the real nature of things.

Medieval Idea of Ockham’s Razor For The Modern World

Craving for Simplicity

One of the key driving factors for humans is the urge to have a complete understanding of how things work. The reason behind it is to maximize the chances of survival. In modern times those odds have become better, and the urge to understand how things work has evolved into improving the quality of our survival, of our existence.

The key steps in the quest to understand everything that is there could be summarized as follows:

  1. There is some unexpected event which causes pain, suffering or loss (it can be the opposite too, like extremely favorable growth, happiness or high gains, but the human tendency is to be more concerned about uncertain losses).

Curiosity actually emerges from the urge to control everything that can be controlled, to identify what cannot be controlled, and then to work towards controlling the uncontrollable by understanding it in depth.

This is how we try to assign meaning to life, our very existence.  

  2. Then we try to observe similar events associated with such experiences and record them. We try to recreate them until we have clarity on the factors responsible for such events. We repeat the experiments until we have a proper theoretical understanding or a concrete reasoning behind them.
  3. The key factor for that reasoning to be accepted as practical is its consistency with other unconnected or remotely connected events. There is some “universality” in that reason or theory.

This is roughly how we try to understand existence. If one asks why we are always on this quest of understanding existence, the answers are diverse.

The simple answer, I think, is that our brain prefers simplicity so that it can spend the saved energy on maximizing its chances of survival. Our brain hates complexity, because once complexity is accepted, uncertainty has to be accepted too, and then the brain would start investing its energy in things which may never materialize but could, because of their non-zero probability.

Our brain craves certainty of survival.

This preference of the brain for simplicity might not reflect the nature of the reality in which it exists and tries to survive, but if it maximizes the chances of its existence, it is pretty much the best strategy.

In epistemology, the branch of philosophy concerned with the theory of knowledge, this trait is investigated in depth. We will look at one dimension of this thought, popularly known as the law of parsimony and even more famously as Ockham’s Razor.

William of Ockham and Ockham’s Razor

William of Ockham was an important medieval philosopher and theologian who brought the law of parsimony into focus, although the idea had already existed since Aristotle.

Aristotle believed that nature always works in the most efficient ways possible, and thus the explanations for events in nature ought to be efficient ones too.

Although medieval, Ockham’s Razor remains a crucial idea in the age of Artificial Intelligence and Machine Learning.

Ockham’s Razor emerges from his writing called “Summa Totius Logicae”, exactly as:

“Pluralitas non est ponenda sine necessitate” meaning “Plurality should not be posited without necessity”.

In modern science and philosophy, the idea “simply” goes like this:

“Do not mix unnecessary things”

OR

“All things being equal, the simplest solution is the best.”

Consequences of Ockham’s Razor

The principle of parsimony (lex parsimoniae in Latin), and thereby Ockham’s Razor, helps us not to complicate things when we are investigating them. It is used as a rule of thumb, a heuristic, to generate theories with better predictability. The moment we say that the preference should be for ‘the simpler theory with better predictability’ is the moment when people most often misinterpret Ockham’s Razor. The razor implies chopping off everything unnecessary which, if left in place, would add complexity and thereby compromise predictability. We will see how Ockham’s Razor works both positively and negatively when we try to understand the things around us.

Good consequences:

Search for the theory of everything

Aristotle’s belief that nature always chooses the most efficient route to decide the fate of anything reinforced the idea that theories explaining nature are best when they involve the fewest possible variables.

Einstein’s theory of relativity and the equation connecting energy to mass are the best examples of this. An elegant equation barely an inch long encompasses some of the biggest secrets of the universe.
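Presumably the inch-long equation in question is the mass–energy relation:

```latex
E = mc^{2} % E: energy, m: (rest) mass, c: speed of light in vacuum
```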

The theory of relativity is elegant in that it covers Newton’s understanding of motion and gravity and furthermore extends it to the understanding of black holes, where Newton’s theory becomes limited.

Quantum mechanics explains everything that an atom can create. It also explains why the earlier models of the atom were conceived in those particular ways (the atom as a solid sphere, as a plum pudding, or as a nucleus at the center with electrons in their orbits).

Quantum mechanics is thus the most efficient way to explain what we observed and why we interpreted those observations in a particular way. Please note that the goal is not to falsify something or prove something wrong; the goal of knowledge, of science, is to understand why we theorized something the wrong way and why it does not align with the reality we are trying to observe and understand.

Efficient Machine Learning Models – Generalization Error

In the age of AI, the efficiency of Machine Learning algorithms is a crucial factor in deciding the investments made to evolve them further. The key goal of any Machine Learning algorithm is to create a mathematical equation (or layers of mathematical equations) which understands the data provided, makes sense of it and then predicts outcomes based on that understanding.

This sounds simple in theory, but the real-life data one provides to ML algorithms is filled with a variety of noise – unwanted, unconnected, irrelevant data points.

If the ML algorithm tried to fit the noise too, it would add too many variables to its mathematical equations. The model would then fit each and every data point, but at the same time it would lose the confidence to predict outcomes, because the noise is not really connected to the system one is trying to describe.

That is why a complex ML model fitting all the data points (R² = 1) is only a seemingly ideal situation – ‘ideal’ here being practically meaningless, because the model has been exposed to a very limited dataset. A genuinely good ML model has a “generalized” idea of the data points on which it was not trained. That is, the model understands what is happening in the dataset so effectively, with the least number of equations, that it can also anticipate what could happen outside of its training dataset (Q² – the algorithm’s ability to predict unseen data – should be maximized). The L1 and L2 regularization techniques used in ML are examples of this. The model is then not just proportionally interpolating the points in between; it has its own mathematical justification for deciding whether and how aggressively to interpolate, in order to predict a realistic outcome.
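As a minimal sketch of that trade-off (assuming scikit-learn and a synthetic noisy dataset, purely for illustration): an unregularized high-degree polynomial fits the training points almost perfectly but generalizes poorly, while the same model with an L2 (Ridge) penalty accepts a slightly worse training fit in exchange for better prediction of unseen points.

```python
# Minimal sketch of the parsimony trade-off on synthetic noisy data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.3, 40)  # signal + noise

# Complex, unregularized model: free to chase the noise.
overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
# Same complexity, but an L2 penalty discourages unnecessary terms.
parsimonious = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1e-2))

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("unregularized", overfit), ("ridge (L2)", parsimonious)]:
    model.fit(X, y)
    train_r2 = model.score(X, y)                          # fit on data it has seen
    cv_r2 = cross_val_score(model, X, y, cv=cv).mean()    # proxy for unseen data
    print(f"{name:14s} train R2 = {train_r2:.3f}   cross-validated R2 = {cv_r2:.3f}")
```

In runs of this kind, the unregularized pipeline tends to score noticeably better on the data it has seen than on the held-out folds, while the penalized one narrows that gap – which is the parsimony argument compressed into one small experiment.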

Ockham’s Razor thus proves to be important in the age of AI for selecting efficient algorithms; efficient algorithms ensure efficient use of power and resources, and thereby of the investments.

Parsimony in Psychology – Morgan’s Canon

In very simple words, I would use three words to explain what this means – “Life of Pi”.

The movie Life of Pi has a moment when Pi’s father tells him that the emotions Pi sees in the eyes of the tiger, Richard Parker, are merely a reflection of how Pi imagines the tiger must be feeling – hungry, in that specific case.

In animal psychology (comparative psychology) research, Morgan’s Canon asks scientists not to over-attribute human qualities to animals without any concrete basis.

“In no case is an animal activity to be interpreted in terms of higher psychological processes if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development.”

The scene from Life of Pi strongly resonates with Morgan’s Canon.

There is a reason why Morgan established this idea. We humans have a tendency to see a human form in everything, even in things that are not human – this is anthropomorphism. While studying animals, these anthropomorphic tendencies would mislead each and every study, because other animals and humans share many things in common. Unless there is strong evidence to justify human-like intelligent behavior, the simplest explanation should be selected to explain the behavior of the animal in such psychological studies.

These are some of the examples where Ockham’s Razor proves to be very valuable.

Bad Consequences (limitations of Ockham’s Razor)

There is another side to the simplification of things. We will now see how people misinterpret the principle of parsimony, and thereby Ockham’s Razor.

The universe might prefer complexity in order to exist

In the pursuit of the theory of everything, Einstein himself was troubled: how could God play dice? How can one bridge the gap that exists between the theory of relativity and quantum mechanics? One explains the heavenly objects, and the other explains what lies at the very bottom of the particles which make the universe exist.

One realizes that there is more than what the current theories use which needs to be considered to explain reality in a better way.

One reason why Einstein was a genius of all time is that he knew something was missing in his theory. He was not ashamed of the complexity the theory of everything might carry. Even while speaking about his elegant theory of relativity, Einstein held the opinion that it might not be the final word.

Artificial General Intelligence (AGI)

Those who actually work in the field of AI will tell you how difficult it is to create Artificial General Intelligence (AGI). Even though we have some of the greatest chatbots, AI assistants and AI agents, they are experts at executing specific tasks only. They can easily become biased, and they can be fooled and bypassed easily.

The key reasons behind these shortcomings are many: AI tools perform best when they are designed for specific tasks; they lack common sense of the kind humans have; they lack the emotional dimension in decision making (one of the important aspects of how humans generalize their understanding of their surroundings); and they cannot directly build bridges between their algorithms unless enough data is provided. AI does not have the intuition which humans have developed over thousands of years of natural evolution.

It is also important to understand how greatly we underestimate the computation and decision-making capability of our brains, and how much power it takes to replicate the same in machines.

So maybe complexity is a prerequisite for AGI, and hence the enormous amount of resources that will be required to achieve it.

Human-like intelligence in Animals

The story of Koko and Robin Williams is a good example to explain this. Koko, a female gorilla, was trained in American Sign Language (ASL) by Francine “Penny” Patterson. Penny called this language Gorilla Sign Language (GSL).

Penny with Koko

There is a very famous video of the meeting between the actor Robin Williams and Koko. Soon after the death of her gorilla friend Michael, Koko met Robin Williams and laughed after a long time along with him; she played with him, and she even recognized Robin from his movie cassette cover.

Robin Williams having fun with Koko

When Koko’s instructors told her about the death of Robin Williams, she expressed her grief by signing to them as if asking whether she could cry, her lips trembling in grief. See the emotional depth she had, just as normal humans do.

Dolphins are also a good example of human-like intelligence in animals.

Does this mean that Ockham’s Razor, the principle of parsimony and Morgan’s Canon are of no use? What is happening here? What goes missing during oversimplification? What are we misunderstanding?

What goes missing in simplification?

The main problem with Ockham’s Razor, or any of its equivalent philosophies, is the convenience it brings. Just as, by collecting biased data, you can “prove” wrong something which in reality is right, in the same way people have misinterpreted the principle of parsimony.

The key reason William of Ockham supported the principle of parsimony was that he was a nominalist. Nominalism says that there is nothing truly common between the things that exist in reality. Everything has its own individual nature, and what we see as common across many things collectively is just the ‘name’ given to them. The red which we see in blood and in a rose is just the name of the color; there is no such thing as red which actually exists on its own.

This means that for the color we see in things, there is no such thing as color in its absoluteness; it is just a signal our eyes generate to tell the brain the difference between the light absorbed and the light reflected, or the temperature of the surface of the object.

So William of Ockham proposed that, as everything has its own individual attributes, when you are trying to create a philosophy for a group of things you should consider only those individual attributes which are necessary to create the theory.

(William of Ockham himself drifted away from his own ideas of parsimony and nominalism; I will discuss that specifically in the post on the philosophy of Nominalism next time.)

What people still misinterpret today when they talk about Ockham’s Razor is the advice to “select the simplest explanation of things”. That is not what he actually meant.

The story is the same with Morgan’s Canon. Morgan’s main intent was to demand a concrete justification when someone explains human-like behavior in animals. His idea was that conclusions should be reasoning-based and not based on the presumption that the animals in the study had that specific type of intelligence. The idea was to observe without any preconditioning, prejudice, impression or expectation.

I have already explained how Einstein was a genius; he was very well aware that, while creating his very elegant understanding of the universe, he might have missed something at the expense of simplification.

The mathematical form of the Standard Model of particle physics looks like this (maybe sometime in the future I will be able to appreciate and explain it to its core):
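A compact schematic form of the Standard Model Lagrangian that is often quoted goes roughly like this, with each term standing in for a whole family of interactions:

```latex
\mathcal{L}_{\mathrm{SM}} =
    -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}                       % gauge fields: gluons, W, Z, photon
    + i\,\bar{\psi}\,\gamma^{\mu} D_{\mu}\,\psi               % fermions and their gauge couplings
    + \bar{\psi}_i\, y_{ij}\, \psi_j\, \phi + \mathrm{h.c.}   % Yukawa terms: fermion masses
    + \left| D_{\mu}\phi \right|^{2} - V(\phi)                % Higgs field and its potential
```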

Context is everything

Now you will be able to appreciate why Ockham’s Razor is a tool and not the final truth. People exploit Ockham’s Razor to demonstrate their philosophical grandeur and consciously (sometimes unconsciously) simplify meanings in their own favor.

What people ignore is the purpose of chopping off the unnecessary parts in any process of developing an understanding, a philosophy or a theory. The goal was never to simplify things; the goal was to remove things which would interfere in the process of testing our hypotheses.

People often miss the point of parsimony, which is to make a realistic attempt to check how and why our understanding of things differs from their real nature, and how we can close the gap between what we theorize, what we can test and what actually exists.

Context thus plays a very important role in every pursuit of knowledge, even in knowledge of the self. It is important to understand the boundary conditions of our knowledge. One should know where their beliefs (even if true) can be limited, challenged or difficult to prove, because what we know is just a drop and what we do not know is an ocean.

I think what people miss in simplification or parsimony is the context, and context varies from situation to situation.

Scientifically, Newton’s laws of gravitation pose no problem when we are talking about our solar system. In fact, they are so accurate that modern space missions still rely on them. There is rarely any need to use the science of black holes in most such missions.

The context is the precision needed to decide the trajectory of objects in the solar system.

But when it comes to the Global Positioning System (GPS), the theory of relativity becomes important. The bending of spacetime due to Earth’s mass, the resulting difference in the rate at which time passes for the navigation satellites relative to the ground, and the corresponding adjustments to the atomic clocks at these two locations matter a lot. Newton’s laws cannot explain that.

The context is how precisely time can be measured and how the difference in time can be converted into an understanding of the position of an object around the globe.
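A rough back-of-the-envelope sketch of the two competing relativistic effects on a GPS clock, using approximate textbook values for the orbital radius and speed and the standard first-order formulas:

```python
# Rough estimate of the daily clock drift of a GPS satellite relative to a
# ground clock, using first-order approximations and approximate orbital data.
GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
c = 2.998e8            # speed of light, m/s
R_earth = 6.371e6      # Earth's radius, m
r_orbit = 2.657e7      # GPS orbital radius (~20,200 km altitude), m
v_orbit = 3.874e3      # GPS orbital speed, m/s
seconds_per_day = 86400

# General relativity: a clock higher in Earth's gravity well runs faster.
grav_rate = GM / c**2 * (1 / R_earth - 1 / r_orbit)
# Special relativity: a moving clock runs slower.
vel_rate = -v_orbit**2 / (2 * c**2)

grav_us = grav_rate * seconds_per_day * 1e6   # microseconds per day
vel_us = vel_rate * seconds_per_day * 1e6
print(f"gravitational effect: {grav_us:+.1f} microseconds/day")
print(f"velocity effect:      {vel_us:+.1f} microseconds/day")
print(f"net drift:            {grav_us + vel_us:+.1f} microseconds/day")
```

The net result is roughly +38 microseconds per day; multiplied by the speed of light, an uncorrected drift of that size would let position errors grow by about 10 kilometers every day, which is why GPS clocks are deliberately tuned to compensate for it.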

It is easy to demonstrate how Ockham’s Razor still remains important in the scientific community and how scientists are aware of its limitations.

It becomes problematic when we try to understand and justify life with it.

The problem is that we get to decide the context (most of the time)

Call it a blessing that the scientific community is always in a state of self-renewal because it relies on objective evidence, but it is still not immune to missing context or to wishful context. (Falsified, biased scientific studies published to create confusion are the best example of that.)

The best example of losing context while still appearing scientific or unbiased is the debates on news channels, or any debate (sadly) that runs on popularity. You will soon realize that the context of most such debates is to entertain people and create controversies, not to find the ultimate truth or the facts.

At the very opening of this discussion, I explained how our brains try to optimize processing to save energy for better tasks and thereby guarantee better survival. The death of our own beliefs, of our identity, is also a failure to survive. Psychological or ideological death is as real as actual death; for almost all of us it may even be more painful. Religion is one stream of such ideologies where people are ready to die physically just so that the religious beliefs they live for remain alive.

Most people are scared of mathematics not because it is too complicated; they fear it because it shows them the vulnerabilities in their process of step-wise thinking. The same people can be expert dancers, master artists or master instrumentalists – pursuits which involve rather more complicated mathematical manipulation, since music, put simply, is the manipulation of certain sound waveforms. Music theory, harmony, color theory, the physiological manipulation of the body with rhythm and sound are all deeply mathematical concepts. It is just that we do not want to remain in a state of vulnerability for long; it is the equivalent of exposing our cover to the enemy and thereby reducing our chances of survival.

The thing is that nature’s tendency to choose the path of least resistance gets reflected in our own nature too, which is why simplification and Ockham’s Razor seem attractive. But at the same time we forget that it is the same nature whose deliberate and continuous actions against adversity made us who we are and made impossible things possible for us.

Daniel Kahneman explained the two cognitive systems our brain uses in his book Thinking, Fast and Slow.

System 1 is fast and intuitive, good for repetitive tasks but bad at catching biases and errors, and hostile to new and complicated scenarios.

System 2 is slow and deliberate, suited to analytical and reasoning-based tasks, but not efficient for routine tasks.

The people who exploit Ockham’s Razor (even William of Ockham himself – that story will show up in the post on nominalism!) oversimplify things because doing so justifies the beliefs they hold. Such beliefs will stand some limited tests, but the moment they are exposed to universal tests they fail. And that is how religions, sects and faiths operate when they blind people to the real truths. I am not saying religion is bad; I am saying that an appearance of objectivity in religion can be used to show its scientific nature and still fool people. The same can happen in scientific communities; all the pseudo-scientific concepts are great examples of that.

Now you can see the problem. People want to create an understanding of their surroundings not because they really want to understand them, but because it will feed the beliefs they already have, and Ockham’s Razor, the principle of parsimony, is a great tool to facilitate that. In the end, it is just a tool. How it shapes what is created depends solely on the intent of the one who is using it.

That is exactly why, when you are questioning something, standing against something or supporting something, you should ask yourself this one question:

Are you doing this to understand reality, or to feed your own wishful picture of reality?

So, whenever you are trying to understand something, make sure that your context is to really understand the thing, and not to expect it to be the certain thing you wish it to be. Remember, you are the controller of the context, and it is very easy to fool ourselves.

Further reading:

The Essence of Nominalism

Philosophical fate of AI and Humans

Alan Turing was the very first person in the world to formally ask “Can machines think?” The ideas he presented in his famous paper laid the pathways leading to the creation of modern computer science and to the today and tomorrow of artificial intelligence. There is no doubt that there will be a time when machines will be able to think just like humans do, but that should not be seen as a negative. There will be practical limitations to a human-like thinking machine too, so the game will never be one-sided. This should push humanity onto a completely new path of evolution; that is also how we became the humans we are today from the primitive apes.

Alan Turing’s world-famous paper on the future of human-like thinking ability in machines

The holy doubt – “Can machines think?”

We all know how modern machines and computers have great abilities for systematic thinking and for taking decisions accordingly; this is obviously attributed to the very programming embedded into them by us human beings. Many breakthroughs – in the storage capacities of computers, their size, their efficiency, their computational capabilities, the evolution of programming languages, the intersection of neuroscience and computer science, and the accessibility of these highly powerful machines to the masses – have shown the world that such machines can do amazing marvels.

You know where I am going with this. Not mentioning Artificial Intelligence among these breakthroughs would be a straight-up crime. AI has unlocked a totally different capability in computing, about which some are optimistic and some are fearful. In a crude sense, what makes AI stand out from other concepts of computing is its ability to change its own programming to achieve a given goal. This concept is perfectly normal even for today’s child.

But would you have been open to such a self-programming machine 75 years ago? A time when there were only mechanical calculators, when electronic computers were in their infancy and were created only for certain restricted problem solving and number crunching. Even the experts of those times found the idea foolish because of the practical limitations of the era. How could a machine think like a human being when it takes so many resources just to do some mechanical number crunching, when it has no consciousness of its own, no soul, and no emotions with which to react to a given stimulus? In simple words, “thinking” was somehow regarded as a special ability humans received because of the soul they have, the conscience they have (granted by nature, the Creator, the Almighty, God or whatever – some higher power).

It is our tendency as human beings to hold this notion of being the superior species among all, which brings in the confidence that machines cannot think. That is why the idea seemed foolish, but now we are comfortable (to some extent, though not completely) with the idea of thinking machines.

Alan Turing – a British mathematician, the code breaker of Enigma, the man who helped Britain remain strategically resilient in World War 2, the father of theoretical computer science – wrote a paper which laid down the blueprint of what a future with AI would look like. For the time when the paper was published, all its ideas seemed imaginary, impractical and totally impossible to bring into reality. But as times changed, Alan’s ideas have become more and more important for the times in which we are living and for the coming future of Artificial Intelligence.

Weirdly enough, this paper which laid the foundations of artificial intelligence – of thinking machines – was published in a journal of psychology and philosophy called “Mind”.

The world-famous concept of the ‘Turing Test’ is explained by Alan in this very paper. He called the test a game – an “Imitation Game”.

The paper reflects the genius of Alan Turing and his foresight of the future – a future with thinking machines. After reading it you will appreciate why and how Alan was able to point out exactly the problems that would arise in the future, along with their solutions. He was only limited by the advancements that had not yet happened in his time.

The Imitation Game   

Alan posed a simple question in this paper –

Can machines think?

The answer today (even after 75 years) is of course still a straight “NO”. (Deep down, though, we are realizing that even if machines cannot think, they are getting much closer to copying the actions involved in thinking – to “imitating” a thinking living being.)

The genius of Alan Turing was to bring practicality to finding answers to this question. He created very logical arguments in this paper, where he used the technique of proof by contradiction to argue for the feasibility of creating such a ‘thinking machine’. The AI which has evolved today is the very result of following Alan’s blueprint for making thinking machines.

The famous Turing Test – the Imitation Game – is a game in which an interrogator has to tell the difference between a machine and a human being from the responses they give to his or her questions.

The machine is not expected to think like humans, but at least to imitate them. The responses may feel completely human, but it is not a condition or compulsion that the machine should think exactly like a person. This practicality introduced by Alan, and the arguments he built upon it, expose the limitations in how we ourselves actually think or make decisions. This paper will change, and also challenge, the way we think about or do anything. It might humble you if you believe we are superior beings because we can think and have or express emotions. (Trust me, you would also end up questioning ‘What is love?’ if love was your next justification of our superiority after reading this paper, but that is not what Alan was focusing on when he wrote it.)

The idea is not to create an artificial replica of a human; it is to create a machine which responds just as humans do. The goal is to make its responses indistinguishable from those of ‘real’ human beings.
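Purely as an illustration of the protocol (every name and canned answer below is invented; a real test is free-form conversation, not a lookup table), the structure of the game can be sketched like this:

```python
# Toy sketch of the imitation game protocol: an interrogator sends the same
# questions to two hidden players (one human, one machine) over a text-only
# channel and then has to guess which is which. The "players" here are
# trivial stand-ins invented for illustration only.
import random

def human_player(question: str) -> str:
    canned = {
        "Do you ever get tired?": "Yes, especially after a long day.",
        "What is 12345 * 6789?": "No idea off the top of my head, give me a minute.",
    }
    return canned.get(question, "Hmm, let me think about that.")

def machine_player(question: str) -> str:
    canned = {
        "Do you ever get tired?": "Sometimes I feel a bit worn out in the evening.",
        "What is 12345 * 6789?": "I'd have to work that out on paper, honestly.",
    }
    return canned.get(question, "That's an interesting question.")

def imitation_game(questions):
    # Hide the two players behind anonymous labels A and B, in random order.
    players = [human_player, machine_player]
    random.shuffle(players)
    assignment = dict(zip("AB", players))
    # The interrogator only ever sees the typed answers, never the players.
    return {label: [player(q) for q in questions]
            for label, player in assignment.items()}

transcript = imitation_game(["Do you ever get tired?", "What is 12345 * 6789?"])
for label, answers in transcript.items():
    print(label, answers)
# The interrogator's task: decide from the transcript alone which of A and B
# is the machine. The machine "wins" if the guess is no better than chance.
```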

There are hundreds of simplified explanations of the Turing test out there (ask ChatGPT if you want), the test which Alan discusses in this paper, but that is not what I want to focus on from here on.

I will focus only on the arguments made by Alan to prove why it is completely practical to create human-like thinking machines. My intent in doing so is to show how we as humans can also be challenged by our own practical limitations. These arguments also show a way in which humans may get overpowered or surpassed by AI. This does not mean that AI will eradicate humanity; rather, it shows new pathways along which humanity would evolve. So for me the arguments end on an optimistic note. Surely AI will take over some of the things which make us who we are, but it will also push us onto some completely unconventional pathways of rediscovery as a smart species.

The way in which Alan framed the Imitation Game was as a series of questions and answers – an interview. You might ask why he did not think of a challenge where an exact human-like machine had to be created – that would be more challenging for the machines. I think the idea behind rejecting the necessity for the machine to be in human form goes like this:

Creating a human body is very similar to cloning a human body or augmenting human parts onto a mechanical skeleton. What is more difficult is to impart the consciousness and the awareness which is (supposedly) responsible for thinking in humans. So even if a fully developed machine looking exactly like a human being were standing in front of you and you were unable to tell that it is a machine, the moment that human-like machine started expressing its thoughts, everything would easily be given away.

In simple words, Alan was confident that biological marvels, genetic engineering and cell engineering would easily take us to the physical replication of the human form. What would be difficult is to create the set of logics (or self-thinking mechanism) which would demonstrate human-like thinking capabilities. And such abilities can easily be checked by a mere one-on-one conversation. Such was the genius of Alan Turing, to capture such complexities in the simple experiment of the Imitation Game.

We as human beings have certain insights, intuitions (I don’t want to use this word but I have no better alternative), which give away whether we are talking to a machine or a human.

What Alan did masterfully, and why he deserves full credit, is that he pointed out the factors which can make machines respond and ‘think’ more like humans. While laying out the confusions about the nature of the human mind, consciousness, awareness, thoughts, and their limitations and ambiguity, Alan also gave possible arguments to resolve these confusions.

Alan argues that human-like thinking machines can be created, and he does so by contradicting the objections raised against the idea. I will dive deep into these objections from here on:

  1. The theological objection

The rigor that Alan used to prove his point deserves appreciation. Despite being a logical thinker and mathematician, he cared to answer the religious point of view; he wanted no stone left unturned while making his argument.

Alan aggressively (verbally) hammered the idea of God’s exclusivity in granting an immortal soul only to humans, the soul responsible for making humans think. Alan says that if the soul is the reason, then animals have souls too. The true comparison should then be between living and nonliving things to support the point that machines cannot think: because they are nonliving things, they have no soul, so they cannot think.

But if the great Almighty can give a soul to an animal, then why would this omnipotent God decide not to give the same soul to machines? Alan knew that any blind theologian would find a contrived argument to defend the idea, so he clarified his point by presenting the historical mistakes religious institutions have committed because the truth was hard to swallow. Alan gives the example of Galileo, who argued, against the ideas of the Church, that the Earth was not the center of the universe. The Church was later proved wrong.

So even if religious arguments may seem easy to understand, easy to ‘swallow’, if they do not fit the logic it makes no sense to carry them forward. The theological claim which Alan pointed out – ‘as machines have no soul granted by God, they cannot think like humans’ – was totally unfounded. He counters it using the logic of God remaining the ultimate creator.

Alan explained that in attempting to create a human-like thinking ‘thing’ we are not stealing the powers of God; it is no crime or blasphemy. Does procreating and making children, ‘to whom God also grants the soul for thinking’, count as a crime? In a similar spirit, thinking machines are our children, whom God too should bless with his powers.

“In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

No doubt he would also have been a great priest if he had thought of changing his career to theology.

  2. The ‘Heads in the Sand’ objection

Alan gives the worst-case scenarios regarding the superiority of the human species over all other species. What if we really are “the superior” species? If that is true, then there is no reason to worry about thinking machines; they won’t surpass us.

But what if what we know is wrong? We have been proven wrong many times in history. What if we are not the superior species? Then there is no sense in blindly believing that we are. Rather, this illusion of superiority steals from us the chance to fight the battle for superiority.

So in either case, we cannot run away from the fate of thinking machines versus humans. We may fake it, run from it, hide it from the rest of the population, but it is not in our favor to do so.

“We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position.”

  3. The Mathematical Objection

Very beautifully, Alan brought in Gödel’s incompleteness theorem to address the mathematical argument. According to Gödel’s incompleteness theorem, if we set out to prove every mathematical statement there is, we end up with some statements for which no proof exists within the system. In order for the whole mathematical system to remain stable and consistent in its logic, one has to accept such statements without proof. Once such logically unprovable but true statements are found, accepting them creates a new system of mathematical understanding.

In simple words, every sufficiently rich logical system of mathematics turns out to be incomplete in the end; in order to settle such an unprovable statement, a new axiom must be accepted, which creates a new system of mathematics (which in turn will again be incomplete).
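Stated a little more carefully (a standard textbook paraphrase, not Gödel’s or Turing’s exact wording), the first incompleteness theorem says:

```latex
\textbf{G\"odel's first incompleteness theorem (paraphrase).}
For every consistent, effectively axiomatizable formal system $T$ that is
strong enough to express elementary arithmetic, there exists a sentence
$G_T$ in the language of $T$ such that
\[
    T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T ,
\]
so $T$ is incomplete: $G_T$ can neither be proved nor refuted inside $T$,
even though it is true in the standard model of arithmetic.
```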

A further oversimplification goes like this:

A farmer would not know how to make a shoe, so he would need the knowledge of a cobbler. A cobbler would not know how to make metal tools, so he would need the help of a blacksmith. And even after sharing each other’s knowledge and skills, they must accept certain rules of thumb passed down from their ancestors (relied upon as true but never proved) in order to master each other’s crafts.

So, the objection goes, even if you create a thinking machine based on a purely mathematical system, the very limitations of mathematics will stop it from overpowering and surpassing humans.

This does not mean, however, that thinking machines are simply defeatable. A machine built on one mathematical system, in a totally different domain, could shore up another machine’s incomplete system, just like the villagers with their different professions.

Alan Turing’s doctoral thesis builds on the ideas of Gödel’s incompleteness theorem, so it is a joy to read these arguments in this paper. They are well formed and super-intelligent.

(If you are really interested in what this argument means, you can look into the efforts that went into proving Fermat’s Last Theorem. A new field of mathematics had to be developed to prove this simple-to-state but difficult-to-prove theorem.)

There will always be something cleverer than the existing one – for humans and for thinking machines too.

“There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.”

  4. The argument from Consciousness

Even if the machine were feeling and thinking exactly like a human being, how could the “real humans” know that it does so? – this is Alan’s next argument.

“The only way to know that a man thinks, is to be that particular man. It is in fact the solipsistic point of view. It may be the most logical view to hold but it makes communication of ideas difficult.”

Communication between machines and humans, and its quality, would be the key evidence for judging whether a machine thinks like a human being or not. Even if the machine really is thinking exactly like humans, it is futile if it cannot communicate that to humans.

(That is exactly why the Turing test, with mere typed communication, is more than enough to check the thinking ability of machines.)

It took the great philosophical mind of Alan to use the limitations of solipsism to justify his point. According to solipsism, the whole world exists only in the mind of the person, because if the person dies it does not matter whether the world is there or not.

The key limitation of solipsism is that your survival is not directly controlled by your mere thinking. If I think ‘I am dead’, that does not immediately kill me. If I think that I have eaten a lot without actually eating anything, that does not end my hunger in ‘reality’. So reality is not only your mind.

Solipsism also fails to explain the common experiences we have as a group. If my mind were my world, I could create any rules for my world and things would always go as I desire. But that does not happen in reality. There are certain ways, certain truths, which are common to all of us; that is why our world is not just our mind – it is, rather, a shared world. You alone are not the representation of the whole of reality.

So even if we accept that the machine ‘inwardly’ thinks like a human being, it has to share some common truths with the interrogator to prove its human-like way of thinking.

“I do not wish to give the impression that I think there is no mystery about consciousness. There is for instance, something of a paradox connected with any attempt to localize it. But I do not think these mysteries necessarily need to be solved before we can answer the question…. (the question – can machines think? Can they at least imitate humans? – the Imitation Game)”

  5. Argument from various disabilities

Here Alan challenges the idea that even if machines succeed in thinking exactly like humans, they will not be able to do certain things which humans do better.

It’s like a human saying to a thinking machine –

“You machines can think like us, but can you enjoy literature and poetry like we humans do? Can you have sex just like humans do, enjoy it and procreate just like we do? This is exactly why your thinking is not human thinking.”

The key point Alan is trying to make is that people always need a justification of a given machine’s ability (through its way of working – its architecture, its technology, its components, its sensors) in order to accept that capability of the machine. And when we present these justifications, we are also indirectly telling people what the machine cannot do, and thereby its disabilities. One ability points to another disability.

People do not accept black-box models as a justification of a machine’s ability.

“Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities.”

In the same fashion, one argument is that even if a machine could think like humans, it could hardly have its own opinions – be the subject of its own thought. Alan strikes that down too.

“The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter.”

The key disability which prevented Alan from building a working thinking machine was the enormous storage space required. You will appreciate this point today, because you know how drastically storage capabilities have improved over time. These improvements in storage created the AI we see today; processing power is also a factor, and there are others too, but it boils down to the ability to handle lots and lots of data simultaneously.

Alan had the mathematical insight that once storage capacity expanded enough, the thinking machine would become a practical reality. (Researchers are now working not only on further improving storage capability but also on effectively compressing data – ask ChatGPT about the Hutter Prize.)

So Alan makes the point that, in order to have a variety of opinions and to ‘think for itself’, a machine does not need more logic; it needs enough storage space to process everything simultaneously and create a new thought. In human terms, the more information and logic you can handle, the crisper your understanding becomes. The same would be the case for thinking machines.

“The criticism that a machine cannot have much diversity of behavior is just a way of saying that it cannot have much storage capacity. “

  6. Lady Lovelace’s Objection

Charles Babbage was the first person to technically design a calculator with memory – a programmable computer, which he called the Analytical Engine. Ada Lovelace, who knew how the Analytical Engine worked, created programs for it and published them to the masses to prove the effectiveness of the Analytical Engine. She was the first computer programmer.

Lady Lovelace’s key argument is based on the idea that a computer, and thereby a thinking machine, cannot think for itself because it can only use what we have provided it. Since we provide only what we know and have, it cannot think outside of that information and generate new understanding. Machines cannot think “originally”.

Alan strikes down this argument easily using the idea of sufficient storage space. If the machine can store a large enough body of data and instructions, it can create new inferences – original inferences.

“Who can be certain that ‘original work’ that he has done was not simply growth of the seed planted in him by teaching, or the effect of following well-known general principles.”

Alan questioned the very nature of originality. Only a genius can do that, in my opinion. Alan showed the world that the things we call original are inspired by, or copied from, something already in existence; it is just a matter of how unfamiliar we are with this ‘new’ thing.

He builds further on this, saying that if machines can think originally then they should surprise us. And that is the reality: machines do surprise us by using unconventional approaches to our daily tasks.

Alan links a further argument for justification: if machines can think originally, then they can surprise us. For us not to be surprised, we would have to grasp immediately every consequence of whatever the machine presents, which never happens when such events occur. So machines can think originally and can surprise us.

“The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it.”

What a brilliant argument!

  7. Argument from Continuity

“The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behavior of the nervous system with a discrete-state system.”

Alan here addresses the idea of creating thinking machines by mimicking the nervous system, which is a continuous system – a system which works with waves and signals (analog) and not with ones and zeros (discrete).

Alan says that even if we used such an analog system in the Turing test, the outputs it gave would be probabilistic instead of definite. This would actually make it difficult for the interrogator to distinguish the human response from the machine one, because humans too are frequently unsure and give such probabilistic answers.

  8. The argument from Informality of Behavior

“If each man had a definite set of rules of conduct by which he regulated his life, he would be no better than a machine. But there are no such rules, so men cannot be machines.”

The idea is that machines work on certain defined rules; even if they can alter their own programs by themselves in order to think like humans, it feels obvious that they will be more formal and more bound to their rules while responding. This formality would give away their non-human nature.

Alan questions the very nature of what it means to have laws in a logical setup. Drawing support from Gödel’s incompleteness theorem, not even a single logical system can rest confidently and purely on its own laws. It has to assume some arbitrary starting point in order to make sense of given data, even when it is using a mathematical framework. (Remember the simulations where you put garbage in and the simulation runs perfectly, giving garbage out. You only know it is garbage because you have objective tests for judging the output against reality.)

There is no such objective way to judge the informality of a system – the word and the logic itself say it all. Our search for formal laws will never end, and it will always keep creating new laws, new inconsistencies and new informalities. There is no end.

“We cannot so easily convince ourselves of the absence of complete laws of behavior as of complete rules of conduct. The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.’”

  9. The Argument from Extra-sensory Perception

“The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go. This argument is to my mind quite a strong one. One can say in reply that many scientific theories seem to remain workable in practice, in spite of clashing with ESP; that in fact one can get along very nicely if one forgets about it.”

Again, Alan left no stone unturned. He made sure that even pseudo-science fails to support the idea that machines cannot think like humans.

He explains that even if the human competing against the human-mimicking machine had telepathic abilities to know the states of the machine, or even of the interrogator, it would only confuse the interrogator. The only thing such a telepathic person could do differently is to under-perform intentionally, which again would confuse the interrogator.

The idea is that even when we are not sure how such supernatural things work, our current understanding of things and of how they work serves us just fine. The supernatural does not interfere with our formal understanding of nature and reality.

The implications of Alan Turing’s Paper on Computing Machinery and Intelligence

All the ideas explained by Alan in this paper fed into modern technologies such as efficient data storage, data compression, artificial neural networks, self-programming machines, black-box models, machine learning algorithms, iterative learning, data manipulation and thereby data science, analog computing, self-learning and supervised learning algorithms, Generative Pre-trained Transformers (GPTs) and what not.

This paper is a holy grail not only for modern computer science but also for literature and popular culture. Once you appreciate the ideas in this paper, you will be able to see their traces across all the modern science fiction we are consuming all the time.

Alan created practical ideas which could be implemented in the future, based on the coming technological revolutions he foresaw. He knew logically that it was possible, but his genius was to lay the practical foundation of what needs to be done and how – a foundation which guides our generation and will guide future ones.

Conclusion

What is there for humans if machines start thinking like humans?

For this, I will address each argument posed by Alan:

  1. The theological objection

God will actually bless us, because we extended his (or her, I don’t know) powers to create, through thinking machines, something like his own creation.

  2. The ‘Heads in the Sand’ objection

Even if thinking machines surpass us, we will have to live with it and create a new ecosystem for ourselves to ensure our survival. Even though for the time being we are the superior species, other species exist alongside us in the same time with their own special abilities. There is no running away from any possible outcome of this scenario.

  3. The Mathematical Objection

Mathematics itself restricts a single machine from knowing everything. So even if multiple machines come together to create superior understanding, the same can happen for humans. There will always be this race for superiority; sometimes machines will lead, sometimes humans will. There is no conclusion to this race, as far as the inherent limitation of mathematics goes.

  4. The argument from Consciousness

A machine has to be the communicator of its human-like thinking; it cannot remain in the dark abyss of self-cognizance, away from humans. If a machine starts thinking like humans, we will all definitely know about it. A machine has to communicate its awareness to us; it will be a surprise, but a very short-lived one.

  5. Argument from various disabilities

If we don’t know how machines think like humans, that will not prevent them from doing so. We will have to accept the black boxes through which machines think like humans; that is the only sane way out. We humans too are filled with disabilities, but they are not directly linked to the ways we are able to think.

  6. Lady Lovelace’s Objection

Machines will surprise us, and they can also create original ideas, because what we call original is simply something that lies outside the limits of our current thinking. Rather, it is an optimistic thought that if machines could think like humans do, they might give us totally new ideas for new discoveries and breakthroughs.

  7. Argument from Continuity

A continuous thinking machine or a discrete thinking machine can both confuse humans if they achieve their thinking potential. So there is no point in creating an analog thinker to beat a digital thinker; we ourselves are analog thinkers.

  8. The argument from Informality of Behavior

No system will ever have all its laws already established; the system has to keep creating new laws to account for new events and outliers. The process is never-ending. So even if machines surpass humans at thinking, we too have the advantage of informality to make the next move.

  9. The Argument from Extra-sensory Perception

Even if supernatural abilities are proven to exist, they will contribute little to nothing to the thinking abilities of machines. So even if you are a telepathic reader, a human-like thinking machine could fool you without exposing its real machine identity.

Going through all this, you will appreciate how limited our human thinking is. There is no doubt that there will be a time when machines will be able to think just like humans do, but that should not be seen as a negative. There will be practical limitations to a human-like thinking machine too, so the game will never be one-sided. This should push humanity onto a completely new path of evolution. That is also how we have become the humans we are today.

Further references for reading:

  1. A. M. Turing, “Computing Machinery and Intelligence”, Mind, Volume LIX, Issue 236, October 1950, pp. 433–460, https://doi.org/10.1093/mind/LIX.236.433
  2. Understanding the true nature of Mathematics- Gödel’s Incompleteness Theorem
  3. Questioning Our Consciousness – Solipsism

The Utility of Human Life and Morality

Why doesn’t Batman kill all his villains once and for all? Why does the sentence passed by judicial systems in certain heinous and extraordinary crimes feel unjust compared to the pain the victim went through? How can one tell whether a given person was right or wrong when he or she had no intent of doing it? Can you just look at the end consequences of the actions and decide right or wrong in such scenarios? Jeremy Bentham’s philosophy of Utilitarianism tried to answer some of these questions, but it revealed certain flaws in our ways of judgement. Even though hedonism and utilitarian philosophy create an objective model of morality, they fail to address the subjective and human aspect of any moral discussion. It reveals that the purpose of living is not mere happiness but self-improvement, and thereby mutual and overall improvement.

How to judge morality and its impact on human life?

The Moral Dilemma

A healthy sense of good and bad makes a society livable. There are some special, rare events in the society we live in which challenge our idea of what is good and what is bad. There are countless offenses, of varying types, which create the problem of who should actually be punished and what the punishment should be.

An eye for an eye will make the whole world blind.

Mahatma Gandhi

If this is really the case, the law and order system should punish the offender in such a way that it deters future perpetrators from committing such crimes again. But again, as the above quote goes, if the punishment given for the crime is equally cruel, then what exactly are we trying to establish through such punishment?

It’s like the scenario where murdering a murderer creates a new murderer, so the net number of murderers in the society remains the same. An Italian philosopher, Cesare Bonesana di Beccaria, gave this some thought. In his book ‘Of Crimes and Punishments’ he discusses how, as punishments grow crueler and crueler, the collective mindset of the people grows crueler too. It is like how water levels itself irrespective of the depth: the baseline of what is right and wrong, and furthermore of what is more wrong and what is more right, shifts upward. Crueler and crueler punishments reduce the sensibility of the people of that society. This could be one reason why people always argue that the judicial system does not provide punishment equivalent to the crime as justice for the victims of certain heinous, exceptional cases. (Although there are many other factors in making such decisions.)

“In proportion as punishments become crueler, the minds of men, as a fluid rises to the same height with that which surrounds it, grow hardened and insensible; and the force of the passions still continuing, in the space of a hundred years the wheel terrifies no more than formerly the prison. That a punishment may produce the effect required, it is sufficient that the evil it occasions should exceed the good expected from the crime, including in the calculation the certainty of the punishment, and the privation of the expected advantage. All severity beyond this is superfluous, and therefore tyrannical.”

Cesare Beccaria, Of the Mildness of Punishments from ‘Of Crimes and Punishments’

In similar spirit, the relationship between Batman and Joker can be understood. Joker never cares about killing people he will try to stretch the limits of batman in every possible sense where innocent lives are at stake. Batman has one solution to stop all this – to kill the Joker. But with a high moral ground Batman would never kill Joker. What is the motivation behind such character design of Batman. Batman knows that killing Joker would solve the problem once for all. Believe me, this is not just a fictional comic book scenario. The reality that we live in has uncountable such scenarios where exactly same decision dilemmas occur.  

The famous trolley problem points to a somewhat similar moral dilemma. Where should the trolley be directed if one track has a single person tied to it and the other has five? Nobody wants blood on their hands.

But the same trolley problem becomes interesting if you start adding attributes to the people who are on the track.

What if the single person tied to the track is a scientist with the cure for cancer and the five people on the other track are criminals? Then you would definitely kill the five criminals instead of the single scientist.

Did you notice what change made us decide faster? The moment we understood the consequences of our actions, we had clarity about what is right and what is wrong. Our moral compass pointed north the moment we foresaw the consequences of our actions.

The foundations of some principles of morality are based on similar ideas. Utilitarianism and the ideas of Jeremy Bentham, an English philosopher, have contributed to humanity’s notions of morality, especially when we talk about human society as a whole. Bentham’s ideas also faced severe criticism, and we will see that in detail too. But the key intention of my exploration is to understand how we create the meaning of morality and how subjectivity and objectivity totally change the way we perceive it. In the end we may hit rock bottom, questioning whether morality exists at all – and if morality is non-existent, then what separates human beings from animals? (I hope to enter this territory with some optimism; I don’t know where it will end.)

Utilitarianism

As the trolley problem already showed, adding one simple, short piece of information shifted our moral compass in the (supposedly) proper direction. What did this information add to the dilemma to make it solvable?

The answer is the foresight of consequence. Once you saw the consequence an action leads to, you got hold of what is right and what is wrong. You decided one side to be right and the other to be wrong. This foresight of consequence helped you weigh the ‘right’-ness of your decision.

Utilitarianism measures morals by the consequences of the actions you take. What is the other side of taking actions? It is ‘the intent’. This is where the fun begins.

Philosophers are always fighting over whether morals rest on the intent of the person or on the consequences of the actions they take. For example, thinking of murder (pardon my thinking) makes me less of a convict than actually murdering someone; my thinking has not led to the loss of the person I hate. Utilitarianism thus calls for a construct of morality based on actual actions and their consequences; it’s like saying ‘what a man is, is more about what he does than what he thinks’.

Hedonism, Utilitarianism and Jeremy Bentham

Happiness is a very pretty thing to feel, but very dry to talk about.

Jeremy Bentham

Jeremy Bentham, an English philosopher, contributed to the utilitarian ideas of morality. He was not well appreciated in his home country due to the misalignment of his ideas of socio-political reform with the British sovereignty of those times. The French translations of his works on law and governance made him popular among the French, and Bentham was one of the people who pushed for political reforms during the French Revolution.

While reading Joseph Priestley’s Essay on the First Principles of Government, Bentham came across the idea of the “greatest happiness for the greatest number”, which motivated him to expand the ideas of utilitarianism.

Priestley promoted the idea of “laissez-faire” (‘allow to do’ in French) – a policy of minimum governmental interference in the economic affairs of individuals and society. Priestley developed his ideas on politics, economics and government from those of Adam Smith (author of The Wealth of Nations – the holy grail of classical economics).

The Greek philosopher Epicurus was a proponent and shaper of hedonism. Hedonism reduces ethics to pleasure and pain: according to hedonism, that which gives pleasure is morally good and that which gives pain is morally wrong. The idea behind hedonism is the aversion of pain so as to live an undisturbed life, because none of this will make sense once you are dead. According to Epicurus, the fear of death and retribution pushes people to accumulate more wealth and more power, thereby causing a more painful life. The accumulation of wealth and power is done in the belief that they can avert death, but that is not the reality. So, worrying about death sucks the pleasure out of living life, which itself is equivalent to death.

Non fui, fui, non sum, non curo
(“I was not; I was; I am not; I do not care”)

Epicurus

So, Epicurean hedonistic morality tries to maximize pleasure. The other end of this idea: if everyone tries to maximize their own pleasure (egoistic hedonism), wouldn’t it disturb others?

If I want to listen to a song on a loudspeaker while bothering my neighbors, what is the moral standpoint here?

The answer is the overall good of the system. So, if your neighbor also wants to listen to loud music and, overall, loud music is good for the group, then we are morally right to play it loud. (Just pray that the group has the same taste in music!)

So, Jeremy Bentham is known for rejuvenating this ancient hedonistic philosophy through his philosophy of utilitarianism.

The basic idea behind Utilitarianism is to maximize the utility, the value, of anything. Utility can be increased by doing what is right, which means doing what gives more pleasure or avoiding those things which increase or give pain.

Utility is a property which tends

  1. To produce benefit, advantage, pleasure, good or happiness
  2. To prevent the happening of mischief, pain, evil or unhappiness

So, the right action is the one that produces and/or maximizes overall happiness. Please note that the word “overall” is important for Jeremy Bentham’s philosophy of Utilitarianism, because from a selfish point of view, what is pleasurable for one may not be pleasurable for others. (This is also where certain philosophical problems of Utilitarianism are hiding; save this point for later.)

To solve this bottleneck of clarity, two types of pleasure are distinguished in human life – happiness from the senses and physical experiences, and happiness from the intellect. Intellectual happiness is ranked higher than the pleasure of the senses. So, for personal moral dilemmas these two attributes can resolve the problem.

All good on the personal level, but what about moral decisions for the group, for society? Here, Bentham resolved the dilemma using the idea of the “greater good for all”. When we don’t agree on what makes us happy together, sacrificing some of your happiness to make others happy is the solution. (Keep this idea parked in your mind.)

“Nature has placed mankind under the governance of two sovereign masters – pain and pleasure. They govern us in all we do, all we say and all we think.”  

Jeremy Bentham

Felicific Calculus – Measuring happiness

Jeremy Bentham is known as the Isaac Newton of Morality for developing the felicific calculus (also called the hedonistic calculus). Bentham pointed out the key factors which affect net happiness; using these factors’ combined effect, one can quantify happiness.

Following are the factors which affect the happiness:

  1. Intensity – how strong is the pleasure from the given action?
  2. Duration – how long does the happiness from the given action last?
  3. Certainty – what is the likelihood that the given pleasure will occur?
  4. Propinquity – how soon/ immediate is the occurrence of the pleasure?
  5. Fecundity – what is the possibility that this pleasure will also lead to newer pleasure(s)?
  6. Purity – what is the chance that this pleasure will not bring some opposite sensation?
  7. Extent – how many people are affected?

If one considers these factors together with the principle of maximizing communal happiness, most social moral dilemmas can, in principle, be solved.
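To make the bookkeeping concrete, here is a toy sketch (entirely my own illustration – Bentham never specified numeric scales, and the factor scores below are invented purely for demonstration):

```python
# Toy felicific-calculus scoring. The -10..+10 scales and all example numbers
# are assumptions made only for illustration; they are not Bentham's.

FACTORS = ["intensity", "duration", "certainty", "propinquity",
           "fecundity", "purity", "extent"]

def felicific_score(action):
    """Sum the seven factor scores: positive means net pleasure, negative means net pain."""
    return sum(action[f] for f in FACTORS)

# Two hypothetical actions, each scored on every factor from -10 to +10
actions = {
    "play loud music for myself": dict(intensity=4, duration=2, certainty=8,
                                       propinquity=9, fecundity=1, purity=-3, extent=-2),
    "organize a street concert":  dict(intensity=6, duration=3, certainty=5,
                                       propinquity=4, fecundity=4, purity=-1, extent=7),
}

for name, scores in actions.items():
    print(f"{name}: {felicific_score(scores):+d}")
```

On these made-up numbers the concert wins, which is exactly the “overall happiness” logic that the examples below push to its uncomfortable conclusions.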

So, according to this felicific calculus,

  1. Batman should kill the Joker for the greater good of Gotham
  2. The trolley should run over the group/ person whose existence creates more pain for society
  3. Baby Hitler should be killed the moment we get the chance to travel back in time

You must appreciate the clarity which the felicific calculus brings. This clarity is very important for policymakers and politicians when deciding the fate of a group, a state or a nation as a whole.

Now a simple question –

If Batman keeps on killing the villains, won’t he become the greatest killer of them all? What would differentiate Batman from the other villains?

What would happen if you were given false information about the people tied to the tracks while riding that trolley? Could your wrong decision be undone? If it was the wrong decision, then now ‘you’ are morally wrong, with the blood of innocents on your hands.

You would kill baby Hitler only because you have the foresight that this baby will grow up to be a mass-murdering tyrant. The mass murder hasn’t happened yet. So now you are the killer of a currently innocent baby.

Holding on to the same feeling, you can now appreciate why, even for a strong judicial system, granting capital punishment to rapists and terrorists is morally difficult. You would be solving the problem for now because the act has already been done and the consequences have already happened (which is why such moral judgement feels effective, as it relies on consequences). Killing the perpetrators or punishing them with equal pain might bring peace of mind under these principles of morality, but it also degrades the morality of the innocents who stoop to it. It is not a matter of what one deserves for the wrong done to them; it is about how much less human you become once you perform that act of punishment.

Recall Beccaria’s quote from earlier in this discussion.

Killing the Joker would create fear among the other villains, but it also creates the chance of an even more dangerous villain emerging in the future.

Killing baby Hitler doesn’t guarantee the prevention of the World War and the mass murders; our personalities are the result of our surroundings, and another Hitler-like person could have emerged in the same circumstances. (I honestly don’t know whether he/she would be worse or less harsh than the original, but you get the point – the conditions would likely have produced another cruel person anyway.)

Jumping out of the trolley seems the best way to run away from the pain of murdering unknown people (joking). The trolley dilemma remains a dilemma.

Also, the felicific calculus allows pain for small groups for the betterment/ pleasure of the bigger society. For example, according to this utilitarian idea, killing a few healthy convicted prisoners to save the lives of many innocent people by harvesting the prisoners’ organs is justified. It is all for the greater good in the end.

You see where this goes?

See the level to which any human or group could go if they start justifying their moral rightness using these ideas. Using these principles, any big group can overpower minorities in a “morally right” way. It is just a matter of time before the principles of the felicific calculus get exploited for other “immoral” gains.

That is exactly why many people criticized the felicific calculus, saying that a pig lying in the mud its whole life would count as happier than a human being (Socrates, to be specific) if Bentham’s calculus were used to decide morality.

In a crude way, there are two types of Utilitarianism which help to solve the problem to a certain extent, though not completely:

  1. Act Utilitarianism – act for the greater good of all
  2. Rule Utilitarianism – set rules in such a way that no one inherently suffers pain, or everyone is happy, because actions and their consequences are now bound by a fixed set of rules in the first place

Happiness is not the ‘only’ and the ultimate goal – the limitations of Jeremy Bentham’s Utilitarian Philosophy

What people were not ‘happy’ about in Jeremy Bentham’s felicific calculus was that it made humans look like machines, purely objective. People don’t always want happiness, for themselves or for the group’s greater good. Exercising daily and cutting fat and sugar may be painful, but it promotes a healthy, illness-free, long life. Doing drugs insulates a person from pain, but it damages their long-term physical and mental health. Hardships and pain push people to reach their difficult goals, and that is the real and ultimate happiness for them.

Happiness is not always the goal of life. If one is completely tangled in the pleasures of life, and everyone has the same mentality, then in the end no one will be happy, because as a group we would never agree on what makes us happy; the different environments in which we grew up, our personal experiences, our upbringing and our motivations prevent us from creating a common definition of happiness.

The subjective side of pleasure and pain is absent from Bentham’s philosophy of Utilitarianism. Building further on that, a victim who has suffered from a morally wrong action will only be satisfied when he/she gets justice, not when they are made happier than their perpetrators. (And this justice, again, must not be mechanical and objective like the felicific calculus.)

One more flaw of Bentham’s utilitarianism is the imbalance between personal scenarios and communal scenarios. In most cases it demands personal sacrifice, irrespective of one’s subjective morality, for the betterment of the group. (That is exactly how many cruel dictators of the past have justified the moral correctness of their acts against minorities.)

The British philosopher Bernard Williams presented a thought experiment to highlight this flaw of Utilitarianism.

In this thought experiment:

A botanist on a South American expedition is ordered by the soldiers of a cruel regime to kill one member of an Indian tribe. If the botanist refuses, the soldiers will execute all of the tribe members.

So, if we implement utilitarian principles, the botanist should kill one Indian to save all the rest. That would be morally right.

But on the other hand, one must also understand that the botanist has nothing to do with the cruel regime, or even with the indigenous tribe. He is under no moral obligation to do anything. Yet the consequences are arranged in such a way that whatever he does, he will be called morally wrong. Which, in the end, is itself wrong.

The utilitarian philosophy neglects this subjectivity, the person’s own position within the consequences, when we are deciding the morality of anything.

Maybe that is also why, even when we have all the rules in place and a penal code covering all types of offenses and similar crimes, we still have a judge – a subjective observer of the particular consequences – to grant the final justice.

You must understand that this discussion does not aim to paint Utilitarianism as a completely wrong idea. The intent is to understand how to de-clutter a complex moral scenario and how to inject subjectivity into it so that the right person gets justice in the end. As we are human beings and not machines, every day brings new subjective scenarios with new subjective moral dilemmas. Direct implementation of utilitarianism may bring transparency to the moral puzzle, but at the expense of oversimplification and the loss of personal subjectivity, of the personal view of the consequences, and even of the person’s freedom to exist.

The way in which Utilitarianism brings immediate clarity by eliminating some important subjective aspects is dangerous and limits the judgement of real morality. Friedrich Nietzsche, in his book Beyond Good and Evil, warned new philosophers about philosophies which create such “immediate certainties”, as the Utilitarian philosophy does:

“The belief in “immediate certainties” is a moral naivete which does honor to us philosophers; but – we have now to cease being “merely moral” men!”

Friedrich Nietzsche

Conclusion – If not happiness then what is the goal of being human?

Jeremy Bentham’s philosophy of Utilitarianism and the felicific calculus can help decide what is good for all, but they ignore the presence and worth of personal integrity, the well-being of minorities and the subjectivity of the person caught in the given consequences. By default they eliminate the possibility of humans remaining human beings; instead they treat them as machines maximizing a targeted outcome (pleasure, in this case).

So, the question remains: if we are not meant to maximize pleasure during our tenure in life, because in the end, after death, there will be nothing left to experience or gain happiness from; if our existence and final purpose do not align with being happy, then what exactly is the purpose of being a human being?

Based on my understanding of what many great people have said about the purpose of life, most of them point to remaining the human being you always were. I am not saying that the personality should remain the same; rather, it should change and keep upgrading itself till the end, but the core should remain the same, or at least it should not degrade.

Some wrong events, injustice, oppression and cruelty will make you suffer, but they should not debase your human spirit. Once we let go of the pursuit of happiness and chase the goal of being a better human being (or at least remaining the human being you are), we can fulfill the purpose of our lives and also make other people’s lives better.

Once you let go of such utilitarian, mechanistic setups of morality, you will realize that people don’t need gods, religions, governments or judicial systems to keep right and wrong in check. Our inner compass is more than enough to take care of what makes us human beings. For me this inner compass is not about what is right and wrong; it is about what better version of yourself you would become if you act in a certain way. It takes care of both what you are thinking and what the consequences of your actions would be, thereby resolving the dilemma of a morality that was split between intent and consequences.

I am highlighting the importance of this inner, personal human compass because the rules designed to keep morality in check will always need revision, and the utilitarian philosophy has to wait for the consequences to happen before deciding morality. The goal of the human struggle to improve one’s current version into a better one needs neither of those metrics to decide morality.

Imagine what the world would become if everyone started appreciating this inner human compass!

(For now, we can only imagine, but I am optimistic on this.)        

P.S. –

Even though the Utilitarian philosophy had many flaws, Jeremy Bentham contributed greatly to bringing in new political reforms, improving governance, establishing penal codes in judicial systems, defining sovereignty and reducing the influence of religious institutions on the lives of people and governments. His works were strategically maligned by some lobbies to lessen the impact of his other notable contributions. He was a proponent of liberty and of freedom from religious influence on people’s lives. He pushed for the establishment of a secular educational institute in London, now famously known as University College London. Jeremy Bentham’s fully clothed wax figure, containing his original skeleton, remains in the entrance hall of the university’s main building, as he requested.

Logarithmic Harmony in Natural Chaos

Mathematics is a powerful tool for making sense of randomness, but bear in mind that not every kind of randomness can be handled effectively with the mathematical tools we have at our disposal today. One such tool, Benford’s Law, shows that much of nature works in logarithmic growth rather than linear growth. Benford’s Law helps us make sense of the natural randomness generated around us all the time. It is also one of the first-hand tools used by forensic accountants to detect possible financial frauds. It is a phenomenal part of mathematics that finds patterns in the sheer chaos of the randomness of our existence.

Benford’s Law for natural datasets and financial fraud detection

People can find patterns in all kinds of random events. It is called apophenia. It is the tendency we humans have to find meaning in disconnected information.

Dan Chaon, American Novelist

Is There Any Meaning in Randomness?

We all understand that life without numbers is meaningless. Every single moment, gazillions of numbers are being generated. Even as I type this and as you read it, some mathematical processing is happening in the bits of a computer to make it happen. If we tried to grasp the quantity of numbers being generated continuously, even a lifetime equal to the age of our Universe (13.7 billion years) would fall short.

Mathematics can be described as the art of finding patterns based on a certain set of reasoning: you have certain observations which are always true, and you use these truths to establish bigger truths. Psychologically, we humans are tuned to pattern recognition. Patterns bring predictability, and predictability brings safety, because knowing the future to some extent improves the chances of survival. So, a larger understanding of mathematics in a way ensures better chances of survival, per se. This is an oversimplification, but you get the point.

Right from understanding the patterns in the cycles of day and night, of summer and winter, to the patterns in the movements of celestial bodies and the vibrations of atoms, we have had many breakthroughs in “pattern recognition”. If one manages to develop a structured and objective reasoning behind such patterns, then predicting the fate of any process (present or future) which follows that pattern becomes a piece of cake. The power to see patterns in randomness is thus a kind of superpower we humans possess. It’s like a crude mini time machine.

Randomness inherently means that it is difficult to make sense of a given situation; we cannot predict it effectively. Mathematics is a powerful tool for making sense of randomness, but not every kind of randomness can be handled effectively with the tools we have at our disposal today. Mathematics is still evolving, will continue to evolve, and there is no end to this evolution – we will never know everything there is to know. (This is not just a feeling; it is strongly suggested by Gödel’s incompleteness theorems.)

You must also appreciate that to see the patterns in any given randomness, one needs to develop a totally different perspective. Once this perspective is developed, the thing no longer remains random. So, every randomness is random only until we find a different perspective on it.

So, is there any way to have a perspective on the gazillions of the numbers getting generated around us during transactions, interactions, transformations?

The answer is Yes! Definitely, there is a pattern in this randomness!!

Today we will be seeing that pattern in detail.

Natural Series – Real Life Data       

Take your bank account statement as an example. You will see all your transactions: debit amounts, credit amounts and the current balance in the account. There is no way to make sense of how those numbers were generated; the only logic behind them is that you paid someone a certain amount and someone paid you a certain amount, and the balance is just the net of those transactions. You had some urgency one day, so you spent a certain amount; you once had a craving for cake, so you bought that cake; you were rooting for that concert ticket, so you paid for it; on one bad day you faced an emergency and had to pay the bills to sort things out. Similarly, you did your job, so someone compensated you for those tasks; you kept some funds in deposits, so interest was paid to you; you sold some stocks, so their value was paid to you.

The reason for explaining this example in such detail is to clarify that even though you have control over your funds, you cannot actually control every penny in your account down to the exact number you desire. This is an example of a natural data series. Even though you have full control over your transactions, how your account turns out is driven by certain fundamental rules of debit, credit and interest. The interactions of these accounting phenomena are so intertwined that it ultimately becomes difficult to predict the balance down to the last penny.

Rainfall around the Earth is very difficult to predict with high precision due to the many intermingling and unpredictable events in nature. So, by default, finding a trend in the average rainfall across a given set of places is difficult. Yet deep down we know that if we know certain things about rainfall in some regions, we can make better predictions about other regions, because certain fundamental, predictable laws govern rainfall.

The GDP of a nation (if reported transparently) is also very difficult to pin down to an exact number; we always have an estimate, because many factors affect that final figure. The same goes for population: we can predict roughly how it will grow, but it is difficult to pinpoint the number.

These are all examples of real-life data points generated randomly during natural activities and natural transactions. We know the reasons behind these numbers, but because so many factors are involved, it is very difficult to find a pattern in this randomness.

I Lied – There is A Pattern in The Natural Randomness!

What if I told you that there is a certain trend and reference in the randomness of numbers generated “naturally”? Be cautious – I am not saying that I can predict the market trend of a certain stock; I am saying that the numbers generated in any natural process have a preference. The pattern is not predictive; rather, it only reveals itself once you already have a bunch of data at hand – it is retrospective.

Even though it is retrospective, it can help us identify what was manipulated: whether someone tried to tamper with the natural flow of the process, whether there was a mechanical/ instrument bias in data generation, or whether there was any human bias in the data generation.

Logarithm and Newcomb

Simon Newcomb (1835-1909), a Canadian-American astronomer, once noticed that his colleagues were using the initial pages of the logarithm table more than the other pages. The starting pages of the log tables were more soiled and worn than the later pages.

Simon Newcomb

Log tables were instrumental in number crunching before the invention of any type of calculator. The log tables start at 10 and end at 99.

Newcomb reasoned that the people using log tables for their calculations must repeatedly have more numbers beginning with 1 in their datasets, which is why the initial pages, where the numbers start with 1, were used more. He also knew that the numbers used in such astronomical calculations are numbers occurring naturally. They are not generated at random; they signify quantities attributed to things in nature (like the diameter of a planet, the distance between stars, the intensity of light, the radius of curvature of a planet’s orbit). These were not “cooked up” numbers; even though they seemed random, they had a natural reason to exist.

He published an article about this, but it went unnoticed, as there was no way at the time to justify the observation mathematically. His publication lacked the mathematical rigor to back up his intuition.

Newcomb wrote:

“That the ten digits do not occur with equal frequency must be evident to anyone making much use of logarithmic tables, and noticing how much faster the first one wears out than the last ones.”   

On superficial inquiry, anyone would feel that this observation is biased. It seemed counterintuitive, and Newcomb only reported the observation without explaining in detail why it would happen. So the observation went underground with the flow of time.

Frank Benford and The Law of Anomalous Numbers

Question – for a big enough dataset, how frequently would each digit appear in the first place? What is the probability of each digit from 1 to 9 being the leading digit in a given dataset?

Intuitively, one would think that any digit could happen to be in the leading place for a given dataset, and that if the dataset becomes large enough, all nine digits will have an equal chance of being in first place – about 1 in 9, roughly 11.1% each.

Frank Benford, during his tenure as a physicist at General Electric, made the same observation about the log table as Newcomb had before him. But this time Frank traced back the experiments, and hence the datasets, for which the log table had been used, and added some other datasets from magazines. He compiled some 20,000 data points from completely unrelated sources and found one unique pattern!

Frank Benford

He realized that even though our intuition says that any digit from 1 to 9 could appear as the leading digit with equal chance, “natural data” does not obey that equal chance. The term “natural data” refers to data representing some quantifiable attribute of a real phenomenon or object around us; it is not a random number created purposefully or mechanically, and it has some origin in nature however random it may seem.

Frank Benford thus discovered an anomaly in natural datasets: their leading digit is far more often 1 or 2 than any of the remaining digits (3, 4, 5, 6, 7, 8, 9). In simple words, you will see 1 as the leading digit more often in natural datasets than the rest of the digits, and as we move to higher digits, the chance of seeing them in the leading position keeps dropping.

In simple words, any naturally occurring quantity will have 1 as its leading digit more frequently than the rest of the digits.

Here is the sample of the datasets Frank Benford used to find this pattern:

Dataset used by Frank Benford in his 1938 paper “The Law of Anomalous Numbers”

So, according to Benford’s observations, for any given “natural dataset” the chance of 1 being the leading digit (the first digit of the number) is almost 30%. About 30% of the numbers in a given natural dataset will start with 1, and as we go on, the chances of the other digits appearing frequently drop drastically, meaning that very few numbers in a given natural dataset will start with 7, 8 or 9.

Thus, the statement of Benford’s law is given as:

The frequency of the first digit in a population’s numbers decreases as the value of that first digit increases.

Simply explained, as we move from 1 to 9 as the first digit in a given dataset, the frequency of occurrence keeps reducing.

1 will be the most frequent first digit, then 2 will be frequent but less so than 1, and the frequency keeps reducing and flattening out up to 9. 9 will rarely be seen as the leading digit.

The reason why this behavior is called Benford’s Law (and not Newcomb’s Law) is the mathematical equation that Benford established:

P(d) = log₁₀(1 + 1/d)

where P(d) is the probability that a number starts with digit d, and d can be any digit from 1 to 9.
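Since the formula is so compact, here is a quick sketch (my own, in Python; nothing from Benford’s paper) that prints the theoretical probability for each leading digit:

```python
import math

# Benford's Law: P(d) = log10(1 + 1/d) for leading digit d = 1..9
for d in range(1, 10):
    print(f"leading digit {d}: {math.log10(1 + 1 / d):.1%}")

# The nine probabilities telescope to log10(10) = 1, so they form a proper distribution.
```

Digit 1 comes out at about 30.1% and digit 9 at about 4.6%, which is exactly the skew described above.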

If you look at real-life examples, you will instantly realize how counterintuitive this law is, and yet nature chooses to follow it.

Here are some examples:

I have also attached an Excel sheet with the complete datasets, to demonstrate how simply one can calculate and verify Benford’s law.
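If you prefer code to a spreadsheet, here is a rough Python equivalent of that verification (my own sketch; the short list at the bottom is only a placeholder for whichever dataset you load):

```python
import math
from collections import Counter

def leading_digit(x):
    """Return the first significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_table(numbers):
    """Print observed vs. expected leading-digit frequencies."""
    counts = Counter(leading_digit(x) for x in numbers if x != 0)
    total = sum(counts.values())
    for d in range(1, 10):
        observed = counts[d] / total
        expected = math.log10(1 + 1 / d)
        print(f"{d}: observed {observed:6.1%}   expected {expected:6.1%}")

# Placeholder data: swap in the real series, e.g. the 234 country populations.
benford_table([1_402_112_000, 331_002_651, 83_783_942, 212_559_417, 10_099_265])
```

With only five placeholder values the observed column is meaningless, of course; the fit only emerges once you feed in a full dataset, as the examples below show.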

Population of countries in the world –

The dataset contains the populations of 234 regions of the world, and you will see that 1 appears most often as the first digit. Most of the population figures start with 1 (70 times out of 234) and rarely with 9 (9 times out of 234).

Country-wise average precipitation –

The dataset contains the average rainfall of 146 countries in the world. Again, the same pattern emerges.

Country-wise Gross Domestic Product –

The dataset contains the GDP of 177 countries in USD. See the probabilities yourself:

Country-wise CO2 emissions:

The data contains 177 entries.

Country-wise Covid cases:

Here is one more interesting example:

The quarterly revenue of Microsoft since its listing also shows the pattern of Benford’s Law!

To generalize, we can find the overall trend of all these datasets by averaging, as follows:

This is exactly how Benford averaged his data points to establish a generalized equation.

The theoretical Benford fit is calculated using the Benford equation given earlier.

So here is the relationship graphically:

Now you can appreciate the beauty of Benford’s law: despite seeming counterintuitive, it shows how a seemingly random natural dataset has preferences.

Benford’s Law in Fraud Detection

In his 1938 paper “The Law of Anomalous Numbers”, Frank Benford beautifully showed the pattern that natural datasets prefer, but he did not identify any practical uses of the phenomenon.

1970 – Hal Varian, a professor at the University of California, Berkeley School of Information, explained that this law could be used to detect possible fraud in any presented socioeconomic information.

Hal Varian

1988 – Ted Hill, an American mathematician, found that people cannot cook up numbers and still stick to Benford’s Law.

Ted Hill

When people try to cook up numbers in big datasets, they betray biases toward certain digits; however random the entries may seem, a preference for particular digits shows through. Forensic accountants are well aware of this fact.

The scene where Christian pinpoints the finance fraud [Warner Bros. – The Accountant (2016)]

1992 – Mark Nigrini, a South African chartered accountant published how Benford’s law could be used for fraud detection in his thesis.

Mark Nigrini

Benford’s Law is admitted as evidence to demonstrate accounting fraud in US courts at all levels and is also used internationally to help prove financial fraud.

It is very important to point out the human, psychological factor of a person committing such number fraud. People do not naturally assume that some digits should occur more frequently while cooking up numbers. Even when we generate “random” numbers in our heads, our subconscious preference for certain digits leaves a pattern. The larger the dataset, the more it will lean toward Benford’s behavior and the easier fraud detection becomes.
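To make the idea concrete, here is a rough sketch of how such a first-digit screen might look (my own illustration; the mean-absolute-deviation statistic is commonly used in the forensic-accounting literature, but the exact threshold mentioned below is an assumption, not a legal standard):

```python
import math
import random
from collections import Counter

BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero amount."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def mad_against_benford(amounts):
    """Mean absolute deviation between observed and expected first-digit proportions."""
    counts = Counter(leading_digit(a) for a in amounts if a)
    total = sum(counts.values())
    return sum(abs(counts[d] / total - BENFORD[d]) for d in range(1, 10)) / 9

# Quick self-check on synthetic, Benford-like data (log-uniform over four decades):
random.seed(7)
sample = [10 ** random.uniform(0, 4) for _ in range(10_000)]
print(f"MAD on Benford-like sample: {mad_against_benford(sample):.4f}")

# ledger = [...]  # e.g. thousands of invoice or transaction amounts
# A MAD somewhere above ~0.015 is often treated as a red flag worth a closer look
# (practitioners' thresholds vary; this is a screening aid, never proof of fraud).
```

The point of such a statistic is only to rank which accounts deserve a human investigator’s attention; as noted later, the law never pinpoints the fraud itself.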

Now, I pose one question here!

If a fraudster understands that there is such a thing as Benford’s Law, wouldn’t he cook up numbers which seem to follow it? (Don’t doubt my intentions, I am just a cop thinking like a thief to anticipate the next move!)

So, the answer to this doubt is hopeful!

The data generated in account statements is so huge and spans so many orders of magnitude that it is very difficult for a human mind to cook up numbers artificially and evade detection.

Also, forensic accountants have shown that Benford’s Law is a partially negative rule: if the law is not followed, it is possible that the dataset was tampered with or manipulated; but conversely, if the dataset fits Benford’s law too snugly, there is also a chance that the data was tampered with – someone made sure that the cooked-up data would fit Benford’s Law to avoid suspicion!

Limitations of Benford’s Law

You must appreciate that nature has its own way of preferring certain digits in its creations. Uniformly distributed random numbers generated by a computer do not follow Benford’s Law, thereby revealing their artificiality.

Wherever there is a genuinely natural dataset (with the caveats listed below), Benford’s Law will tend to hold true.

1961 – Roger Pinkham established one important property of any law describing natural datasets, and thereby of Benford’s Law. Pinkham argued that any law which claims to describe the behavior of natural datasets must be independent of scale; that is, any law showing nature’s pattern must be scale invariant.

In really simple words, if I change the units of a given natural dataset, Benford’s law will still hold true. If a set of account transactions in US Dollars follows Benford’s Law, the same amounts expressed in Indian Rupees will still abide by it. Converting Dollars to Rupees is simply scaling the dataset. That is exactly why Benford’s Law is so robust!
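A quick way to convince yourself of this scale invariance is to scale a synthetic dataset and compare the leading-digit profiles (my own sketch; the conversion rate of 83 ₹ per $ is just an assumed round figure):

```python
import math
import random
from collections import Counter

random.seed(42)

def leading_digit(x):
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def digit_profile(values):
    counts = Counter(leading_digit(v) for v in values)
    total = sum(counts.values())
    return [round(counts[d] / total, 3) for d in range(1, 10)]

# Synthetic "transaction amounts" spread over five orders of magnitude;
# log-uniform data like this is approximately Benford-distributed.
usd = [10 ** random.uniform(0, 5) for _ in range(20_000)]
inr = [83 * v for v in usd]   # assumed conversion rate, purely for illustration

print("USD profile:", digit_profile(usd))
print("INR profile:", digit_profile(inr))
print("Benford    :", [round(math.log10(1 + 1 / d), 3) for d in range(1, 10)])
```

Both profiles land on essentially the same numbers and both hug the theoretical Benford line; rescaling the units changes nothing.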

After understanding all these features of Benford’s Law, one might think of it as a weapon of enormous power! So, let us get some clarity on where it fails.

  1. Benford’s Law is reflected in large datasets. A data series with only a few entries will rarely show Benford’s Law. Not just a large dataset, but a wide span of orders of magnitude must also be present for Benford’s Law to apply effectively.
  2. The data must describe the same kind of object. That is, the dataset should cover one feature, like a debit-only dataset, a credit-only dataset, or the number of unemployed people per 1000 people in a population. A mixture of datapoints will not fit Benford’s Law.
  3. There should be no inherently defined upper and lower bounds to the dataset. For example, 1 million datapoints of people’s heights will not follow Benford’s Law, because human heights do not vary drastically; very few people are exceptionally tall or short. This also means that any dataset which follows a Normal Distribution (bell-curve behavior) will not follow Benford’s Law (see the sketch after the next paragraph).
  4. The numbers should not be defined by conscious rules, like mobile numbers which compulsorily start with 7, 8 or 9, or number plates restricted to 4, 8 or 12 digits only.
  5. Benford’s Law will never pinpoint where exactly the fraud happened. There will always be a need for in-depth investigation to locate the event and the location of the fraud. Benford’s Law only tells you whether the big picture holds true.

Hence, the examples I presented earlier to show the beauty of Benford’s Law were purposely selected to avoid these limitations: those datasets have no tight bounds, they span several orders of magnitude, and their range is really wide compared to the number of observations.
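To see limitation 3 in action, here is a small sketch (my own, with made-up parameters) comparing a narrow, normally distributed dataset, such as human heights, against the Benford expectation:

```python
import math
import random
from collections import Counter

random.seed(1)

def leading_digit(x):
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# Heights in cm: mean 170, standard deviation 10 (assumed, typical-looking values).
heights = [random.gauss(170, 10) for _ in range(100_000)]
counts = Counter(leading_digit(h) for h in heights)
total = sum(counts.values())

for d in range(1, 10):
    print(f"{d}: observed {counts[d] / total:6.1%}   Benford {math.log10(1 + 1 / d):6.1%}")
```

Almost every height falls between roughly 130 cm and 210 cm, so the leading digit is 1 nearly all the time and 2 the rest of the time; nothing like the graceful 30%-to-4.6% Benford spread. Bounded, narrow-range data simply has no room to express the law.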

Now, if I try to apply Benford’s Law to the yearly (rather than quarterly) revenue of Microsoft, it looks something like this:

Don’t freak out because the data does not fully stick to Benford’s Law; rather, notice that for the same time window, if the number of datapoints is reduced, the dataset tends to deviate from the theoretical Benford fit. Please also note that 1 still appears as the leading digit very frequently, so good news for Microsoft stockholders!

In the same way, if you look at the datapoints for average temperatures (in Kelvin) country-wise, they will not fit Benford’s Law, because there is no drastic variation in average temperatures across regions.

There are 205 datapoints – big enough – but the temperatures are bound to a narrow range and the span of orders of magnitude is small. Notice that it doesn’t matter whether I express the temperature in degrees Celsius or in Kelvin; the data still refuses to fit.

Nature Builds Through Compounded Growth, Not Through Linear Growth!

Once you get hold of Benford’s law, you will appreciate how nature decides its ways of working and creating. The logarithmic law given by Frank Benford is closely tied to compounded growth (the compound-interest kind of growth). Even though we are taught to think of numbers growing in periodic, linear ways, we are masked from the logarithmic nature of reality. Frank Benford, in the conclusion of his 1938 paper, mentions that our perception of light and sound is on a logarithmic scale (any sound or lighting engineer knows this by default). The growth of the human population, the growth of bacteria and the spread of Covid all follow this kind of exponential growth. The Fibonacci sequence is an exponentially growing series which is observed to be at the heart of nature’s creations. That is why an artificial dataset usually won’t fully stick to this logarithmic growth behavior. (You can use this against machine warfare in the future!) This also strengthens the belief that nature thinks in mathematics: despite seemingly random chaos, it holds a certain predictive pattern at its heart. Benford’s Law is thus an epitome of nature’s artistic ability to hold harmony in chaos!
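As a closing sketch (again my own illustration, not something from Benford’s paper), compound growth produces Benford-like leading digits remarkably quickly; here is a balance growing at an assumed 7% per period:

```python
import math
from collections import Counter

def leading_digit(x):
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# Compound growth: balance_n = 100 * 1.07**n, i.e. 7% growth per period (assumed rate).
balances = [100 * 1.07 ** n for n in range(1, 1001)]
counts = Counter(leading_digit(b) for b in balances)
total = sum(counts.values())

for d in range(1, 10):
    print(f"{d}: compound growth {counts[d] / total:5.1%}   Benford {math.log10(1 + 1 / d):5.1%}")
```

The geometric sequence sweeps through the decades so evenly that its leading digits land very close to the Benford proportions, which is one intuitive reason why quantities built by compounding, such as populations, revenues and prices, tend to obey the law.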

You can download this Excel file to see how Benford’s law can be validated in a simple Excel sheet:

References and further reading:

  1. Cover image – Wassily Kandinsky’s Yellow Point 1924
  2. The Law of Anomalous Numbers, Frank Benford, (1938), Proceedings of the American Philosophical Society
  3. On the Distribution of First Significant Digits, RS Pinkham (1961), The Annals of Mathematical Statistics
  4. What Is Benford’s Law? Why This Unexpected Pattern of Numbers Is Everywhere, Jack Murtagh, Scientific American
  5. Using Excel and Benford’s Law to detect fraud, J. Carlton Collins, CPA, Journal of Accountancy
  6. Benford’s Law, Adrian Jamain, DJ Hand, Maryse Béguin, (2001), Imperial College London
  7. data source – Microsoft revenue – stockanalysis.com
  8. data source – Population – worldometers.info
  9. data source – Covid cases – tradingeconomics.com
  10. data source – GDP- worldometers.info
  11. data source – CO2 emissions – worldometers.info
  12. data source – unemployment – tradingeconomics.com
  13. data source – temperature – tradingeconomics.com
  14. data source – precipitation – tradingeconomics.com

Riding on the ‘Hype Wave’ of Technological Breakthroughs

Countless breakthroughs happen around the globe every day, but very few hold the potential to change the world and the future. Due to our cognitive limitations and biases, we misestimate the impact of emerging innovations in both the near and the long-term future. Amara’s Law and the idea of the Hype Cycle provide insight into how a technology evolves over time from its emergence, and how to spot the technology which truly holds the potential to change the future course of humanity. Understanding the phases in the development of an innovation, and its coherence with reality, can help entrepreneurs, researchers, investors, policymakers and even the common man hold practical expectations of any technological breakthrough.

How to gauge the trend and acceptance of any emerging technology?

The Wheel and The Fire

One of the key factors that set the evolution of humans on a totally separate path from the apes and other species is the invention of tools. From the invention of the wheel and the harnessing of fire, to steam engines, to computers and smartphones, to artificial intelligence – our tools for interacting with the world around us keep getting more and more sophisticated, thereby uplifting our way of life. This is very different from how other species live, interact with the world around them and exist on the Earth – our home, and theirs.

Any sufficiently advanced technology is indistinguishable from magic.

Arthur C Clarke

Imagine you traveled back in time and showed the ancient Egyptians a smartphone. You might explain it to them as a hand-held tablet, built from components made of sand, which can show extremely detailed real-time pictures from places far away, or send your voice and receive a voice from the other side. Even though the Egyptians created some of the most astounding engineering marvels the world has ever seen, a smartphone would still be the equivalent of pure magic for them.

Now, coming back to present times: if you were told that a company has built a device which can teleport you instantly to another planet, what would your reaction be? Though seemingly magical, it is just fiction for us, for practical reasons. We haven’t even teleported an insect from point A to point B to date.

The Think Tanks For The Future

So, there is a limit to how much of any technology’s promise can practically be realized at a given time. Many technological breakthroughs happen all around the world every day, but very few of them actually change the course of humanity. That is why it is very important to identify which technology really holds true potential. The earlier one recognizes the potential of a technology, the faster and firmer a grasp they will have over the world, politics, the economy and society – and thereby, maybe, the whole of humanity.

Many think tanks around the world are invested in predicting future scenarios in global politics, warfare, technological breakthroughs and epidemics. These people are called ‘futurists’, and they strive to predict the long-term future for the sake of readiness, survival and, sometimes, dominance. These futurists have certain tools in their kit which can help us understand and point out which technological revolution will change the course of humanity in the near future. Although we are not experts in all the technologies and breakthroughs happening in the world daily, these tools can help us understand which technology can actually benefit and uplift our daily lives. They can help an innovation manager spot the technology which can differentiate his product in the market, and they can help an entrepreneur select the technology to boost his/her business or startup. They can also help a common man gauge whether a technology claimed by a company can actually make a difference in his life.

There are these questions we keep on asking ourselves when we are dealing with new technologies coming to our doorsteps, into our hands –  

Why does Apple bring technologies to its phones really late, when every other device manufacturer has already made them mainstream and sometimes even obsolete?

Why won’t AI actually take over the world in the near future? Why might EVs face a cold death? Why are flying cars still not a practical, common product, as Back to the Future predicted? Why is teleportation not a reality, but only a magical part of today’s science fiction?

What is the future of the innovations and breakthroughs we are making every day? Is there any way to predict the future value of an emerging technology and build a product around it to create a fruitful venture/ business? How can one invest in a technology with confidence based on its current condition or stage?

Amara’s Law, from the futurist Roy Amara, gives deep insight into the phases every technology goes through, and it also helps us make decisions about a given technology.

Influence Of Technological Breakthroughs On Humanity And Its Future

Roy Charles Amara was the president of the Institute for the Future (IFTF), an American non-profit think tank which works on better prediction of the long-term future. The aims of IFTF include exploring possible futures for the USA and the world, figuring out the preference/ desirability of these futures, and increasing the chances of bringing a desired future into reality by finding ways that support it.

Roy Amara gave a very important insight into how we as human beings perceive the future of any technological breakthrough, and why the masses are wrong about these breakthroughs most of the time.

Amara’s Law

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Roy Charles Amara

In simple words, our expectations (positive or negative) of recent technological breakthroughs are always high, while we are very skeptical that already existing technologies will become revolutionary and mainstream in the future.

Graphical representation of Amara’s law

Take, for example, the LHC experiments started at CERN to better understand subatomic particles. Some scientists speculated that small black holes might form in these experiments and engulf the whole Earth. The same fear lingered around the Trinity test, where some scientists worried that the atomic bomb test might initiate a chain reaction that could ignite the Earth’s entire atmosphere, thereby ending humanity.

But look what happened. Nothing dangerous occurred during the LHC runs; instead, the runs confirmed the existence of the Higgs boson. And although the atom bomb proved to be a truly fatal and formidable invention, the Trinity test in New Mexico did not ignite the Earth’s atmosphere.

In our times, people speculate that AI will take over humanity and rule the world by enslaving everyone. Now look at what ChatGPT responds when you ask it some fundamental philosophical questions. (Although it is excellent at certain tasks, there is still a long way to go before it takes over humanity!) There are examples where AI image generators could not create proper images of human hands, because the orientation of the fingers relative to each other is “confusing” for the image engine. It is also known that biases can creep into an AI model from the sample training data provided. So there is still a long way for AI to catch up with humanity; there is no doubt that AI will totally revolutionize our lives, but in ways we are yet to imagine or grasp (and it has already revolutionized some parts of our lives).

Amara’s Law points out a cognitive limitation in us: it is really difficult for us to predict the non-linear behavior of technology in the coming future. We human beings are very good at predicting linear, incremental, somewhat ideal and constant behavior of things around us. The moment multiple variables and some non-ideality enter these predictions, we make wrong decisions based on our survival instincts. We “overhype” a technology’s potential. For a given technology, some people think it will revolutionize the way things are done and uplift society, while others think it will take people’s jobs and push society into dystopia. Look at what is happening with cryptocurrency and NFTs, the technologies that were supposed to revolutionize the entire world economy. Although blockchain, as an excellent invention, is here to stay, it will change the world in a totally different way than people originally predicted.

Amara’s Law is famously illustrated with an S-shaped curve representing the difference between the anticipated impact and the actual impact of technological breakthroughs.

Gartner Hype Cycle

The Gartner Hype Cycle also throws light on some interesting aspects of the actual, non-linear impact of technologies over time. The idea was developed by the American consultancy firm Gartner Inc., named after Gideon Gartner, who is called ‘the father of the modern analyst industry’. The Gartner Hype Cycle establishes certain phases in the implementation, growth and acceptance of any technology.

These phases are given as follows:

  1. Innovation/ Technology trigger – A new technology is presented to the world, which creates intrigue and sometimes fear in the minds of the masses. Competitors sometimes panic over the probable upcoming uncertainties in their business.
  2. Peak of inflated expectations – As the technology is new, there are very few experts who truly understand it. Hype builds around it due to the insufficient knowledge of the media and communicators, which creates unrealistic expectations among the masses.
  3. Trough of disillusionment – There comes a time when this hyped technology starts being implemented in real life, and the practical limitations keep piling up. Not only practical but also economic and social problems add to the pile. Expectations were already high, and when such failures become apparent to the mass users, the technology hits rock bottom, the cold death.
  4. Slope of enlightenment – After lingering in the abyss of failures, there comes a time when the same technology finds a better purpose; its newer generations become more relevant to people and more practical, and society has evolved enough to accept it as a way of life. From here the technology enters ‘the plateau of productivity’, where its true value and the proper places to implement it are identified and widely accepted.
The Gartner Hype Cycle

Many experts in the industry criticize the Gartner Hype Cycle because it does not provide any instructions or actions to manage these behaviors for an emerging or disruptive technological revolution. One can safely say that the Gartner Hype Cycle gives a generalized view of the acceptance of a new technology.

There is also an Extended Gartner Hype Cycle in which, after the plateau of productivity, the technology loses its value due to diminishing returns to the business over time, which eventually ends in obsolescence – “the cliff of obsolescence”.

Key Takeaways From The Hype Cycle

When a policymaker, an entrepreneur or a manager understands the Gartner Hype Cycle, it will definitely help them make informed decisions which can reduce risks and maybe save many lives.

Being patient and not getting tempted to ride on the hype wave is the first important response.

Updating one’s knowledge of current trends regularly will help in creating fruitful strategies against the hype.

Change is the only constant; embrace it. Adapting to stay ahead with practical technology is the optimal move for any leader/ policymaker.

Taking the long-term viewpoint, from a sustainable perspective and with closeness to reality/ practicality, immediately breaks the illusion of hype.

It is really important for a policymaker, a leader and even a common man to understand that achieving a breakthrough does not guarantee the practical success of the innovation/ technology. An innovation, even one called a breakthrough, has to be practical, relevant and realistic.

The Gartner Hype Cycle can explain why some tech companies wait for a technology to evolve and establish itself before delivering a complete consumer experience. It also explains why many startups that find breakthroughs early fail to deliver in the end, as the hype wave builds unrealistic expectations among investors. ‘Edison’ by ‘Theranos Inc.’, founded by Elizabeth Holmes, is one such example. The company was expected to revolutionize the medical diagnostics industry and was touted to be ‘the iPhone’ of medicine and healthcare. Look what has happened since!

The Internet and GPS (Global Positioning System) are examples of technologies that were supposed to remain military secrets for years, yet eventually became part of everyday life and now influence every part of it. (There was also a time called the dot-com bubble, which reiterates the hype around internet companies!)

Once you understand Amara’s Law and the Gartner Hype Cycle, you can see more clearly how a new technology launched in the market is likely to behave in the near future. It is not just about bringing disruptive innovation to the market; it is also about solving realistic problems and understanding the key pain points of the masses.

Roy Amara’s Futurist Legacy for Predicting Breakthroughs

Any innovation which will truly impact the future should be studied for three main parameters/ premises.

  1. The possible – as breakthroughs are practically ‘trend breakers’, the study of their possibility should involve unconventional approaches that defy formalization/ structuredness. There should also be a human element of intuition which gives a personal touch to such innovations; for example, science fiction authored by a well-versed scientist/ artist who understands its practical limitations today but anticipates that they will be solved in the near future. (The motion-capture technology that evolved during the creation of James Cameron’s Avatar is one good example.)
  2. The probable – defining probability requires understanding ‘what is connected to what’. Which action will increase the chances of a certain event? Thus, the process of quantifying the probability of a breakthrough innovation’s success immediately establishes a chain of reasoning toward its future projection.
  3. The preferable – even if an innovation is possible and most probable, if it is not needed by its time and society then it may well face a cold death. So preferability anticipates the societal, economic and humanistic readiness to accept the innovation. If the innovation has no net positive utility, then it won’t come out as the most probable future. (For example, even though we know that Grade 5 titanium is one of the lightest and strongest materials in the mechanical world, we also know that people can’t afford cars made of it for daily use: the manufacturing costs would be exceptionally high, and they would ultimately be passed on to end users who won’t pay for it, even when it comes to saving their lives for the amount they invest.)

This makes it very clear that an innovation needs to be realistically possible, most probable and most preferable in order to be called a breakthrough which holds the potential to change the course of humanity.

These are exactly the reasons why people over-anticipated the trip to Mars when Elon Musk announced his SpaceX proposition. Now that we are seeing how difficult it is to create a rocket, how many resources and allied innovations are needed, and how many financial and behavioral mindsets need to evolve in order to send a few humans (alive!) to Mars, we are getting hold of the practicality behind sending humans to settle on Mars.

This also explains why the flying cars shown in Back to the Future are not a common reality or way of life today; it will still take time, or maybe it won’t happen at all because of some other breakthrough (like teleportation!). Back to the Future did successfully predict 3D projections, video calling, digital currency and smartwatches, which became possible thanks to their practicality and relevance.

Once you understand Amara’s Law, you can grasp that creating many innovations is not what changes the course of the future; a single innovation which is practical, relevant and realistic is sufficient to change the course of humanity. One can also do so by making innovations implementable in real life; not every breakthrough guarantees an immediate revolution.

“Our nations rely on innovation to improve productivity and fuel economic growth.  But to be competitive, nations and organizations do not necessarily have to excel at originating innovation—they have to be able to apply innovation successfully.”

Mastering the Hype Cycle by Jackie Fenn and Mark Raskino

References and further reading:

  1. Mastering the Hype Cycle – How to Choose the Right Innovation at the Right Time, by Jackie Fenn and Mark Raskino, Harvard Business Press
  2. Here’s Why AI Is so Awful at Generating Pictures of Humans Hands
  3. Why Are AI-Generated Hands So Messed Up?
  4. Views on futures research methodology – An essay by Roy Amara, FUTURES July/August 1991