Often the ideas in our heads start off as unfocused images. As we continue to study our thoughts, things come into view. The ideas start crystallizing out from the melt. The nebulous starts becoming solid. This blog contains those thoughts as they develop and become planets in our minds.
There is no doubt that mental health awareness is an extremely important area of development for wellbeing. When we’re implicitly led to believe by society that mental health is something “only the weak have,” we are unable to address the problem of our own mental health and we will forever be plagued by it. It’s like saying to someone with a broken leg, “only the weak have broken legs,” and “it’s a state of mind, just get over it”: that person will be encouraged to ignore their leg. They’ll be embarrassed to talk about managing it with their friends and they’ll be reluctant to treat it. Their leg will never set correctly and they’ll be walking with an impairment for the rest of their life. All of this is preventable, or at worst, manageable: by recognising that there is a problem that needs to be unashamedly addressed and by taking the appropriate steps to correct it.
However, there’s an issue with mental health awareness that I feel will surface in the next few years. Awareness and appreciation are the first step for society, but they might make things worse before they make things better for individuals with anxiety. “Why?” you ask. Hear me out.
On Polar Bears
Don’t think of a polar bear.
Pretty hard, isn’t it? The more we tell ourselves, “don’t think of a polar bear,” the more we entrench ourselves into a recurring thought process, where we get more and more desperate to not focus on the things we’re focusing on. We focus on what we don’t want to focus on by telling ourselves… not to focus on it. But there is a way to win this game. At least, in the long term.
In the first few seconds of the game, there is no way to win. To understand the game at all, we need to recognise what a polar bear is, so that we know what not to focus on – and to do that, we have to focus, however briefly, on the very thing we’re not supposed to focus on. If we understand the game’s requirements, we lose in those first few seconds by definition.
Once we’ve understood the requirements, though, we can employ a different strategy: just think of anything else instead. Think of an orange tree. Think of a cloudy sky. This is the only way to win the game.
The issue of mental health is harder than the polar bear game. On one hand, we need to address the issue so that we can manage it. And we need to do this for society, not just one individual. This will blur the lines as to when to switch strategies. On the other, the awareness of the issue can compound the problem.
Anxiety is an especially tricky subject. By being aware of our anxiety, we can become anxious about that very awareness of anxiety. Like an electric guitar that picks up the noise from its own amplifier, this can lead to a runaway feedback loop: noise creates more noise, which creates more noise, getting louder and louder until it’s a deafening scream. And mental health awareness days encourage this feedback loop to take hold. So how do we overcome this?
Mindfulness is one great tactic because it uses the same strategy as the polar bear game. Instead of saying, “don’t be anxious” – and getting anxious about being anxious – it reverses the narrative: it makes us focus on how we’re grounded and calm. My personal favourite is visualisation. “Imagine your mind as a crystal-calm lake. It has a mirror finish to its surface. You can let a single raindrop fall and watch the waves flow out from the impact point completely unobstructed.”
I also think we should have a day which celebrates all our positive mental health attributes. At the moment, mental health day focuses on the negative attributes of mental health – and this is necessary as a first step. We can’t win the anxiety game until we become aware of what anxiety is and learn the appropriate steps to manage it. But after that, we need to change our strategy: now that we are aware of our anxiety, we don’t need to focus on it anymore. We can focus on the strategy to overcome it, which includes focusing on the positive opposites of that trait. By celebrating instances where we were calm, instances where we were confident, we remind ourselves that we are able to overcome anxiety. In doing so, this time we’ll create a positive feedback loop: where reminders of the times we were calm will facilitate our ability to be calm in the future… which gives us more reminders.
The Myers-Briggs test is a personality test based on categorising people along four binaries. For example, introversion vs extraversion is one pair, and three more pairs like it cover four different aspects of personality in total. It doesn’t carry much authority in serious psychology because it’s not repeatable (take the test once and it’ll class you as an ENTP; take it again and you may come out a completely different type, like INFJ – in fact, the only personality measure with real weight in psychology, thanks to its repeatability, is the “Big Five”), but it’s still very popular. I think the reason for this popularity is that it gives us insight into the different ways people think, allowing us to see the world in ways that would otherwise go unnoticed to us.
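Since each of the four categories is a binary choice, the test can only ever produce 2⁴ = 16 type codes. A quick sketch of that structure (using the standard letter pairs – note the official instrument labels the fourth pair Judging/Perceiving):

```python
from itertools import product

# The four MBTI binaries: Extraversion/Introversion, Sensing/iNtuition,
# Thinking/Feeling, Judging/Perceiving.
dichotomies = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Every type code is one choice from each pair: 2 * 2 * 2 * 2 = 16.
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))                        # 16
print("ENTP" in types, "INFJ" in types)  # True True
```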
That’s not to say we’re incapable of seeing things from the other person’s perspective; it’s just that we tend to favour one worldview over the other when we can. I also think a given situation affects which side you take within that specific time frame. For example, you might be drawn towards the extraverted side when you’ve not socialised for a while and want your fix (showing signs of both sides has now been termed “ambiversion”). I think this is ultimately the downfall of the test, and why it isn’t repeatable.
Today I’d like to examine the 4th category of the Myers-Briggs test: categorising into either ‘judging’ or ‘prospecting’. I think this category is perhaps the most obvious example of how we need to be capable of reacting to the world as either a ‘judger’ or a ‘prospector’, depending on the situation. If we weren’t capable of doing both, we wouldn’t be able to function in the world. Let’s just look at both categories quickly before jumping in.
Judging is the strategy of making a plan, then carrying that plan out to achieve a goal. Prospecting is the strategy of ‘going with the flow’: being present in the current situation and reacting to it in a way that moves you towards the desired outcome. I’m definitely predominantly a ‘judger’ and have gravitated towards tasks that suit this strategy throughout my life. But I’m always interested in people who prefer the ‘prospecting’ side of things.
Daniel Kahneman won the Nobel prize in Economics for work in this area, describing his findings in the book Thinking, Fast and Slow. If anyone has read the book, the ‘prospecting’ strategy can be loosely thought of as what Kahneman described as “System 1”, and the ‘judging’ strategy can be loosely thought of as “System 2.” Let’s have a look at each strategy in a bit more detail, and see when each one is applicable.
There’s just one problem with people who structure their life predominantly on planning, and Mike Tyson summed it up with surprising eloquence.
“Everyone has a plan ’til they get punched in the mouth” – Mike Tyson
You might have the best-laid plans, but they only work in an environment that either doesn’t change, or that changes in ways that don’t influence the outcome of the plan. If neither of those conditions holds, the ‘judging’ strategy is useless: the plan becomes obsolete before you even start to carry it out. In situations like sports, complex board games (like chess, where there are already 8,902 possible positions after just three half-moves), and even everyday conversations – where you don’t know which way the conversation will go – you need a different strategy. There’s no possible way to play enough games of chess to have a plan for every situation, or to have a plan for every possible conversation. So what can we do?
In Thinking, Fast and Slow, Kahneman described heuristics as a way to deal with a constantly changing environment. Heuristics are quick “rules of thumb” or “common sense”: rules that apply to a general class of situation (even when the specifics differ), or that react only to the sub-section of the environment with the most impact while ignoring the rest (which may still matter, but less). In real life, this means trusting yourself to recognise the important factors in the current environment and making a good snap decision, drawing on having dealt successfully with similar situations in the past. Chess computers lean on heuristics too: Deep Blue combined deep search with hand-tuned evaluation heuristics – scoring positions on features like material and king safety rather than playing every line to its end – and in 1997 became the first computer to beat the reigning world champion, Garry Kasparov, in a match (it had taken a game off him in 1996). The trick is knowing which aspects of the board are important: that is the spine of the evaluation.
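As a toy sketch of that “compare against a library of past cases and reuse what worked” idea – the features, cases and actions below are entirely invented for illustration, not any real engine’s data:

```python
# A minimal case-based heuristic: represent each situation as a small
# feature vector, find the most similar stored case, and reuse the
# action that worked there. Everything else about the environment is
# deliberately ignored - that's the heuristic part.

def similarity(a, b):
    """Count how many features two situations share."""
    return sum(1 for x, y in zip(a, b) if x == y)

# Library of (situation features, action that worked) pairs.
library = [
    ((1, 0, 1, 1), "defend king"),
    ((0, 1, 1, 0), "trade queens"),
    ((1, 1, 0, 0), "push pawn"),
]

def choose_action(situation):
    # React only to the features we track; pick the action from the
    # most similar past case.
    best_case = max(library, key=lambda case: similarity(case[0], situation))
    return best_case[1]

print(choose_action((1, 0, 1, 0)))  # closest case is (1, 0, 1, 1)
```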
Another element of sports is to ingrain a specific movement until it becomes almost unconscious: an unconscious competence. This way, when the time comes, we perform it automatically. There is no planning, just unconscious reaction. The introduction to Meditations says of the philosopher Marcus Aurelius:
“But Marcus Aurelius knows that what the heart is full of, the man will do. ‘Such as thy thoughts and ordinary cogitations are’, he says, ‘such will thy mind be in time’. And every page of the book shows us that he knew thought was sure to issue in act. He drills his soul, as it were, in right principles, so that when the time comes, it may be guided by them. To wait until the emergency is to be too late”
Finally, one strategy for working in a prospecting regime is simply to be quick-witted. This works better for some than others (and I’m jealous of those it works well for!)
In the workplace, there are multiple methodologies that work from a prospecting strategy. The most popular is the software engineering methodology Agile, which has risen to prominence over the last 10 years. Agile has become so popular because customer requirements change frequently, yet the code that satisfies those requirements can take a long time to write. To overcome this, Agile is an interesting hybrid between a prospecting mindset and a judging mindset. It works from a set of principles designed to get the code written as quickly as possible during periods known as ‘sprints’. During each sprint, the code structure is planned and then written based on the requirements (this is the judging bit). After each sprint, the code is compared to the requirements and another small sprint is planned as a reaction to the new environment (this is the prospecting bit).
Some tasks, however, are just too complex to be undertaken with a prospecting strategy – especially engineering projects that take many years. Countless companies have gone bankrupt by not planning well enough in the beginning phases, only to find half-way down the line that they overlooked a critical element – say, that sub-components designed by different departments aren’t compatible (one example of countless) – forcing them to scrap the whole project and start again. By the time they’re half-way through the second iteration, the project is years late and the company has run out of money.
Of course, we can’t plan for the things we don’t know we don’t know (unknown unknowns), but we can do many other things that bring possible complications to the surface, so that we can plan for them accordingly. The first step in solving a problem is realising there’s a problem.
A popular engineering methodology that uses the “judging”/planning mindset is Systems Engineering, where the emphasis is placed on the beginning stages of a project: requirements are clearly set, with each department involved, and plans are made for how to fulfil them. In this way – and by basing the planning on aspects covered in previous successful projects – unknown unknowns can be minimised through a number of different micro-strategies, and mid-project problems can be mitigated.
I could write more about the different ways the “judging” strategy is useful, but many books have already been written about it (countless systems engineering handbooks and project management training books, for example) and, really, I think it’s the less interesting of the two. I feel the “prospecting” mindset is usually overlooked, especially among engineers like myself. Engineers who are used to planning don’t recognise the skill involved in being a proficient “prospector”; as a result, we tend to underestimate the skills needed in departments like technical sales, where a lot of the work is done in conversation, quickly understanding how to react to different environments… which is why I find the prospecting perspective so interesting! Ultimately, though, we need to understand and be proficient at both mindsets, and apply each accordingly to each situation.
This blog post relies on you knowing what the 4 stages of competence are. There are loads of articles written about this model, so I’m not going to repeat it here. You can try the Wikipedia page if you don’t know what it is.
Moving from unconscious incompetence (UI) to conscious incompetence (CI) is a sign that you’re starting to grasp the full extent of work needed to gain mastery in a skill. It’s a good thing: you now understand what needs to be worked on and developed so that you can gain mastery. However, I’ve realised that there’s something else that happens to me – and a lot of other people – when we step from UI to CI.
When we move from UI to CI, we get disheartened. I’ve seen this happen time and time again with people who understand certain domains of skill to an adequate level. Take professional sports, for example. The newcomers look at table tennis and think, “I can totally hit that ball with that paddle (yeah, guys, it’s called a paddle, not a bat, fyi) like those dudes on TV.” They’re excited to try something new and they believe they can gain mastery if they work hard. The guys who have already established themselves look at these newcomers derisively, knowing that the newcomers have vastly underestimated the work needed to achieve mastery. “You think you can become the best?” they ask in their heads. “Ha. No chance.”
Fast forward to having spent 100 hours in that domain: the newcomer is now 10x more skilful, but also 10x more disheartened. They haven’t progressed as quickly as they expected and they now realise the extent of the work ahead. It’s as if they’re going backwards, psychologically: the more hours they put in, the less confident they become in their belief of gaining mastery.
That discouragement sends them into a downward spiral. They become demotivated to do any more training, so they stop training, and then they stop getting better. Then they’ll probably tell themselves, “I wasn’t cut out for this.”
It’s deeply ironic: they stop becoming more skilled in an area precisely because they gained enough skill in that area to comprehend what it takes to succeed.
A lot of different psychological effects are flavours of this same transition. Imposter syndrome is one, currently being thrown around by a lot of people starting their first jobs. The Dunning-Kruger effect is another. But what can we do about it?
Obviously, you should talk to an expert, not some engineer writing a blog who has a casual interest in psychology. But if you were to ask some engineer writing a blog who has a casual interest in psychology, I would say a good strategy is the combination of two things: 1. being aware of how far you’ve already come and celebrating that, 2. a bit of self-aware self-delusion. Hear me out.
Awareness of how far you’ve come
You’ve got to stage two of four along the competency track. You realise now just how much better the experts are than you. But at least you realise that now.
Being aware of your awareness is something to celebrate. You can now see more clearly which areas you need to develop, and you understand the issues better. You’ve come a long way in being able to see the remaining barriers that stand in your way: before, you were blind to them; now you can see them. The first stage in solving a problem is understanding that you have a problem, so this is a great first step. It’s always good to recognise this and congratulate yourself.
I also wonder how many happiness strategies are frowned upon because they’re not virtuous: i.e. list ways that you’re better than someone. You might not be as good as Federer in tennis, but at least you can beat your neighbour. You’ve come far enough to beat him (even if he’s 87 and plays with a walking stick in one hand and a racket in the other). It’s probably not good to consistently look for people who you’re better than in a certain skill, but maybe it’s more healthy than consistently comparing yourself to the greats/how far you still have to go. I’d say: do it, but only infrequently… and keep it to yourself.
A bit of self-aware self-delusion

Okay, so I made this sound a little bit dramatic. Really, though, this strategy is just visualisation: visualising the result you want and imagining what it would be like to have already achieved it. I haven’t read the book The Secret, but I’ve heard that this idea is what the book is about. From what I’ve heard, it takes the strategy too far by saying, “use this strategy and all your dreams come true,” ignoring competency and other problems that would require different strategies applied in parallel – which puts me off the book. Visualisation is one thing we do in conjunction with a number of other useful strategies. It’s not a get-rich-quick scheme. But I digress.
There will be times when you’re not that good at something: when you try something new. At the moment, I’m in the middle of writing a short story. My writing is far from that of Hemingway. What helps me is imagining my name next to the likes of J.K. Rowling. I try not to lose sight of what I want to do: to have a short story that people will actually want to read as much as Harry Potter. I can see all the areas where my story is bad: poor pacing, I get too science-y/technical in a lot of places (= boring for 90% of people), sentence construction is just weak etc. But I try to ignore all that for the majority of the time, do my best at just tweaking the most immediate issue, then imagine that the rest is gold. Or at least it can be. Because the prospect of how far I still have yet to go – for me – is paralysing.
Being aware of the 4 stages of competence allows us to articulate our discouragement: it gives us a model to understand what’s going on and why we feel paralysed. Finally, we can take one step further back. By realising that I’m aware of the 4 stages… I can choose what to do about it. And if I choose to actively ignore it and pretend that I’m a great writer, I can live in my happy delusional bubble to help ride out the bumpy, demotivating transition from conscious incompetence to conscious competence.
These are all just my own strategies for trying to stay motivated. But I know there are others! What do you guys do? & What do you think of mine?
With awareness of mental illness on the rise these days, I think that anxiety is getting a bad rep. Hear me out.
Obviously, being overly-anxious and perceiving everything as scary and negative is bad. It means that every event is riddled with potential bad outcomes. It makes us think that everyone else hates us and is talking behind our backs. This mentality restricts our ability to see the world reasonably. We make illogical decisions based on an overly-negative worldview. Plus, it’s not very fun to live in that mental environment.
However, sometimes people actually don’t like us in real life. Sometimes actions do carry risks with them. And anxiety makes us aware of these negative points.
Confidence, on the other hand, has the opposite effect. With confidence, we believe that we’re likeable and that the events that happen to us will be, on the whole, positive. While this is a lovely state of mind to be in, it doesn’t necessarily represent how events actually play out over a lifetime. And if we’re over-confident, we become completely blind to the negative aspects of the world that we need to be aware of to make good decisions. We’ll be totally taken in by our certain belief that things will go well in our life. As Voltaire said, “doubt is not a pleasant condition, but certainty is absurd.”
We need to be adequately aware of the risks and adequately aware of the gains to make good decisions. We need aspects of anxiety to bring awareness of the negative parts of life. In the same way, we need aspects of confidence to bring awareness of the positive parts of our life… as well as to inspire others’ confidence in us and to support our general mental wellbeing.
Obviously, anxiety is bad. But I fear that with the demonisation of anxiety in today’s culture, we might fail to recognise its uses as a mental state. Our ancestors evolved with anxiety so that we didn’t confidently walk up to pet the cuddly-looking lion.
I guess the trick is, like so many things, to find a good balance. Too much confidence or anxiety is bad. Oscillating between over-confidence and anxiety is bad (and you’ll probably be diagnosed with bipolar disorder). But get the balance just right, and you can hold on to positive beliefs while still being aware of negative possibilities.
I had a harrowing decision to make the other day. I recently cancelled my Audible subscription, and I had one final token to buy any book with. But which book to buy? I wanted to get the best bang for my buck – this final one was going to have to last me a while. So naturally, I gravitated towards The Complete Works Of Sherlock Holmes… which is 77 hours long. It’s also narrated by Stephen Fry, which is always good.
After listening to A Study In Scarlet, the first book in the series, two aspects of Arthur Conan Doyle’s writing drew my attention. Both were grounded in the idea of knowledge, the perception of knowledge, skills, and the perception of skills – i.e. what we know, what we can do, and how we perceive these possessions and processes. Part of the magic of Holmes is how he describes what it’s like to see the world through his eyes. But what do his eyes see? If we split this up into skills and knowledge, we can compare how Sherlock – and, by proxy, Doyle – perceived the world against contemporary psychology.
“It was easier to know it than to explain why I knew it. If you were asked to prove that two and two made four, you might find some difficulty, and yet you are quite sure of the fact. Even across the street I could see a great blue anchor tattooed on the back of the fellow’s hand. That smacked of the sea. He had a military carriage, however, and regulation side whiskers. There we have the marine. He was a man with some amount of self-importance and a certain air of command. You must have observed the way in which he held his head and swung his cane. A steady, respectable, middle-aged man, too, on the face of him—all facts which led me to believe that he had been a sergeant.”
“Wonderful!” I ejaculated.
“Commonplace,” said Holmes, though I thought from his expression that he was pleased at my evident surprise and admiration.
What’s interesting about this is that Holmes describes pretty accurately what happens when you’ve reached the fourth stage of skill competency in the four-stages-of-competence model: “unconscious competence.” An example of unconscious competence is the skill of putting our clothes on in the morning – how often do we get up and start dressing, thinking about the day ahead, only to look down and find we’ve completed the action 30 seconds later? We’re so used to putting our clothes on in the morning that we do it completely on autopilot, and at this point it would be slower to explain how we’re doing it than to actually do it.
What makes Holmes’s remark even more interesting, though, is that Doyle wrote A Study In Scarlet in 1887. The psychological model of the “Four Stages of Competence” wasn’t developed until the 1970s – or at least, that’s when it was popularised and codified as a model, usually attributed to Noel Burch at Gordon Training International. It clearly existed as a concept in Doyle’s head long before it was elucidated as a psychological theory.
So that’s how Holmes views skills… how does he perceive knowledge?
“You see,” he explained, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skilful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”
The character of Sherlock becomes even more interesting when we analyse how he perceives the acquisition of knowledge. He acknowledges that there are skills we master to the point of unconscious competence, yet he doesn’t seem to acknowledge the equivalent in knowledge: the unknown known. The unknown known encompasses the things we know that we don’t realise we know: common sense falls under this category. Well, for some people. For others, common sense falls under the category of ‘unknown unknown’… moving on…
The thing is, a lot of the knowledge we need to traverse life only ever gets learnt tacitly, organised away somewhere in our brains as unrealised knowledge. We need unknown knowns to function and solve problems, and it’s important to understand that a lot of the knowledge we use to solve problems was never explicitly intended to be learnt in the first place. If we could travel into the future and know every aspect of a case, we could go back in time and work out exactly what we needed to learn in order to solve it. But life isn’t like that: it throws you a problem, and if you’ve equipped yourself with the knowledge and skills beforehand, you’re good; if not, you’d better learn fast. There might be a case for Sherlock in the future that hinges on the notion of a heliocentric solar system. Sure, it’d be convoluted, but it could still happen, and because of that, Holmes isn’t correct in disregarding this new information.
There is one aspect of what Holmes says that I agree with, though. Now that he’s learnt of a different model of the Solar System, he’s already spent the time listening to it, so he might as well commit it to memory. But when planning what to learn, it’s important to be very careful about what the most valuable skill or piece of knowledge for our situation in life will be, and to learn that first. If we’re not careful, we could spend an entire lifetime acquiring useless information and skills that we never get the chance to apply in the environment we’re situated in. Take Biology class in secondary school, for example. How are we applying that knowledge? But hey, at least when the government asks me how it is that I still don’t know tax law at the age of 27, after botching my Deliveroo self-assessment, I can tell them, “the mitochondria is the powerhouse of the cell”.
And I won’t even get into the other aspect of what Holmes said, regarding how we have finite space in our minds with which to hold information…
“The world is a dangerous place. Not because of those who do evil but because of those who look on & do nothing” – Mr. Robot S01E02 (even though this is a pretty common mentality echoed by thousands of others)
It was at this moment, watching Mr Robot, that I really thought about the consequences of those words. I’ve always agreed with the sentiment to a certain extent. When we see evil and allow it to happen, we implicitly condone it. We’re evil through association, based on our inaction against it. But what would happen if everyone took action based on their beliefs?
And what would happen if not everyone could agree on what to believe: what the best plan of action was for each situation?
We would have a scenario where whatever one side did, the other side would tear down. If everyone had this mentality, we would see the ultimate destruction of any progress that humanity tried to make – especially as our views on the big questions are so divided. The Brexit vote: 48–52%. The Trump/Clinton presidential race: about the same.
If we all held this view, humanity would tear itself apart, because what we each believe is the right course of action differs with our individual values and our individual knowledge. The right course of action – the judgment of good and bad, evil and righteous – becomes subjective.
After all is said and done, is it better to allow what our subjective idea of evil is to exist, just so that we can facilitate human progress – whether we’re going in the right direction or not?
At first glance, the two statements might seem to be the same: if you’re a winner, you’re not a loser. However, the two statements actually ask for something different. ‘Not being a loser’ describes actions that mitigate the negative aspects of your life. ‘Being a winner’ describes actions that add value to your life.
Don’t Be A Loser
How many times have you gotten into an argument with someone, only for both parties to end up angry and resentful, having gained no value from the argument? Imagine an angry couple going through a divorce, each side spending all of their money on lawyers in the hope of “being victorious” over the other. These wars are only destructive. At the end of them, you’ll only ever have less than you started with.
Perhaps something more relatable: how about those times we’ve mindlessly flicked through Facebook or Instagram for a few hours? What have we gained? Nothing. What have we lost? Time – arguably our most valuable asset. These mindless time-stealers are destructive in the most insidious way: they make us downplay our time’s value. “There will be more time tomorrow.” Yet – unlike money, which we can always acquire more of – we only ever have a finite amount of time, no matter what we do.
These different forms of destruction and loss are paths we choose (or fail to see that we choose) towards ‘being a loser’. By cultivating good habits and being mindful of our actions – by learning how not to be a loser – we can avoid them. But mitigating the negative only stops us from being in deficit. We are still only at neutral. We may have kept our time and our money, but we need to know how to use them in a way that adds value.
Be A Winner
‘Being a winner’ contains all the obvious actions we associate with doing well in life. It’s getting the girl. It’s scoring the winning goal in the last minute. It’s becoming a genius millionaire. It’s putting thousands of hours into a discipline to become an expert in that field (or hours into learning social skills to get the girl, or hours of honing our sporting ability every tiring evening to score the goal). These are the things people focus on, a lot of the time. These are the things that take us from neutral to ‘winning’.
The Monty Hall Problem
The idea of “being a winner” and “not being a loser” can even be applied to mathematics and probability. More accurately, it can be applied to our perception of probability, so that we don’t fall into logical traps – so that we can see a problem from one more perspective, allowing us to see the outcomes more clearly.
“Let’s Make A Deal” was an American game-show popular in the 1960s, hosted by a famous guy called Monty Hall. Contestants could come onto the show and go home with the car of their dreams. Monty would present each contestant with three doors. Behind one of the doors lay your dream car. Behind the other two lay ‘zonks’: objects you didn’t really want.
So, Monty gave you a choice of the three doors. Let’s say you chose door number 1. You have a 1/3 chance of having picked the correct door. Monty knows which doors contain the ‘zonks’. Now, Monty goes up to one of the other two doors and opens it, to reveal a zonk. He turns to you and says:
“Do you like door number one? Or do you want to switch? There’s only one other door to choose from now.”
What would you do? You had a 1/3 chance to pick the correct door. You picked. A door was effectively removed. Now you have a 1/2 chance between the two remaining doors, right?
Do you like that door you picked? Is that door a winner?
For years, contestants agonised over this question. There was speculation about the correct strategy, but no definitive, well-known answer ever emerged during the show’s run.
Well, how about we see the door from the other perspective. Do you dislike that door?
To answer the question from this new perspective, let’s start over. You’ve got a 2/3 chance of picking the incorrect door. That means that you have a 2/3 chance of picking a door with a zonk. Monty knows which doors contain the zonks. So if you pick a zonk, he is forced to open the other door which contains the remaining zonk. The remaining door that hasn’t been picked by you nor opened by Monty, therefore, has a 2/3 chance of containing the car. How much do you dislike your door now?
Clearly, the best option is to switch. This statistics problem was popularised during the 90s when a reader’s letter was quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine. Amazingly, many readers of vos Savant’s column refused to believe switching is beneficial despite her explanation. After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them claiming vos Savant was wrong. Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy. Paul Erdős – one of the most prolific mathematicians in history, holding the record for the largest number of published mathematical papers at around 1,500 – remained unconvinced until he was shown a computer simulation demonstrating the predicted result.
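If you’d like to convince yourself the way Erdős was convinced, here is a minimal simulation of the game (a sketch of my own – the function names are mine, not from any particular library):

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Monty opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ≈ 0.333
print(f"switch: {play(switch=True):.3f}")   # ≈ 0.667
```

Running it shows the stay strategy winning about a third of the time and the switch strategy about two thirds – exactly the 1/3 vs 2/3 split described above, and nowhere near the intuitive 1/2.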
I was reminded yesterday of a friend who, a few years ago, didn’t get the grades she needed. She was distraught.
“I’m a failure!” she cried. I’ve always had an aversion to calling someone a failure. It’s so absolute. Just as when someone annoys you, you shouldn’t say, “you’re annoying,” but rather, “you’re annoying me right now”. The first is absolute; the second is temporary. You don’t get annoyed by their very presence (well, I hope no-one has that effect on you). The annoyance comes and goes.
“You’re not a failure, you’ve just failed this time around,” I replied. I was told that I was an optimist (dubious). I didn’t say anything back, but I wish that I had replied that, rather, I was just a rationalist.
We are born without any knowledge bestowed upon us – other than the unconscious neural patterns and cognitive biases that allow us to survive as a baby (breathing, shitting, forming attachments with our mothers), which could arguably be called processes rather than knowledge. As we get older, we accumulate more and more knowledge. After a time, we start testing that knowledge. Sometimes we pass, sometimes we fail. But should we blame ourselves for the knowledge we haven’t yet come across and acquired?
Yes, sometimes all of the knowledge is laid out in front of us, and the reason we fail is simply that we didn’t acquire it fast enough. But even then, the skill of acquiring knowledge needs to be taught. We need to learn how to learn. And each of us may have different preferred ways of learning. Can we be blamed for not having come across a method that suits our style?
I believe that all we can do is to try. Focus our attention at different levels: work at learning how to learn, then on learning. This is a balancing act; there’s no point planning our revision strategies if, by the time we’ve finished planning, we’ve given ourselves no time left to actually revise. Yet still, all we can do is try. When we fail, we highlight our absence of knowledge in certain areas. Failure is a useful learning experience that will help us succeed in the future. With each failure, our chances of being a winner in the future increase, as long as we learn from the failure.
Ultimately, if you don’t buy a ticket, you can’t win. The only time we are ever an absolute failure is when we give up. Because every individual instance of when we fail is an opportunity to glean one more hint into how to succeed next time.
The following post is my argument for why we can’t know how long it will take to develop AGI. It is based on various assumptions, so I’d be interested to read any feedback you have on it!
Explanation of Terms: AGI & ANI
There are three broad forms of artificial intelligence: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI). This post concerns AGI; however, ANI is also needed to explain where we are, so I will briefly explain both… by copying and pasting the explanation used on WaitButWhy:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
The Explosive Growth
Knowledge – and access to that knowledge – is a key component for any AGI, and the internet is used for precisely that reason. The internet has completely changed how humans interact with their daily environment compared to twenty years ago. No-one back then imagined culture would change so much due to our newly found connectivity. No doubt it will be fundamental in how AGIs interact with their environment in the future.
We’ve also made ANIs that can drive cars, beat humans at Go, exhibit creativity in a specific field by creating music or art, and even learn – through neural networks – how to play a game of Mario:
There is even a computer program that has passed the Turing test (well, convinced 33% of its judges): Eugene Goostman. And tech companies like Google are getting into serious bidding wars over experts in the field of AI. That must mean we’re close, right?
The Naive Optimism
In the 1950s, thanks to scientific advances from Turing, von Neumann et al., the scientific community started looking more and more into the prospect of AI, and the field of AI was born.
There was huge optimism with regards to the rate of growth within AI development. In 1958, H. A. Simon and Allen Newell stated: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.” We know, with hindsight, that the former prediction took four times longer than expected – with Deep Blue beating Kasparov in 1997 rather than 1968 – and the latter prediction is still yet to be realised.
Then, in even more optimism, the following claims that have yet to succeed were made:
In 1965, H. A. Simon stated: “machines will be capable, within twenty years, of doing any work a man can do.”
In 1967, Marvin Minsky said: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
And in 1970, Marvin Minsky (in Life Magazine), claimed: “In from three to eight years we will have a machine with the general intelligence of an average human being.”
Why were these great scientists so far off the mark? Can we predict just how long it will take us to achieve AGI?
“When eating an elephant take one bite at a time.” – Creighton Abrams
A standard technique when faced with a large task is to break it down into smaller chunks. Many British readers will remember this strategy from BBC Bitesize, which helped us use it to study for our GCSEs. The technique has become so ubiquitous that it shapes the way we view tasks: when we see a large one, many of us unconsciously break it up into smaller, more manageable chunks. This lens is constantly reinforced, as examples of its application are everywhere: the categorisation of subjects, the sciences, and even the hierarchical nature of companies are founded on this idea. Large tasks are split into smaller chunks, and the chunks are categorised and handed to experts in the related fields: legal tasks to lawyers, the understanding and creation of complex objects to engineers, the selling of objects and services to the sales team, and the job of keeping people and workloads on track, prioritised and organised to managers.
So how does this all affect our perception of how close we are to AGI? Well, naturally, scientists will break the task down into easier chunks. In their minds, these chunks might look something like this:
1. Create an artificial neuron (= the transistor). Check (But not really. We’ll re-explore this later)
2. Connect millions of neurons together so that it forms something similar to a neural network (= a modern day computer, which contains more transistors than a fruit fly contains neurons). Check.
3. Write the artificial equivalent of the “software of nature”. This field is growing today and is called Machine Learning. No check yet.
So we’re pretty close, right? We’ve got 2 out of 3 tasks complete. Well, no, not really. At this stage, many people could be – and probably are – assuming that each task is roughly the same size in complexity, and thus takes roughly the same time to complete. But we just don’t know. The problem is even bigger than that: we don’t know what we don’t know to figure out the rest of the puzzle. Let me explain…
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.” – Donald Rumsfeld
What on earth is Rumsfeld talking about? Well, actually, he’s talking about something very valid. Psychologists would call it metacognition: our ability to perceive and make judgments on our knowledge and skills. There are many models that have been created from metacognition, such as the 4 Stages Of Competency and the Dunning-Kruger Effect. If we take the phrase ‘unknown unknowns’, the first ‘unknown’ would relate to our perception of our knowledge, and the second ‘unknown’ would relate to whether we hold that specific packet of knowledge.
There are a few attributes associated with each category, but the main one of interest is that unknown unknowns, by their very nature, cannot be measured: we have no way of gauging the size of knowledge we haven’t even perceived. I think we’re starting to see the problem.
Two Paths To Innovation
There are two main paths to take when creating something new.
1. The joining of two already existing sub-components – which had never been joined before – to create something new
The wheel has been around for a while. The suitcase has also been around for a while. But a wheely suitcase? It took us humans until the 1970s – dragging heavy suitcases around for every holiday – before one clever inventor finally came up with the novel idea of putting the wheel and the suitcase together.
In this situation, all we need to do is get the experts on each component part talking about how the relationship between the sub-components will work. If it’s a simple amalgamation, maybe not even that. For example, to build a wheely suitcase, we might need a designer – to make it aesthetically pleasing – and an engineer – to make it structurally and physically sound (and to add the wheels). Maybe also a production engineer to find the correct tools and develop a production line for the product. All of the knowledge required already exists, which makes it easier to predict how long development might take. We just need to find the people with that knowledge in their brains and combine it in the novel way.
2. Research and development: applying new science to create new technology
This is the trickier route to new products. But where the barrier to entry is greater, the incentive may also be larger: in monetary terms, but also in prestige and personal sense of achievement. This is also the form of invention that AGI development takes when it tries to replicate the brain.
Replicating the brain to create an AGI requires a two-pronged strategy. The first prong is further research into the mechanisms of the brain, so we can better understand how it works: this will create the blueprint for an artificial brain. The second is further development of artificial brain parts (computer chips), so we have better and better devices with which to emulate a brain. The two paths start at opposite ends of the room, but as each side develops, they get closer and closer together… until finally, at some time in the future, they meet: the point at which we have the knowledge and skills to create an AGI by replicating the brain. However, the distance left to cover will remain unknown until it has been covered. It’s like seeing light at the end of a tunnel: we can see the end, but no matter how far we travel, the light seems just as far away as before. Then, before we realise it, we can make out shapes beyond the end of the tunnel, we keep walking, and we suddenly find ourselves in the outside world with the wind on our faces.
The Hindsight Bias
Just a quick one before moving on: it’s worth touching upon the hindsight bias. It is usually applied to history, where it seems very obvious which factors contributed to the events that unfolded. It seems clear, in hindsight, that certain financial instruments and sub-prime mortgages contributed massively to the financial crash of 2008. Was an imminent crash obvious at the time? No, of course not. To look back at history is to remove all the noise and see pure cause and effect.
In the same way that the hindsight bias applies to history, ‘development bias’ (I just made this one up) applies to the production of a new product. In hindsight, it is clear how much complexity was involved in steps 1 and 2 of the AGI task list, but the path ahead for step 3 remains very unclear. It is easy to see how to create an object once the path has been set out. During development, however, there is a multitude of attractive-looking paths, and we never know which one will be successful until we have walked it.
Create an artificial neuron (= the transistor). Uncheck.
We humans work by analogy. An electric circuit is a water pump: the water represents electrons being pumped around the circuit. The brain’s neurons are logic gates: they can either be on or off.
All analogies and models have thresholds at which they break down, however. Our analogies for the brain are arguably getting stronger: from clay and water in the Greek era, to mechanical machines during the industrial revolution, to the computer chip today. But we still underestimate the complexity of the brain by over-simplifying it to an analogy we can understand, and this has tricked us into believing we can seamlessly replace a neuron with a transistor. A neuron can be turned ‘on’ by an electrical pulse greater than a certain voltage, but this ‘on’ state is temporary. A neuron has no way of storing information in an on-off state as a transistor does: the two components operate by completely different mechanisms. A computer as we currently know it will never be able to operate like a brain. This means we only have 1 of the 3 boxes checked, and we’ll have to traverse the lands of unknown unknowns in search of our artificial neurons.
The Application of Unknown Unknowns to the AGI Task Hierarchy
“Fail fast, fail forward” – Silicon Valley Mantra
Tasks that have never been undertaken before are unknown unknowns. We know roughly how long it will take to undertake tasks that others have achieved before, because we can gauge how long they took. With unknown unknowns, however, all we can do is fail forward: work in a way that moves us from not perceiving what we don’t know to perceiving it. We must be cartographers of our own map of knowledge, sketching out in ever more detail the areas we’ve acquired and the areas we haven’t. The areas yet to be discovered are dark and potentially limitless in size.
This brings us to task no. 3 for creating AGI. We just don’t know how long it will take us to replicate a human brain, because of what we have yet to achieve. We still understand only a tiny fraction of the brain’s complexity, and machine learning is still in its relative infancy. We might argue that, based on our current understanding of quantum mechanics and our ability to manipulate matter at the nanoscale, we’re fairly close to understanding the brain – but that assumes we already have the broad strokes filled in. There might be a whole scientific discipline we have yet to master before we understand the inner mechanics of the human brain, and we don’t even realise we don’t know it yet.
It seemed like a small step, back in 1960, from transistors to AGI. With hindsight, we can clearly see there was still so much to learn that the scientists of the time hadn’t even perceived. Looking back, the path to AGI has been far longer than anticipated. It might be tempting to think again that the step from where we are now to AGI must be small – our success almost palpable. But in reality, we just don’t know how far we have yet to go to make an AGI perform in the same way as the human brain.
Yet just because our brain performs as an AGI, this doesn’t mean all AGIs need to behave like a brain. To induce – from seeing a swan that is white – that all swans are white, is a fallacy. Are there other ways to create an AGI?
More Than One Way To Skin A Cat
We can use task break-down again, but this time to break down the steps an AGI takes to perform tasks. If we break down each task that an AGI (a human) performs far enough, we start to see that even the most complex tasks are combinations of many simple processes – sub-tasks that individual ANIs could perform.
This bottom-up ANI amalgamation is the strategy Google employs in hopes of achieving an AGI. A single ANI is fairly limited by itself in scope, but it can perform a specific task. Maybe it can defeat the best chess players, or maybe it can just tell you what time it is. Hook this ANI up to the internet, though, and it can be used when needed. If a second layer is placed over the top of all these ANIs – a layer which is able to assess the task, then access and apply the appropriate ANIs to complete this task – we have just made our first AGI.
With this form of AGI, we may be able to class the invention as the joining of many objects that already exist – ANIs – to form something new: the AGI. The top layer of the AGI, which judges the task and accesses each ANI, would still need to be developed, but even this would be more of an adaptation of existing technology. A prediction could not be made with complete accuracy – there are still areas outside the analyst’s view – yet if the number of ANIs necessary to form a functioning AGI can be counted, and the rate at which the missing ANIs can be created can be estimated, a prediction could be made. Perhaps the next question is: can we calculate how many ANIs are needed before an AGI is created?
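The layered idea can be sketched in a few lines. To be clear, this is a toy of my own invention – the skills, names and matching rule below are all hypothetical, not Google’s actual architecture – but it shows the shape: narrow ‘skills’ (ANIs) registered under task descriptions, with a top-layer dispatcher that matches an incoming task to a skill and applies it.

```python
from datetime import datetime

def chess_move(board):
    """Stand-in for a chess-playing ANI."""
    return "e2e4"

def current_time(_):
    """Stand-in for a clock ANI."""
    return datetime.now().isoformat()

# The registry of narrow intelligences, keyed by task description.
ANIS = {
    "play chess": chess_move,
    "tell the time": current_time,
}

def dispatch(task, payload=None):
    """The 'top layer': assess the task, pick the matching ANI, apply it."""
    for description, skill in ANIS.items():
        if description in task.lower():
            return skill(payload)
    raise ValueError(f"no ANI registered for task: {task!r}")

print(dispatch("Could you tell the time?"))
```

Each new ANI added to the registry widens what the dispatcher can handle – which is exactly the ‘crawling into view’ picture of an AGI assimilating more and more narrow skills.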
To Be Continued…?
So maybe the first AGI won’t pop into existence. It will crawl slowly into view, assimilating more and more ANIs until it is able to do any task that a human can do. No doubt, in time, it will continue to grow different ANI limbs in all directions, until it has completely surpassed humanity.
I’m all for “following your heart” sometimes. But sometimes… we just shouldn’t. There have been a few instances recently where I’ve relied on my intuition – my feelings – to make decisions, and after weighing up these decisions in hindsight, I’ve realised they weren’t the best options. I’d like to share these instances with you to highlight how easy it is to make flawed decisions.
1. I went to a conference recently to learn about vacuum gauges, vacuum pumps, how to create a vacuum, etc. It was given by one of the experts in the industry (who was high up in the hierarchy of a vacuum company), so I thought it would be worthwhile attending. Before the talk, I brushed up on my knowledge with a PDF I found on the internet, published by the company this guy worked for. It was pretty comprehensive – 134 pages (I didn’t read all of it) – and it explained everything I needed to know. So I read this PDF for a few hours and then set off to the conference. When I sat down and the expert started talking, I realised he was using slides taken almost directly from the PDF. Still, this information was now coming from an expert: he would fill in a few interesting bits here and there, right? And I didn’t know the PDF inside out. At the end of the day, I felt like I had got a lot done: I had gone into Glasgow Uni and made it to a vacuum conference to listen to an expert talk. Productive day!
In hindsight, though, it took about an hour to get from my room to the conference hall (so two hours travelling there and back). Commuting somewhere might feel like getting things done – “I’m moving forward!” – but it’s really a cost you accept because the value of the thing you’re travelling to is greater than that cost. Parking cost £4.40. And I listened to the lecture for about 1.5 hours. So that’s 3.5 hours of my time for 1.5 hours of learning. Compare this to the time I could have spent learning the same information at home – 3.5 hours for 3.5 hours of learning – and it starts looking like I should have stayed at home. The weird thing is, I don’t feel like I get much done if I’ve just sat at my computer all day. It doesn’t have the gravitas of telling yourself, “I went to a conference to listen to an expert.” The problem is that I’ve attached some inherent value to “listening to an expert,” where really there is none. The value is in the content the expert can get into our heads – yet I somehow feel that because this guy is an expert, he can teach me more than just a PDF. And the conference makes it seem important: I’m an important person among all these other important businessmen, gleaning knowledge from an expert. You can’t feel that important sitting at your computer at home, reading a PDF.
2. My route home can go one of two ways: a longer route where I can drive more quickly, or a shorter route through a road with huge speed-bumps that I need to take at around 5 mph. I always go the fast route: it just feels like I’m getting home quicker – I’m going faster, after all!
I recently found out, though, that the speed-bump route is quicker. Even with this new information, I still sometimes don’t believe it and go the route where I can travel quicker – it just feels quicker.
I’ve even heard of people, waiting for a late bus, who walk to the next bus stop so that they can catch the same bus further down the line – the walk just makes them feel like they’ll get to their destination quicker.
Our feelings and intuitions can be sneaky. Maybe the sneakiest thing about them is that, because these decisions are made on feelings, we don’t consciously realise we’ve made them. We don’t rationally examine the decision-making, because the case has been opened and closed within the intuitive realm. It’s only when we catch ourselves making such decisions, or look back in hindsight, that we can see them.