Often the ideas in our heads start off as unfocused images. As we continue to study our thoughts, things come into view. The ideas start crystallising out from the melt. The nebulous starts becoming solid. This blog contains those thoughts as they develop and become planets in our minds.
I was reminded yesterday of a friend who, a few years ago, failed her exams. She was distraught.
“I’m a failure!” she cried. I’ve always had an aversion to calling someone a failure. It’s so absolute. Just as when someone annoys you, you shouldn’t say, “you’re annoying”; you might say, “you’re annoying me right now”. The first is absolute, the second is temporary. You don’t get annoyed by their very presence (well, I hope no-one has that effect on you). The annoyance comes and goes.
“You’re not a failure, you’ve just failed this time around,” I replied. I was told that I was an optimist (dubious). I didn’t say anything back, but I wish I had replied that I was, rather, just a rationalist.
We are born without any knowledge bestowed upon us – other than the unconscious neural patterns and cognitive biases that allow us to survive as babies (breathing, shitting, forming attachments with our mothers), which could arguably be called processes rather than knowledge. As we get older, we accumulate more and more knowledge. After a time, we start testing our knowledge. Sometimes we pass, sometimes we fail. But should we blame ourselves for the knowledge we haven’t yet come across and acquired?
Yes, sometimes all of the knowledge is laid out in front of us, and the reason we fail is simply that our rate of knowledge acquisition wasn’t high enough. But even then, the skills of acquiring knowledge need to be taught. We need to learn how to learn. And each of us may have different preferred ways of learning. Can we be blamed for not having come across a method that suits our style of learning?
I believe that all we can do is to try. Focus our attention at different levels: work at learning how to learn, then on learning. This is a balancing act; there’s no point planning our revision strategies if, by the time we’ve finished planning, we’ve given ourselves no time left to actually revise. Yet still, all we can do is try. When we fail, we highlight our absence of knowledge in certain areas. Failure is a useful learning experience that will help us succeed in the future. With each failure, our chances of being a winner in the future increase, as long as we learn from the failure.
Ultimately, if you don’t buy a ticket, you can’t win. The only time we are ever an absolute failure is when we give up. Because every individual instance of when we fail is an opportunity to glean one more hint into how to succeed next time.
The following post is my argument for why we can’t know how long it will take to develop AGI. It is based on various assumptions, so I’d be interested to read any feedback you have!
Explanation of Terms: AGI & ANI
There are three broad forms of artificial intelligence: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI). This post concerns AGI; however, ANI is also needed to explain where we are, so I will briefly explain both… by copying and pasting the explanations used on WaitButWhy:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
The Explosive Growth
Knowledge – and access to that knowledge – is a key component of any AGI, and the internet is used for precisely that reason. The internet has completely changed how humans interact with their daily environment compared to twenty years ago. Twenty years ago, no-one imagined culture would change so much thanks to our new-found connectivity. No doubt it will be fundamental in how AGIs interact with their environment in the future.
We’ve also made ANIs that can drive cars, beat humans at Go, exhibit creativity in a specific field by creating music or art, and even learn – through neural networks – how to play a game of Mario.
There is even a computer that has passed the Turing test (well, passed it 33% of the time): Eugene Goostman. And tech companies like Google are getting into serious bidding wars for experts in the field of AI. That must mean we’re close, right?
The Naive Optimism
In the 1950s, thanks to scientific advances from Turing, von Neumann et al., the scientific community started looking more and more into the prospect of AI, and the field of AI was born.
There was huge optimism with regards to the rate of growth within AI development. In 1958, H. A. Simon and Allen Newell stated: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.” We know, with hindsight, that the former prediction took four times longer than expected – with Deep Blue beating Kasparov in 1997 rather than 1968 – and the latter prediction is still yet to be realised.
Then, with even more optimism, the following claims were made – none of which has yet come true:
In 1965, H. A. Simon stated: “machines will be capable, within twenty years, of doing any work a man can do.”
In 1967, Marvin Minsky said: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
And in 1970, Marvin Minsky (in Life Magazine), claimed: “In from three to eight years we will have a machine with the general intelligence of an average human being.”
Why were these great scientists so far off the mark? And can we predict how long it will take us to achieve AGI?
“When eating an elephant take one bite at a time.” – Creighton Abrams
A standard technique when faced with a large task is to break it down into smaller chunks. Many British readers will remember BBC Bitesize helping us use this strategy to study for our GCSEs. The technique has become so ubiquitous that it shapes the way we view tasks: when we see a large one, many of us unconsciously break it up into smaller, more manageable chunks. Applying this lens to knowledge is constantly being reinforced, because examples of its application are everywhere: the categorisation of subjects, the sciences, even the hierarchical nature of companies is founded on this idea. Large tasks are split into smaller chunks, the chunks are categorised by their nature and then given to experts in the related fields: legal tasks to lawyers, the understanding and creation of complex objects to engineers, the selling of objects and services to the sales team, and the task of keeping people and workloads on track, prioritised and organised to managers.
So how does this all affect our perception of how close we are to AGI? Well, naturally, scientists will break the task down into easier chunks. In their minds, these chunks might look something like this:
1. Create an artificial neuron (= the transistor). Check. (But not really – we’ll revisit this later.)
2. Connect millions of neurons together so that it forms something similar to a neural network (= a modern day computer, which contains more transistors than a fruit fly contains neurons). Check.
3. Write the artificial equivalent of the “software of nature”. This field is growing today and is called Machine Learning. No check yet.
So we’re pretty close, right? We’ve got 2 out of 3 tasks complete. Well, no, not really. At this stage, many people could be – and probably generally are – assuming that each task is roughly the same size in complexity, and thus needs roughly the same time to complete. But we just don’t know. The problem is even bigger than that: we don’t know what we don’t know about the rest of the puzzle. Let me explain…
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.” – Donald Rumsfeld
What on earth is Rumsfeld talking about? Well, actually, he’s talking about something very valid. Psychologists would call it metacognition: our ability to perceive and make judgements about our own knowledge and skills. Many models have been built on metacognition, such as the Four Stages of Competence and the Dunning–Kruger effect. If we take the phrase ‘unknown unknowns’, the first ‘unknown’ relates to our perception of our knowledge, and the second relates to whether we hold that specific packet of knowledge.
Each category has its own attributes, but the main one of interest here is that unknown unknowns, by their very nature, cannot be measured: we cannot gauge the size of knowledge we don’t yet perceive. I think we’re starting to see the problem.
Two Paths To Innovation
There are two main paths to take when creating something new.
1. The joining of two already existing sub-components – which had never been joined before – to create something new
The wheel has been around for a while. The suitcase has also been around for a while. But a wheely suitcase? It took us humans until the 1970s – dragging heavy suitcases around on every holiday – before one clever inventor finally came up with the novel idea of putting the wheel and the suitcase together.
In this situation, all we need to do is get the experts on each component together to talk through how the new sub-components will relate to one another. If it’s a simple amalgamation, maybe not even that. For example, to build a wheely suitcase we might need a designer – to make it aesthetically pleasing – and an engineer – to make it structurally and physically sound (and to add the wheels). Maybe also a separate production engineer to find the correct tools and develop a production line for the product. All of the knowledge required already exists, which makes it easier to predict how long the product might take to develop. We just need to find people who already have the knowledge in their heads and combine it in the novel way.
2. Research and development: applying new science to create new technology
This is the trickier route to new products. But where the barrier to entry is greater, the incentive may also be larger: in monetary terms, but also in prestige and the personal sense of achievement. This is also the form of invention AGI development takes when we try to replicate the brain.
Replicating the brain to create an AGI requires a two-pronged strategy. The first prong is further research into the mechanisms of the brain, so we can better understand how it works; this will provide the blueprint for an artificial brain. The second is further development of artificial brain parts (computer chips), so we have better and better devices with which to emulate a brain. The two paths start at opposite ends of the room, but as each side develops, they get closer and closer together… until finally, at some time in the future, they meet: the point at which we have the knowledge and skills to create an AGI by replicating the brain. However, the distance left to cover remains unknown until it has been covered. It’s like seeing light at the end of a tunnel: we can see the end, but no matter how far we travel, the light seems just as far away as before. Then, before we realise it, we can make out shapes beyond the end of the tunnel, we keep walking, and we suddenly find ourselves in the outside world with the wind on our faces.
The Hindsight Bias
Just a quick aside before moving on: it’s worth touching on the hindsight bias. This is usually applied to history, where it seems very obvious which factors contributed to the events that unfolded. It seems clear, in hindsight, that certain financial instruments and sub-prime mortgages contributed massively to the financial crash of 2008. Was an imminent crash obvious at the time? No, of course not. To look back at history is to remove all the noise, and to see just pure cause and effect.
In the same way that the hindsight bias applies to history, ‘development bias’ (I just made this one up) applies to the production of a new product. In hindsight, it might be clear how much complexity was involved in steps 1 and 2 of the AGI task list, but the path ahead for step 3 remains very unclear. It is easy to see how to create an object once the path has been set out. During product development, however, there are a multitude of attractive-looking paths, and we never know which one will be successful until we have walked it.
Create an artificial neuron (= the transistor). Uncheck.
We humans work by analogy. An electric circuit is a water pump: the water represents electrons being pumped around the circuit. The mind’s neurons are logic gates: they can be either on or off.
All analogies and models have thresholds at which they break down, however. Our analogies for the brain are arguably getting stronger: from clay and water in ancient Greece, to mechanical machines during the industrial revolution, to the computer chip in our current era. But we still underestimate the complexity of the brain by over-simplifying it to an analogy we can understand, and this has tricked us into believing we can seamlessly replace a neuron with a transistor. A neuron can be turned ‘on’ by an electrical pulse greater than a certain voltage, but this ‘on’ state is temporary. A neuron has no way of storing information in an on-off state the way a transistor does: the two components operate by completely different mechanisms. A computer, as we currently know it, will never be able to operate like a brain. This means we only have 1 of the 3 boxes checked, and we’ll have to traverse the lands of unknown unknowns in search of our artificial neurons.
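The difference can be sketched in a few lines of code. Below is a toy leaky integrate-and-fire neuron – all the numbers (leak rate, threshold, pulse size) are invented for illustration, not biology – showing that a neuron’s ‘on’ state is a momentary event that decays away, rather than a held value like a transistor latch:

```python
# A minimal leaky integrate-and-fire sketch (illustrative numbers, not biology).
def simulate(pulses, steps=10, leak=0.5, threshold=1.0):
    """Return a list of True/False 'fired' flags, one per time step."""
    potential = 0.0
    fired = []
    for t in range(steps):
        potential = potential * leak + pulses.get(t, 0.0)  # leak, then add input
        if potential >= threshold:
            fired.append(True)
            potential = 0.0  # reset after firing: the 'on' state is momentary
        else:
            fired.append(False)
    return fired

# A single strong pulse at t=0 fires the neuron once; with no further input,
# the potential leaks away and the neuron stays 'off' - unlike a transistor
# latch, which would hold its state indefinitely.
print(simulate({0: 1.5}))
```

A transistor-based memory cell, by contrast, would report ‘on’ at every step after the pulse until explicitly cleared, which is the mismatch the paragraph above describes.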
The Application of Unknown Unknowns to the AGI Task Hierarchy
“Fail fast, fail forward” – Silicon Valley Mantra
Tasks that have never been undertaken before are unknown unknowns. We know roughly how long tasks that others have achieved before will take, because we can gauge how long others took to do them. With unknown unknowns, however, all we can do is fail forward: work in a way that moves us from not perceiving what we don’t know towards perceiving it. To be cartographers of our own map of knowledge, sketching out in ever more detail the areas we’ve acquired and the areas we haven’t. The areas yet to be discovered are dark and potentially limitless in size.
This brings us to task no. 3 for creating AGI. We just don’t know how long it will take us to replicate a human brain, because of what we have yet to achieve. We still understand only a tiny fraction of the brain’s complexity, and machine learning is still in its relative infancy. We might argue that, based on our current understanding of quantum mechanics and our ability to manipulate matter at the nanoscale, we’re fairly close to understanding the brain – but that assumes we already have the broad strokes of what’s involved filled in. There might be a whole scientific discipline we have yet to master before we understand the inner mechanics of the human brain, and we don’t even realise we don’t know it yet.
It seemed like a small step, back in 1960, from transistors to AGI. With hindsight, we can clearly see that there was still so much to learn that the scientists of the time hadn’t even perceived yet. Looking back, we can see the path needed to walk for the development of AGI has been far longer than previously anticipated. It might be tempting to think again that the step from where we are to AGI must be small: our success is almost palpable. But in reality we just don’t know how far we still have yet to go to make an AGI perform in the same way as the human brain.
Yet just because our brain performs as an AGI, this doesn’t mean all AGIs need to behave like a brain. To induce – from seeing a swan that is white – that all swans are white, is a fallacy. Are there other ways to create an AGI?
More Than One Way To Skin A Cat
We can use task break-down again, but this time to break down the steps an AGI takes to perform tasks. If we break each task that an AGI (a human) performs down far enough, we start to see that even the most complex tasks are just combinations of many simple processes – sub-tasks that individual ANIs can each perform.
This bottom-up ANI amalgamation is the strategy Google employs in hopes of achieving an AGI. A single ANI is fairly limited by itself in scope, but it can perform a specific task. Maybe it can defeat the best chess players, or maybe it can just tell you what time it is. Hook this ANI up to the internet, though, and it can be used when needed. If a second layer is placed over the top of all these ANIs – a layer which is able to assess the task, then access and apply the appropriate ANIs to complete this task – we have just made our first AGI.
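The ‘second layer’ idea above can be sketched as a toy dispatcher that routes tasks to narrow skills. Everything here – the class, the keyword-matching, the example skills – is invented for illustration; a real system would need vastly more sophisticated task assessment than substring matching:

```python
# A toy sketch of the 'second layer': a dispatcher routing tasks to ANIs.
# All names and the keyword-routing scheme are hypothetical illustrations.
from typing import Callable, Dict

class Dispatcher:
    def __init__(self):
        self.skills: Dict[str, Callable[[str], str]] = {}

    def register(self, keyword: str, skill: Callable[[str], str]):
        """Attach a narrow skill (an 'ANI') under a trigger keyword."""
        self.skills[keyword] = skill

    def handle(self, task: str) -> str:
        # 'Assess the task': pick the first skill whose keyword appears in it.
        for keyword, skill in self.skills.items():
            if keyword in task.lower():
                return skill(task)
        return "no ANI available for this task"

dispatcher = Dispatcher()
dispatcher.register("time", lambda task: "12:00")   # a clock ANI
dispatcher.register("chess", lambda task: "e2e4")   # a chess ANI
print(dispatcher.handle("what time is it?"))  # routed to the clock ANI
```

The coverage of such a system grows one `register` call at a time, which is exactly the ‘crawl slowly into view’ picture the post ends on: generality emerges from the breadth of the skill table, not from any single skill.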
With this form of AGI, we may be able to class the invention as the joining of many objects that already exist – ANIs – to form something new: the AGI. The top layer of the AGI, which judges the task and accesses each ANI, would still need to be developed, but even this would be more of an adaptation of existing technology. A prediction could not be made with complete accuracy – there are still areas outside the analyst’s view – yet if the number of ANIs necessary to form a functioning AGI can be counted, and the rate at which the missing ANIs can be created can be estimated, a prediction could be made. Perhaps the next question is: is it possible to calculate how many ANIs are needed before an AGI is created?
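The shape of that prediction is simple arithmetic. Every number below is a made-up assumption purely to show the form of the estimate – the whole argument of this post is that we don’t actually know these inputs:

```python
# Back-of-envelope sketch of the ANI-counting prediction.
# All three inputs are invented assumptions, not measurements.
anis_needed = 10_000    # assumed: ANIs required for a functioning AGI
anis_existing = 1_500   # assumed: ANIs already built
anis_per_year = 400     # assumed: rate at which new ANIs can be created

years_remaining = (anis_needed - anis_existing) / anis_per_year
print(f"~{years_remaining:.1f} years")
```

The estimate is only as good as its inputs; with unknown unknowns, it is `anis_needed` itself that we cannot count.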
To Be Continued…?
So maybe the first AGI won’t pop into existence. It will crawl slowly into view, assimilating more and more ANIs until it is able to do any task that a human can do. No doubt, in time, it will continue to grow different ANI limbs in all directions, until it has completely surpassed humanity.
I’m all for “following your heart” sometimes. But sometimes… we just shouldn’t. There have been a few instances recently where I’ve relied on my intuition – my feelings – to make decisions, and after weighing up these decisions in hindsight, I’ve realised they weren’t the best options. I’d like to share these instances with you to highlight how easy it is to make flawed decisions.
1. I went to a conference recently to learn about vacuum gauges, vacuum pumps, how to create a vacuum, etc. The talk was given by one of the experts in the industry (someone high in the hierarchy of a vacuum company), so I thought it would be worthwhile attending. Before the talk, I brushed up on my knowledge with a pdf I found on the internet, published by the company this expert worked for. It was pretty comprehensive – 134 pages (I didn’t read all of it) – and it explained everything I needed to know. So I read this pdf for a few hours and then set off to the conference. When I sat down and the expert started talking, I realised he was using slides almost directly taken from the pdf. Still, this information was now coming from an expert: he would fill in a few interesting bits here and there, right? And I didn’t know the pdf inside out. At the end of the day, I felt like I had got a lot done: I had gone into Glasgow Uni and made it to a vacuum conference to listen to an expert talk. Productive day!
In hindsight, though, it took about an hour to get from my room to the conference hall (so 2 hours travelling there and back). Commuting somewhere might feel like getting things done – “I’m moving forward!” – but really it’s a cost you accept because the value of the thing you’re travelling to is still greater than that cost. Parking cost £4.40. And I listened to the lecture for about 1.5 hours. So that’s 3.5 hours of my time for 1.5 hours of learning. Compare this to the amount of time I could have spent learning at home – 3.5 hours of time for 3.5 hours of learning – and it starts to look like I should have stayed at home. The weird thing is, I don’t feel like I’ve got a lot done if I’ve just sat at my computer all day. It doesn’t have the gravitas of telling yourself, “I went to a conference to listen to an expert.” The problem is that I’ve attached some inherent value to “listening to an expert,” when really there is none. The value is in the content the expert can get into our heads. I somehow feel that because this guy is an expert, he can teach me more than a mere pdf. And the conference makes it all seem important. I’m an important person among all these other important businessmen, gleaning knowledge from an expert. You can’t feel that important sitting at your computer at home, reading a pdf.
2. My route home can go one of two ways: a longer route where I can drive more quickly, or a shorter route along a road with huge speed-bumps that I need to take at around 5 mph. I always go the fast route: it just feels like I’m getting home quicker – I’m going faster, after all!
I recently found out, though, that the speed-bump route is quicker. Even with this new information, I sometimes still don’t believe it and take the route where I can travel faster – it just feels quicker.
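It comes down to time = distance / speed, which intuition is bad at. The distances and speeds below are invented to illustrate how a shorter route with a slow stretch can still win:

```python
# time = distance / speed, in minutes. All figures are hypothetical.
fast_route_miles, fast_route_mph = 5.0, 30.0
slow_route_miles = 2.0
bump_miles, bump_mph = 0.5, 5.0    # the crawl over the speed-bumps
rest_miles, rest_mph = slow_route_miles - bump_miles, 30.0

fast_minutes = fast_route_miles / fast_route_mph * 60
slow_minutes = (bump_miles / bump_mph + rest_miles / rest_mph) * 60

print(round(fast_minutes), round(slow_minutes))  # 10 vs 9: the bumpy route wins
```

The fast route feels quicker because speed is what we perceive moment to moment; total time is what actually matters, and we never experience it directly.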
I’ve even heard of people who, while waiting for a late bus, walk to the next stop so that they can catch the same bus further down the line. The walk just makes them feel like they’ll reach their destination sooner.
Our feelings and intuitions can be sneaky. Maybe the sneakiest thing about them is that, because these decisions are made on feelings, we don’t consciously realise we’ve made them. We don’t rationally examine the decision-making, because the case has been opened and closed within the intuitive realm. It’s only when we catch ourselves making them, or look back in hindsight, that we can see them.
How many people, do you think, go on Facebook, Instagram or YouTube in a mindless trance: clicking and scrolling through their news feeds without any real reason for doing so? What percentage of Facebook clicks does this behaviour account for? 50%? 70%? 90%? What has created this behaviour? Do they do it because they’ve unconsciously formed a habit that gives them a little “dopamine hit” each time they check their newsfeeds? I don’t believe so. I believe they do it to distract themselves from feelings of insidious, unrealised depression. But why are they depressed, I hear you ask? Because they live in a consumerist world that continually reaffirms what they don’t have, rather than what they have. Every advert, slogan and piece of marketing tells us that we lack something which would make us happy.
Rather than spend our time with distractions – the pills we take to try to hide our symptoms – we would be better off spending time to treat the cause.
If, when our minds are still and we have nothing left to distract ourselves with, we ruminate on the things we don’t have, of course we will be unhappy. We think of the things we don’t have, and we tell ourselves we’re unhappy. And then, when we’re unhappy, we remind ourselves why: because of the things we don’t have. As long as we’re in this cycle of thinking, we will forever be unhappy.
The trick to ending unhappiness isn’t in trying to halt our journey towards unhappiness. That always brings us back to the reasons we’re unhappy, with two solutions: trying to ignore them (distractions), or satisfying them (but there will always be more reasons waiting on the sidelines to replace those we’ve just satisfied). The trick to ending unhappiness is to start our journey towards happiness: to start remembering all the things in our lives we’re grateful for, all the things that make us happy.
Humans are silly creatures. We’re always focusing on things we don’t have. We create plans – some of which span years of our life – to try to attain things that we haven’t got. We spend years of our lives, building companies and working in companies to work towards some great vision that resides in our own heads. And when those plans are successful, we have more plans. More incomplete goals.
If we’re constantly focused on our progression, it’s easy to miss what we’ve achieved. If we’re constantly reminded of what we lack, it’s easy to become blind to what we have.
I’ve been sporadically returning to the same book for 10 years now, and every time I read it, I learn something new. Why? Because the context with which I read the book has changed.
If you present a skyscraper to an ape, he will see a big rectangular object. If you present the same skyscraper to a human architect, she will see the steel material choice for structural support, the stresses in each of the joins, the beautiful glass exterior that shrouds the object.
What is the difference between the ape and the architect? They are looking at the skyscraper with different context. Years ago, I read the book and I understood parts of it. I could see the shape of the object. But I was still an ape. Then I went out and saw the world, I gained more context to revisit the object and see it in a new light. Every few years I revisit the book and view it in a new way. With greater context, I can see the nuances of the book. I tend towards the architect. We can never truly become a complete “architect” (this is where the analogy breaks down a bit), because we will forever have wisdom to be added to our context with which we see the world. Still, it’s refreshing to think that we wake up (or live, moment to moment, depending on your increment) each day to view the world in a different way than yesterday.
I recently had a drone – which I had spent many hours building – stolen from me. Not very fun. But it has also been an opportunity to revisit the feeling of loss, the feeling of negativity towards the world that comes with loss, and the process of working through these feelings. You can treat it like a story – a journey through the mind as different cognitive destinations are reached. These have been condensed into stages… and the result can be found below. I don’t seem to align well with the more classical “five stages of grief”.
1. Feelings of negativity towards the rest of humanity (generalisation). Twenty minutes ago I was flying around, stopping (just in case I flew too close) whenever some dog-walkers came past. I would smile at them and they would smile back. Now I look at the same people with a frown, thinking, “are you the thief?” When anyone could have been the one to wrong you, it’s easy to look at everyone as if they possibly did. To lose faith in everyone, because you decide the only reasonable action is to trust no-one.
2. Feelings of negativity towards a small sub-section. After a few minutes of thief-generalisation and a loss of faith in humanity at large, I came to another conclusion: even though I don’t know who did it, that doesn’t mean I can’t trust anyone. Not everyone would steal the drone, only a select few. It’s important not to project the behaviour of a few dicks onto the rest of humanity. Just because – sadly – there is no huge neon sign above each dick who is sneakily integrated into the rest of society doesn’t mean I should imagine everyone with a neon sign over their head.
3. Choosing not to dwell on negativity. I enjoy reading books regarding our relationship with ourselves. Some people give these books labels associated with a lot of questionable attributes, such as “self-help” or “spirituality”. Daniel Goleman has recently, and successfully, rebranded them under a more respectable (and, for sceptics and scientists, approachable) category: Emotional Intelligence. Goleman teaches that emotionally intelligent people have learnt to control the amount of time they spend ruminating: that is, the time they spend reliving negative emotions that don’t lead to anything fruitful. One of my favourite ‘spiritual’ teachers, Anthony de Mello, goes into it more deeply. He says, “understand that the feeling is in you, it is not in reality. No person on earth has the power to make you unhappy. Only no-one told you that. They told you the opposite. Rain washes out a picnic. Who’s feeling negative, you or the rain?” Negative feelings happen due to our judgement of the event and our expectations. I was emotionally attached to my drone, so having it torn away from me is a painful and unexpected removal. I want my toys back. I expected to be able to keep the toys that I bought myself. But reading de Mello allows me to re-frame the experience in a different light. I can enjoy the picnic while it lasts, but if it unexpectedly rains, there’s no point being angry at the rain because it didn’t do what I expected (and hoped). Note that I’m not saying I should allow myself to be a victim, or that I shouldn’t make efforts to mitigate the rain with, say, an umbrella (I hope you’re still somehow following this analogy). Nor does it mean that we, as a society, should stop trying to minimise the number of instances of ‘the rain’. But, regardless of all our efforts, some events are out of our control. And after all is said and done, does this excuse the behaviour of a thief? No. Which leads to step no. 4.
4. Revisiting negativity. I’ll be honest, a lot of my teenage years were spent thinking (read: ruminating) about the motives of dicks. I was compelled to know: why did they do it? I probably could have had a happier childhood. But at the same time, there’s something to be said for working through a painful thought process for the purposes of insight. I have huge respect for people who choose to tackle these problems head-on (by going into professions such as mental health, nursing, etc.) instead of putting their heads in the sand under the excuse of “not ruminating”. The conclusion I’ve come to is that everyone, to a certain extent, can be selfish and thoughtless towards others from time to time. The intensity and frequency of these events vary between individuals. But the law of nature is like this as a whole: the lion doesn’t consider the gazelle’s feelings. The lion considers whether he selfishly wants to eat or not. Humans have, to a large degree, been able to transcend this primal natural law by creating a society with its own set of laws. It’s really amazing how malleable the human mind is: to see it come from primitive ‘lion’ origins and then, over the years, be hammered into viewing the world through the laws of society. And we can see this in kids, because kids are dicks. The kid who punches another kid in the face because he’s getting more attention. The cruel and thoughtless remarks. Kids stealing possessions from other kids, “because he wanted it.” But then, over the years, something magical happens. We can slowly watch as the once-dickish kids are hammered more and more into society’s vision of how to behave, and amazingly, the number of dicks gets smaller and smaller.
Indeed, to unconsciously see theft in such a negative light after revisiting how far we’ve come as a species is a testament to our ability to relabel a once “natural” action as “wrong” (I hesitate to call it a “natural” action, because the word “natural” is associated with positive traits. There is nothing positive about theft. Our ability to adhere to the less “natural” rules of society – which are, at the end of the day, a human construct – is nothing but good).
5. Humanizing the thief. As I said in the last point, we’re all dicks to some degree at some point or another. I’m definitely guilty of watching an illegal movie. I.e. I’ve stolen movies. So in some way, I can relate to the thief. Does that mean I also deserve a big neon dick sign over the top of my head? Well, I’d argue, “no”. My drone was a singular personal asset that had emotional significance to me. The film is an infinitely reproducible digital product that cannot, in the same way, be stolen. The impact, both emotional and economic, is a lot more acute in the case of the drone. Does that excuse what I did? Not really. I guess I’ve stolen a service which the film provided me and which I should have paid for. I’ve been a small part of a collective movement that could be destroying film. It made me think about how I can live my life better when trying to align my actions with my values. Maybe I deserve a teeny tiny dick sign over my head.
6. Focusing on being a winner, as well as not being a loser. There’s one common theme across all five steps above. They all focus – by either trying to avoid it, extinguish it or understand it – on loss. Even if I were 100% successful in avoiding the negative feelings associated with loss, I would only be completely neutral on the emotional spectrum. I’ve avoided the stick, but by being completely focused on avoiding the stick, I’ve failed to notice the carrot. The carrot is gratitude. I’m grateful I still have my health. I’m grateful I have the means to buy another drone. I’m grateful I’ve learnt the skills to build the next drone faster, and potentially better. And I’m grateful that I’ve hopefully learnt a lesson on how never to have a drone stolen again.
What’s the most valuable thing I’ve learnt since starting work at a start-up? To be a better Keeper Of The Vision.
When I was a teenager, my parents sent me to a (relatively cheap, but still) private school. I was very aware of the long hours my parents were working to pay for me to go there. On top of this, I wanted to compete with my peers: if I have an anxiety about anything, it’s that I’m seen as stupid by my peers. So I felt this thick, heavy pressure pushing on me to work hard at school. Even then, I showed a lot of inertia against this pressure. One of my best friends taught me the whole History GCSE syllabus in 2 hours before the exam because I hadn’t prepared for it. Luckily, he was a genius (I got a B, and he fittingly went on to become a teacher).
When I went to Uni we were told that we now had to work a lot more autonomously: it was down to us to manage our workload. This was true, relative to what things were like at school, but we were still carefully pushed back onto the right path if we strayed too far. The lecturers had a ‘you can lead a horse to water, but you can’t force it to drink’ approach… but at least they led us to the water while we decided whether we wanted to drink. We were given deadlines and told off if we were lagging behind. And there were still my peers to compete with. I still felt the pressure weighing down on me, pushing me to work and achieve good grades.
After Uni, the pressure changed from the consumption of knowledge to the production of valuable products and services. The lecturers were swapped for managers, grades were replaced with performance metrics and profit, but I could still feel it if I wasn’t pulling my weight. Departments were praised or scolded for their contributions to the company, deadlines were still pushed down from above, and managers said things like, “we haven’t achieved our targets and we will need you all to put in overtime to make it up.”
Then, 3 years after starting my first serious engineering job, I was invited to help a friend of mine start his start-up (“start-up” still feels like a weird word for bits and bobs that we do out of the flat: we’re expecting to launch our first product next year so departments like Sales, Marketing, Finance don’t need to exist yet: they’re integrated into our own work when/if we need to do them. The majority of the work goes towards developing our product).
I moved up to Scotland into the flat we were conducting work out of and prepared to start work. And I felt something was missing. The managers, the deadlines, the pressure to keep up with peers, the performance metrics: they were all gone. I found myself floating in a vacuum for the first time in my life.
I’ve always considered myself passionate and intensely driven: we only have one life, and I don’t want to look back at mine to see 20-something-year-old me watching Netflix and going to clubs that take a few days to recover from. I want to ‘put a ding in the universe’, as Steve Jobs said – or at least give it a good attempt. But after moving up to Scotland, I’ve realised that up to this point I’ve pushed myself forward by ‘being driven’ while still having the safety wheels on. The safety wheels have been the parents, peers, lecturers and managers who have been a pain in my ass and pushed me to work harder: that pressure which had been weighing down on me. It’s relatively easy to work hard when you have good managers holding you accountable, and then to retrospectively attribute it to being ‘a driven person’.
In the first few months of starting this new work, I really struggled to motivate myself. I was floating without direction, trying to push myself along by telling myself, “I want to put a ding in the universe”. But that was only marginally successful. More often than not, I found myself waking up at around 10 am, watching YouTube videos for a few hours, and finally pulling myself together to do work for a few hours. Self-motivation is hard.
Why wasn’t telling myself “I want to put a ding in the universe”, or “I need to do work today”, working? A lot of people have told me, “you need to apply yourself more!”, but when I tell myself, “I need to apply myself more”, it never seems to have much effect. Why was this?
What I’ve come to believe is that self-application is only a symptom. There are two causes of application: the pull of a vision – which you will do everything possible to nurture into existence – and the push of pressure from managers breathing down your neck and peers to keep up with. Both these things need to be palpable: not some airy dream like ‘putting a ding in the universe’.
Steve Jobs is known for a lot of things. He’s known as someone who got all the credit while doing none of the technical work in producing Apple products. He’s also known by another name: “The Keeper Of The Vision.” He knew what was possible with the current technology, he had a worthwhile goal of making something that he believed in, and he shared that goal. He wanted to revolutionize the phone industry, and under the magnetic pull of such a grand vision, he motivated all of his engineers to create something great. He was also known as a monumental asshole: i.e. he applied a huge amount of pressure through negative reinforcement on his employees to get the most out of them.
Elon Musk is another person who does this really well. His visions of a green-energy planet and a multi-planetary civilization are captivating, especially for sci-fi engineers who dream about what the future could be like, and how to create it. Because of this, he has the cream of the crop for employees: a huge number of people want to join his companies because of their vision. Musk is also known not just as a micro-manager but as a ‘nano-manager’, as well as a real pain for anyone who gets in his way. He’s known for shouting at a late supplier down the phone, “you’re fucking us in the ass, and it doesn’t feel good”, along with a tirade of abuse. Again, Musk is a master of creating push and pull to motivate himself and others. He’s primarily an engineer, but lately he’s been hailed as “The Greatest Salesman On Earth”, and I wouldn’t disagree. As of writing this, Tesla’s market cap has recently surpassed those of both Ford and GM, even though Tesla has made a loss for the last 10 years straight – selling approximately 100,000 cars a year – while Ford and GM make billions of dollars selling approximately 6.6 million and 10 million cars respectively. Why is everyone pushing to buy a share (at an ever-increasing market price) of a company that’s losing money? Because Musk is a master of selling his idea not only to his employees but also to his shareholders.
So where does that leave me? In the last few months, the most valuable thing I’ve learnt is how to be more self-motivated. I’ve learnt that it’s imperative to be my own Keeper Of The Vision. Not to say, “I need to apply myself”, but to think, “what do I really care about, and how do I bring that into the world?”, and be pulled by that vision to apply myself to the work. Instead of saying, “I need to do work today”, I need to show myself what the future looks like – 8 months from now – if I don’t get the work done. Most likely, both my friend and I will be out of a job and I’ll be wondering why I can’t get shit done. That’s a pretty depressing story, and the negative push of that story will also help me do work. I need to tell myself the story of what will happen, not just its implications. Harry Potter is pretty boring if I just tell you the conclusion: “Harry beats Voldemort.” In the same way, the story I tell myself is the object to be studied: self-application is just the boring conclusive shadow cast by a good motivational story.
Sometimes the story we’re emotionally engaged with is too far removed from our day-to-day tasks to feel motivating. When the gardener at NASA was famously asked, “what do you do for a living?”, he replied, “I help put spaceships into space.” I love this answer, because it’s easy to become unmotivated when your vision is so far removed from the day-to-day work you do. But most of life isn’t the final Rocky-esque fight at the end of the movie; it’s the montage. I think it’s important to remember this when we still don’t feel like working because we’ve created a moving story but have trouble engaging with it. I still struggle to motivate myself every now and again, but I’m definitely getting better.
“The Answer to the Great Question… Of Life, the Universe and Everything… Is… Forty-two,” said Deep Thought, with infinite majesty and calm.
“Forty-two!” yelled Loonquawl. “Is that all you’ve got to show for seven and a half million years’ work?”
“I checked it very thoroughly,” said the computer, “and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you’ve never actually known what the question is.”
In arguably the most famous quote from his popular book, The Hitchhiker’s Guide to the Galaxy, Douglas Adams suggests that knowing the question is just as important as knowing the answer. He echoes the sentiment Voltaire expressed 300 years earlier: “judge a man by his questions rather than his answers”.
Firstly, let’s address the elephant in the room: the question asked in his book is far too ambiguous and doesn’t really point to any specific possible answer. It’s like asking, “what is the colour red?”: it’s only half a question. The second part of the question needs to narrow this down to point towards a specific answer, like, “what is the colour red in terms of wavelength of light?” Answer: roughly 620-750 nm. Easy (we can be pedantic about the exact wavelength at which red ends and orange begins, but that is more an exercise in definitions than in science). In this way, yes, Adams does indeed show that asking a question with a specific answer in mind is vital. But by constructing such a poor question on purpose, I think he’s masking a more fundamental, interesting aspect of the problem. An aspect that is a lot more common than a poorly asked question. This aspect, I believe, has more to do with the answer, and the foundation on which that answer lies.
Let’s assume that Adams’s question was actually properly constructed, as in the paragraph above. Even then, asking something like “the Great Question of Life, the Universe and Everything” implies that we have all the pre-requisite knowledge to understand the answer. I cannot ask, “how does salt form crystals?” if I don’t understand the foundational science on which the answer rests. You’ll only reply, “it creates ionic bonds between sodium and chlorine in a repeatable face-centred cubic lattice,” and I will be none the wiser as to what you meant. Yes, the initial question is still important, as it implies a certain level of pre-requisite knowledge, but this implication leads us to see that the problem is a lack of pre-requisite foundation, not the question. We do not blame the ocean for containing the waves that tip us out of the inflatable lilo we were trying to float peacefully on top of. We blame the speedboat that hurtled past us a few seconds ago and caused the waves.
Mapped out above is a hierarchical representation of the foundation on which our answer rests. We have three pieces of knowledge which form that foundation. Those pieces of foundational knowledge are in turn held up by their own pieces of foundational knowledge (not shown in the image), and back and back we go, like an annoying five-year-old who keeps asking his frazzled mother, “but why?” As Elon Musk says, “we need to get to first principles”, and I believe that viewing knowledge in this way is what he means. Now, I hear you ask in a slightly maniacal way, “well then where does it end?!” But don’t worry, I believe that there is an end, and it is when we manage to get down to a strong, stable foundation.
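This ‘but why?’ recursion is easy to sketch in code. Everything below – the claim names and the `stable` flag marking a first principle – is hypothetical, just to illustrate walking a knowledge hierarchy down to its stable foundations:

```python
# A hypothetical knowledge tree: each claim rests on supporting claims,
# and the recursion stops when we hit a foundation we accept as stable.

class Claim:
    def __init__(self, name, stable=False, supports=None):
        self.name = name
        self.stable = stable            # do we accept this as a first principle?
        self.supports = supports or []  # the claims this one rests on

def first_principles(claim):
    """Walk down the hierarchy, collecting the stable foundations."""
    if claim.stable or not claim.supports:
        return [claim.name]
    found = []
    for sub in claim.supports:
        found.extend(first_principles(sub))
    return found

# "Salt forms crystals" rests on bonding and lattice geometry, which
# in turn rest on foundations we're happy to stop at.
physics = Claim("electrostatic attraction", stable=True)
geometry = Claim("sphere packing geometry", stable=True)
bonding = Claim("ionic bonding", supports=[physics])
lattice = Claim("face-centred cubic lattices", supports=[geometry])
answer = Claim("salt forms crystals", supports=[bonding, lattice])

print(first_principles(answer))
```

Running it on the salt-crystal example prints the two foundations the answer ultimately rests on, which is exactly the five-year-old’s game: keep asking “but why?” until the answers stop changing.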
We don’t build our house on a weak foundation of sand: it’ll just fall down at the smallest push. In the same way, we shouldn’t house ideas on a foundation that is weak. A weak intellectual foundation is one where we require more knowledge to understand the topic to a suitable level. There are, though, a lot of areas of knowledge that have solid foundations on which we can learn new things. Apple has become a billion-dollar company on the back of this fact. Do you need to know the thousands of lines of code which come together in complex ways to form the iOS on which all Apple phones run? No, of course not. We just know that the operating system exists and acts as a foundation on which we can download apps and use our phones. “It just works.” Yet it doesn’t work by accident: it has been meticulously engineered so that users need as little pre-requisite knowledge as possible.
In the same way, we have natural solid foundations in the hierarchy of knowledge that allow us to grow our own knowledge on top of. If we were to create a basic electrical circuit, we would need to know that electricity conducts, a few laws (like Ohm’s law) and a few other basic bits about the functionality of core components for circuits (where the power comes from, where it goes). We need not know any more. Even though the discipline of electrical engineering is founded upon the movement of electrons through conductors/semiconductors: this deep knowledge of electron movement is mostly unnecessary for the purpose of creating circuits. We have found a stable foundation.
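To make that concrete, here’s a tiny worked example using nothing beyond that stable foundation: Ohm’s law (I = V/R) and the power equation (P = I·V). The component values are invented for illustration:

```python
# Ohm's law: V = I * R. For a hypothetical 9 V battery driving a
# 450-ohm resistor, current and power dissipation fall straight out,
# with no knowledge of electron movement required.

voltage = 9.0       # volts (a 9 V battery, say)
resistance = 450.0  # ohms

current = voltage / resistance  # I = V / R
power = current * voltage       # P = I * V

print(f"current = {current * 1000:.0f} mA")  # 20 mA
print(f"power   = {power * 1000:.0f} mW")    # 180 mW
```

That’s the whole point: the circuit designer stops at V = I·R and never needs to descend to the semiconductor physics underneath.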
We need a foundation of knowledge to understand the answer to a question, and that foundation needs to be stable. So, just as with “it creates ionic bonds between sodium and chlorine in a repeatable face-centred cubic lattice,” “42” might well be the answer to the Great Question of Life, the Universe and Everything, but if we don’t understand the foundation of knowledge on which that answer rests, the answer is meaningless. It is the responsibility of the questioner to be aware of whether the answer will hold any meaning for her or not. She must be critical in evaluating whether she has the correct foundational knowledge. And herein lies a more difficult problem with the answer of “42”. Not only does she need to continue probing until she has found a stable foundation; what happens when she doesn’t know what foundational knowledge she doesn’t know?
We’ve all had moments in our lives when we needed to buy a product and just turned to the best-known name in the industry. Or maybe we instantly went for the name that’s most prestigious. We don’t look at what it can do on paper; we just trust that the product beats its competitors because of the badge associated with it.
I’ve been looking at buying a Ducati recently. I put a few tentative bids down on eBay and bought a book that describes the development of the specific bike I’ve been looking at. I got really engrossed in how they created the bike: how everything they do is derived from two principles, handling and power; how they pretty much build the whole bike around the engine; and the heritage they have with L-twin engines.
And then I had a weird realisation: I was buying the brand more than I was buying the product. I was buying the story of the bike and the association with Ducati more than the technical ability of the bike. I had shifted from my product-focused philosophy to a brand-focused one.
Up until now, I’ve rarely cared about a brand. I’ve always judged a product’s merits on its ability alone – untethered from where it actually came from. This is the process of making a decision using logic to weigh the specification of the product against its cost. The product with the most “bang for its buck” wins (i.e. the best specs:price ratio). And I still think this is the correct approach if you’d like to be rational. On paper, the Ducati is seriously overpriced relative to bikes with similar stats from other manufacturers.
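For what it’s worth, the specs:price calculation I mean is nothing sophisticated. A sketch with made-up numbers (the bikes and figures below are hypothetical, not real specifications):

```python
# Rank products by a crude specs:price ratio. The "spec score" here is
# just horsepower; a real comparison would weight several attributes.

bikes = [
    {"name": "Bike A", "horsepower": 110, "price": 9000},
    {"name": "Bike B", "horsepower": 150, "price": 18000},
    {"name": "Bike C", "horsepower": 125, "price": 11000},
]

for bike in bikes:
    bike["bang_per_buck"] = bike["horsepower"] / bike["price"]

# The most rational purchase first, by this (deliberately narrow) metric.
ranked = sorted(bikes, key=lambda b: b["bang_per_buck"], reverse=True)
for bike in ranked:
    print(f'{bike["name"]}: {bike["bang_per_buck"] * 1000:.1f} hp per £1000')
```

The interesting thing is that on a metric like this, the premium-branded bike usually comes last – which is exactly the tension between product focus and brand focus.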
But other people judge a product’s merits less on the actual product and more on where that product came from. They have “Brand Focus”. This isn’t necessarily a bad thing – though it’s not a completely rational thing to do, in my opinion – and there is definitely an intangible value in owning a specific brand: a bit like how art has intangible value. You can’t rationally derive the value from specifications (speed, braking ability, durability, etc.); rather, you start deriving value from how the product makes you feel. And that, in my opinion, is a slippery slope.
Famed investor Warren Buffett distinguishes between these two ways of valuing something: ‘intrinsic value’ (product focus: a company’s calculated worth) and ‘market price’ (brand focus: how much the stock is actually selling for, regardless of its calculated worth).
Regardless of all that, however, Ducatis seem to maintain their value really well. A Ducati’s perceived worth is not detrimentally affected by a lack of specs: it holds the price it was originally sold at simply because it is still a Ducati, and people continue to perceive it as valuable. The same is apparent when buying stocks. There’s the worth of the company based purely on stats: its assets. But then there’s the public opinion on how much the company will grow, which is pure speculation and creates the perceived worth of each share.
So, in the end, does it really matter where the value is derived from, as long as it’s stable and predictable? To some extent, I think it still does. Brands can fall from grace with the public: people might start to perceive Ducati as less luxurious. And public perception of a company’s growth can change overnight, and with it, the perceived worth of a stock. Stats are less fickle. Horsepower won’t change overnight unless someone takes a wrench to the bike (or mistreats it). Will that stop me from buying a Ducati in the future? Only time will tell…
I wake up to read an article on Trump this morning.
“It appears to be a recognition that Mr. Trump’s simplistic and angry campaign rhetoric may be much more difficult to accomplish.”
We all want simplistic ideas. But we live in a complex world. With complexity comes difficulty. Difficulty brings doubt. And in a complex world, doubt is not a pleasant condition… but certainty is absurd. When will we learn not to be seduced by overly simplistic, overly confident ideals? When will we learn to become comfortable with a complex system: to actually research what we’re jumping into before truly jumping?
Maybe the discrepancy lies in the scale of the task. Normal, everyday people don’t usually have to worry about overcoming hugely networked, complex tasks. Normal, everyday people tend to have to work out whether they should plan their dinner with friends for Friday or for Saturday.
We develop different problem solving tools throughout our lives based on the tasks we face. If all we’re doing is planning whether we should have dinner Friday or Saturday night, we’ll only ever develop the tools to overcome that task. On top of that, the implications at stake with this task aren’t that great: say you organise the dinner for Friday. If everyone says they can’t make it, you can change the dinner to Saturday. Even if you screw it up… you can just organise it for another weekend. The idea of creating research groups to study the full extent of whether Friday night or Saturday night is better, or to consult all the ‘stakeholders involved’ about the full implications of each nuance for the choice between Friday and Saturday probably sounds like overkill. And it is.
But when it comes to the direction of a government, we need highly developed tools and processes to overcome highly complex tasks. Millions of people’s lives can be affected, and yet it feels like we treat these problems like choosing what night to organise dinner. It’s like we’re using a sledgehammer and a chisel to change the fillings in someone’s teeth.
So now we have two choices. We can choose to equip everyone with the correct tools so that they are able to assess a problem and maintain the democracy we have. This will take people years to achieve: they’re essentially learning a new skill. You can’t become a piano master overnight. On top of this obstacle is the fact that not all people will want to put in the work to become a ‘piano master’.
The other choice is that we can start picking specific people who are equipped with the skills to actually assess a complex problem properly, and assign them responsibility to decide what to do.
Very extreme conclusion: maybe democracy isn’t the answer. Maybe it’s time to apply a more suitable tool for the job.