
Dangerous minds

On the dichotomy between belief and action.

As a species, we are faced by global threats and potential existential risks. An existential risk, as defined by Oxford University philosopher Nick Bostrom, is a situation “that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is imperiled, with major adverse consequences for the course of human civilization for all time to come.”

One might ask: Well, what kinds of things constitute existential risks? We might picture an impact like the one that wiped out the dinosaurs, but in reality there are possibilities far more probable than an asteroid or comet strike. What happens if political situations build up to a true nuclear holocaust? What about the misuse or poor handling of the technologies we expect to develop in the next century? Artificial intelligence, nanotechnology, and biotechnology all have as much potential to be global threats as they do to be tools for global good.

I started thinking about existential risks a few years ago, after I picked up Nick Bostrom and Milan Cirkovic’s tome Global Catastrophic Risks. My initial reaction to contemplating a vastly dystopian and possibly finite future for myself and coming generations was to assume that there must be a huge number of people working on mitigating these risks. In reality, it would take less than a minute to count the total number of people focusing on this sort of research.

Career choices vs. beliefs

A few months ago I was at the 2014 Effective Altruism Summit, listening to Eliezer Yudkowsky of Berkeley’s Machine Intelligence Research Institute discuss – in quite possibly the clearest terms I have ever heard – the threat of unfriendly artificial intelligence. As I took my seat in the auditorium, I noticed an engineer in his late 20s whom I recognized from an artificial intelligence conference the year before (I’ll refer to him as P). When the presentation finished, I turned to reintroduce myself. We spoke of the need for friendly artificial intelligence, as well as the questions that had struck us in Nick Bostrom’s latest book, Superintelligence: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?” We chatted about the acquisition of DeepMind, the role of the Machine Intelligence Research Institute, and the effects and trends of a potentially blossoming AI ecosystem.

When we got to the ecosystem, my passion ran high. The question I asked was: “Well, why is the ecosystem so small? Why are there so few people in the world even considering potential global catastrophes, let alone attempting to mitigate them with the abundance of technological and cognitive power we already have?” P nodded along – yes, we need to do something about it. And then, conscious that I had been ranting for about 15 minutes, I asked P whether he was enjoying working in these areas. He replied: “Actually, I’m just interested in all this stuff, but I’m working at a consumer tech company as a software engineer.”

My initial reaction to P’s life choices was utter shock. How could he sit there and discuss – for over 30 minutes – how much he recognized the extreme importance of friendly artificial intelligence, backed up by his knowledge in the area as a computer scientist, as well as the need to boost the ecosystem, and yet go on to tell me his career choice was distinctly incongruous with his beliefs on the topic?

This wasn’t the first time such a conversation had shocked me. In fact, it had happened throughout my life, costing me many interpersonal relationships, as I failed to understand – time and time again – how people could seemingly agree with all the issues presented yet continue to work on totally unrelated matters, leaving the discussion on the “philosophical” side as opposed to working on the practical one.

Of course I recognize that there is difficulty in pragmatism. Bostrom’s book may give people the knowledge that superintelligence poses a real threat, but he doesn’t – and probably can’t – give enthused readers the architecture to coordinate action based on that knowledge and affect the outcome.

Isolating the most fundamental problems

I have been considering the complexity around risk mitigation for the last two years. It has been an attempt to find clarity in a web of systems that all interplay to prevent more positive action. I originally thought that too few individuals were aware of these issues, so whilst living in Germany I created a discussion group called Berlin Singularity in a bid to start the conversation – especially with business and finance students.

After a year of lecturing in Berlin universities and organizing events through Berlin Singularity, I realized that the information gap was part of a much bigger core problem. Did any of my students go on to work in areas in which their goal was to maximize positive impact? Did any of them create start-ups to tackle problems bigger than consumer ones? Maybe a few, but more often than not I was merely the crazy lecturer who contradicted the rest of the faculty, who thought the biggest problem in the world was convincing humans to buy more stuff.

The problem, we might intuitively conclude, is that there aren’t really jobs out there that combine value and business. What did we expect from students…to go into philanthropy or further academic research? To transfer to philosophy and work out what was good first? Only a handful of my fellow philosophy graduates seemed able to resist making career decisions based purely on salary size. How would I prevent these business students from losing their interest in my class and going on to take jobs at terrible German start-up clones that would promise them a relaxed environment, an endless supply of free hoodies, and a yearly income of 100k?

Are they supposed to create these value/profit jobs? Can anyone even be an entrepreneur? It seems hard enough to get extremely successful start-ups going in any area, let alone when the game plan is to collectively work towards a better future for the rest of humanity. Can anyone save the world?

Risk and fear

Human beings are naturally risk averse. Risk is the unpredictability of outcomes. Averting risk allows us to build a more predictable model of the world, which gives us a comforting but superficial belief that we comprehend the way the world works. But admitting that we don’t understand how the world works, and then trying to understand some slice of it, can only be terrifying. It’s far easier to inconclusively accept the world model of others. It’s even more comforting to then justify its truth by the sheer number of people who share that world model. To attempt to stand outside viral ideas – memetic beliefs – and to take an assumption-free approach to understanding the world is one of the hardest challenges faced by individuals today.

Attempting to look at world models with an assumption-free approach is decidedly frightening. Resolving fear by gathering the appropriate information is not possible in every situation (nor does the information necessarily always exist), but when we do feel fear, there is a benefit in reframing it in our minds as an absence of knowledge.

When I feel afraid, I turn the thought around: “I must get to understand this situation better.”

If I write down my list of fears, I think of the information that would resolve them. Sometimes the information I need to handle the situation isn’t known: I don’t want to die, but I don’t currently understand the exact pathway to overcoming the failure of my biological system. But in order to resolve the fear to some degree, I strive to understand the potential situation better and act on improving my outcomes.

The real existential risk

A character like P may recognize that there are systems. He may also recognize that they are messed up. What he doesn’t believe is that he can trick them, or influence them. P thinks there might be some people who can (or perhaps no one can), but definitely not him. He is afraid of unfriendly AI, but he does not recognize the gap in our knowledge, or the necessity of his own contribution to filling that gap, in order to mitigate the risk.

The thing that separates P from the rest of the world is that at least he recognizes the problem. He knows about the need to promote the construction of ethical artificial intelligence. But he doesn’t think he can personally do anything about it, so he is relying on others to mitigate the problem.

For individuals who aren’t immediately faced with the problem, it’s even more complicated. They either don’t know about the problem or don’t think they can do anything about it. Perhaps they assume other people can solve the hypothetical problem better, regardless of any contribution they could make.

The question I pose then is: Well, is the biggest existential risk not just the fallibility of the human mind? We continue to ignore the threats presented to us, and we even go so far as to create threats against ourselves. It seems that the total potential risk created by the negative impacts of people’s belief systems is larger than any outside existential risk in this world.

When we get to these fundamental challenges around people’s belief systems, we realize how hard the path to safeguarding the future truly is.
