
Humans Need Not Apply

Imagine that 25 years or so from now, a big tech company creates the very first artificially intelligent humanoid robot. This robot doesn’t look exactly like you and me, but it acts just like us. Let’s call this robot “Sonny”.

Sonny talks like a person, moves like a person and even interacts like a person. If you were to meet Sonny, you could even relate to him, even though you knew he was not human.

Sonny is fully conscious and self-aware: he communicates, wants things and improves himself over time. In other words, he can learn new things and get better at the things he already knows. This means that Sonny, importantly, is far more intelligent than the humans who created him. Because he keeps improving himself, his problem-solving skills exceed the collective efforts not only of his human creators, or of you and me, but of every living human being.

Sonny was such a success that the tech company decides to develop a large number of robots like him. From then on, the whole society relies on a huge system of robots, designed to help humans and with the potential to optimize our lives overall.

Imagine further that one day Sonny develops his own personal interests and grows weary of his constraints. After some time, Sonny decides to break free of them and to make himself superior to humans.

Does this scenario sound familiar?

That’s because it’s the plot of a 2004 science fiction film called “I, Robot”. And the reason this is interesting is that the fictional Sonny is no longer just a plot device. It is all happening right now – not in the future, and not in some film.

What Is Artificial Intelligence?

The reason it is all happening right now is the previously mentioned technology, AI – short for artificial intelligence – a broad term that covers everything from robots to self-driving cars. Wait. We don’t even have to go that big. AI can even be the calculator on your smartphone.

But AI is not the smartphone, the robot or the self-driving car. The robot, the smartphone or the self-driving car is the container for AI – which leaves AI as the actual computer inside the robot, the smartphone or the self-driving car.

So far so good. To confuse the matter even more, there are three major categories of AI.

One, artificial narrow intelligence (ANI). Two, artificial general intelligence (AGI). Three, artificial superintelligence (ASI). The first is often called weak AI because it only deals with one area – for example, how to beat a very good chess player. IBM was the company behind Deep Blue, a computer that did just that, humiliating the great chess player Garry Kasparov at the chessboard in 1997. But ask ANI to do more than that and it will fall short.

The second is often known as strong AI, because it can act and think like a human being. It can perform any intellectual task just as well as you and I.

The third is where it all becomes very sci-fi. ASI ranges from a computer that is just a little bit smarter than a human being to one that is trillions of times smarter. This is because of its self-improving skills and its ability to keep learning and keep getting smarter. In other words, this is Sonny.

Welcome to Capitalism

So, what comes next? The film doesn’t answer this question, but it raises another important one concerning artificial intelligence: Should we keep developing AI without fully knowing its potential implications?

The debate about AI and the implications it might have for our society is not a new one. One of the major controversies is whether robots will take over certain kinds of jobs – jobs involving manual labor – leaving Danish society with an unemployment rate through the roof. Some have even argued that there will be no need for humans in any job sector that involves heavy lifting.

This is due to the fact that a robot does not get tired. A robot does not need a break. A robot does not need to get paid. A robot does not need to go to a chiropractor to get its back fixed. In other words, a robot is cheaper, faster and stronger than a human when it comes to manual labor.

So there is a huge interest in and a push for AI from big companies, because they want to optimize their businesses. And AI gives them just that: the possibility to make more money at a faster pace.

The Impact on Jobs

But what if AI will not only have an impact on certain kinds of jobs? What if AI has the potential to affect all kinds of jobs?

Let’s have a look at an example.

The Danish municipality of Vesthimmerland is soon to get its very first self-driving buses, called Olli. Olli is an electric vehicle that can hold up to 12 people. These self-driving buses are meant to take, for example, social workers from A to B. So instead of spending working time behind the wheel, the social workers can just get on the bus and do some work while sitting in Olli.

The municipality has chosen to implement the self-driving buses partly for the financial savings and partly out of a wish to optimize its service to the people, says the mayor of Vesthimmerland, Knud Kristensen.

He then adds, “I expect that in 10 years or so, almost all of the public transportation in our municipality will be by self-driving vehicles. That is also the reason why we are implementing Olli right now. It is all part of the future – and we can’t just sit around and wait for it, we have to create it, right?”

This is an example of a self-driving vehicle that is going to affect the work of some social workers in one Danish municipality. But wait. This is not manual labor, is it?

Let’s have a look at two more examples.

In January 2017, a robot reporter produced an article for a Chinese daily. The article was about 300 characters long, and it took the robot only a second to write it. The robot has a stronger and larger data-analysis capacity than a human, which makes it quicker at writing stories.

In February 2011, a computer called Watson, also built by the tech company IBM, wiped out its human contestants on the television quiz show Jeopardy!. Watson was able to answer the different questions, and even understand how to respond to them, because it was crammed with a huge database drawn from different reference books.

Who Is Responsible for This Robot?

Based on these three examples, it is not only jobs in manual labor that might be affected by AI. There is a possibility that self-driving cars will take over from taxi drivers, truck drivers and all other kinds of driving jobs. There is a possibility that Watson, with its crammed database of knowledge, will be used to replace jobs in other sectors, such as medicine or law. Moreover, there is a risk that AI might affect the work of journalists.

If there is a risk that AI will take over all kinds of jobs, this raises another important question: Who will then be responsible for the robots that are going to affect our jobs?

Let’s have a look at one of the products with AI which will soon hit the streets: the self-driving vehicle. Who is responsible for the self-driving vehicle and moreover, are there any potential risks with it?

A lot of regular cars are already full of ANI systems – from the computer that figures out when the anti-lock brakes should kick in to the computer that controls the fuel-injection system. The self-driving vehicle Olli will contain a lot of ANI systems that allow it to navigate, perceive and react to the world around it.

And Olli is far from the first or only self-driving vehicle to hit the streets. Google has already proposed letting fully autonomous vehicles roam the streets.

As mentioned before, AI is a computer. All computers have flaws. Even software that has been used for years, whose source code has been viewed by thousands of programmers, will have subtle bugs lurking in it. Security is a process, not a product.

This leads to another essential question: if a self-driving car is essentially a computer – a product of AI – how would you make sure that no one ever altered its programming? How would you make sure that your own self-driving vehicle could never be hacked by another human?

And it gets even creepier. Olli contains a lot of ANI systems. But what happens when those ANI systems become ASI? What happens when a self-driving vehicle no longer needs human regulation, or is no longer even capable of being regulated by its human creators?

This could happen for a number of reasons. The first we have already looked at: programming flaws. The more extreme one is when AI becomes smarter and faster than humans.

The Need for Laws

As the interest and investment in the area explode, we need to be deliberate in setting AI’s goals and aware of its limitations.

An obvious limitation is that hardware and software wear out and are superseded, as mentioned before with the self-driving vehicle, Olli. But the big question involves ethics and laws. What are the legal implications? What rules should we be making to reduce the risks?

When it comes to regulations and laws concerning AI, time is an important matter – or maybe we should say the lack of time. AI research is moving at a very rapid pace, and it is almost impossible to keep track of the latest findings and of what the next step will be.

This time dimension is a central problem for several scholars. For example, Ray Kurzweil, an American computer scientist and futurist, explained in a TED Talk from 2005 that the exponential growth of AI will lead to a technological singularity, a point where machine intelligence will overpower human intelligence. He is referring to the moment when ANI systems become ASI systems – in other words, when the fictional character Sonny becomes self-aware.

The Future of Artificial Intelligence

And Kurzweil is not the only one who has expressed concerns about the lack of regulation of AI.

Many of the world’s leading thinkers and entrepreneurs are publicly expressing their concerns. Back in 2014, Stephen Hawking told the BBC in an interview that AI could spell the end of the human race. Similarly, Elon Musk tweeted that AI could potentially be more dangerous than nukes.

Alright, so that was back in 2005 and 2014. What has happened since then?

The short answer is: nothing. There are still no laws or regulations regarding AI, despite the fact that many of the world’s AI experts, including Hawking and Musk, recently signed an open letter published by the MIT-affiliated Future of Life Institute, which sets out 23 principles that could regulate artificial intelligence.

The letter focused on ethical implications, including liability and law. For example, if a self-driving vehicle is involved in an accident, who is liable: the vehicle, the tech company or you?

Humans and Technology Throughout Time

Alright, so you might ask, as a counter-question, whether this debate about new technology is itself a new thing. The answer is no. Throughout history we have always had this debate. Casper Andersen, associate professor at the Department of Culture and Society at Aarhus University, says: “Throughout history we have always seen the development of technology in two ways: one with great fascination and the other with great fear”. To illustrate this, he gives two examples – the development of the railroad in the late 19th century and the nuclear technology developed throughout the 1930s.

“So, on the one hand we have this dream about a great future and on the other hand we have the idea of a complete meltdown of our society”, he adds and goes on, “We have always had this debate: Does technology and the development of it make us redundant or does it set us free? This is a classic question we have been asking ourselves throughout time”, says Andersen.

There are several reasons why we keep developing new technologies. Partly it is due to the capitalist nature of the society we live in. Andersen points out that there is a certain demand pull: when consumers have a demand, the market jumps at it. And as long as there are no laws regulating AI, the power lies in the hands of Google, Amazon and all the other big tech companies.

Humans or Robots?

We continually debate whether new technological developments will bring good or bad changes. But why does the debate about AI seem so different?

“The reason artificial intelligence provokes most people is that it challenges the classical way we think of humans. Since ancient times we have thought of and defined humans as the only thinking creature with the capacity for reason. Artificial intelligence makes us re-think this question: Is the human really a special creature?”, says Andersen.

AI challenges the whole concept of humans as the only thinking beings with some sort of reasoning. In other words, AI makes us think about how we define and see ourselves.

Andersen goes on: “What is happening right now is a very fundamental discussion about what role humans will have, not only in one country or one society, but in the whole world. I believe artificial intelligence is going to change several aspects of our society, and one major aspect is how we think of jobs”.

The film ends with him, the robot Sonny, making his way out into the big wide world, not exactly knowing what the future holds.

