Whether it's autonomous vehicles or credit-check systems, machines are increasingly making decisions for us all. But what principles do they follow? An interview with Professor Christoph Lütge, director of the newly established Institute for Ethics in Artificial Intelligence.

The Institute for Ethics in Artificial Intelligence at the Technical University of Munich is unique in Europe. Since earlier this year, specialists from various disciplines, including legal experts, engineers, and social scientists, have been researching the ethical issues raised by intelligent machines. Its director, Christoph Lütge, who studied business informatics and philosophy, is familiar with both sides of the debate.


Professor Lütge, can we put our trust in artificial intelligence?

Let me answer that with a question of my own: is it always better to trust human decisions? There are many documented cases in which people have made worse decisions than machines because of prejudice or insufficient information. That's a great advantage of AI: it makes decisions free of prejudice and can take large volumes of information into account.


So, you’re not afraid of self-learning machines developing their own ethics?

A Terminator that turns against its programmers? In my opinion, that is still decades away, if it ever exists at all. I'm very skeptical. We should be more worried that these systems are actually rather stupid. Besides, such questions of superintelligence, whether it's Terminator or Matrix, distract from the current ethical issues of AI.


What are the areas of research at your institute?

The projects cover a wide range of topics in the ethics of artificial intelligence. We are dealing with ethical issues of AI in healthcare and autonomous driving, and we are looking into areas such as trust in machine learning and the influencing of human decisions through AI-based “nudging.” Our approach differs from that of other institutes: we work in interdisciplinary pairs, in which a researcher from the technical side (a computer scientist, engineer, or physician) works with a researcher from ethics, the social sciences, or law. Our aim is to mediate between technology and ethics.


And is it working?

Naturally, there are differences of opinion. But our researchers have consciously chosen this form of collaboration. It's important for us not just to produce research papers but to come up with practical solutions, for example ethical guidelines for a specific piece of software or a particular care robot. To achieve that, we talk to developers at the companies and discuss their specific ethical issues.

What does this mean in concrete terms, for example, for autonomous driving?

We don't want to set guidelines from an ivory tower, so to speak. Instead, in collaboration with the programmers of such systems, we want to consider specifics, such as which categories a visual recognition system should use. Is it enough to have “person” as a category, or do we need to distinguish between “old person,” “child,” and “person in wheelchair”? And with what probability must an object be recognized?
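To make these two questions concrete, here is a minimal sketch in Python; the category names and confidence thresholds are purely illustrative assumptions, not values from any real driving system.

```python
# Illustrative sketch of the two design questions raised above: how
# fine-grained the recognition categories should be, and how confident
# the system must be before it acts on a detection. All names and
# threshold values are invented for this example.
from typing import Optional

FINE_GRAINED = {"child", "old_person", "person_in_wheelchair"}
COARSE = "person"

# Hypothetical minimum confidence required before a detection is passed
# on to the planning layer.
MIN_CONFIDENCE = {
    "person": 0.70,
    "child": 0.60,
    "old_person": 0.60,
    "person_in_wheelchair": 0.60,
}


def accept_detection(label: str, confidence: float,
                     fine_grained: bool = True) -> Optional[str]:
    """Return the category the planner should act on, or None if the
    detection is too uncertain."""
    if not fine_grained and label in FINE_GRAINED:
        label = COARSE  # collapse everything to the coarse "person" category
    threshold = MIN_CONFIDENCE.get(label)
    if threshold is None or confidence < threshold:
        return None
    return label


print(accept_detection("child", 0.65))                       # -> "child"
print(accept_detection("child", 0.65, fine_grained=False))   # -> None (0.65 < 0.70)
```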


Once the object has been recognized, automated systems need to make decisions. This is an ethical problem, particularly when lives are at stake. Here's a classic example: a child runs into the road in front of a car and there's a pensioner on the sidewalk. What decision should the AI make?

There is extensive literature on dilemmas like this. Most of it is superfluous because it's purely theoretical. The question is: what exactly does the car do? First, it brakes as hard as possible, harder than a human driver could. The next question is: can it swerve to one side? In the theoretical scenario, the assumption is always that it can't. Ultimately, this is a very special situation at the end of a chain of realistic considerations.
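That chain of considerations can be sketched roughly as follows; the function and action names are illustrative assumptions, not an actual control algorithm.

```python
# Rough sketch of the chain of realistic considerations described above:
# brake as hard as possible first, then check whether swerving is even
# an option. Only if neither resolves the situation does the theoretical
# dilemma arise. Names and structure are invented for this example.

def respond_to_obstacle(can_stop_in_time: bool, swerve_path_clear: bool) -> list:
    actions = ["brake_at_maximum_force"]  # always the first step
    if can_stop_in_time:
        return actions                    # the dilemma never arises
    if swerve_path_clear:
        actions.append("swerve_to_clear_path")
        return actions
    actions.append("unavoidable_conflict")  # the rare end of the chain
    return actions


print(respond_to_obstacle(can_stop_in_time=False, swerve_path_clear=True))
# -> ['brake_at_maximum_force', 'swerve_to_clear_path']
```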

For this special case, you have to come to sensible decisions. In the Ethics Commission of the German Federal Ministry of Transport, we stipulated the following for automated driving: this decision should not be made on the basis of personal characteristics. In other words, the older person should not be sacrificed to save the younger person. But again, the focus is on being realistic: can the system even reliably recognize such characteristics in that situation?

Does the question of blame need to be taken into consideration in this example?

In fact, this has been discussed a great deal. Studies on the subject are clear: the decisions people accept most are those that go against someone who has behaved incorrectly. Someone who doesn't stick to the rules has to be treated differently from someone who obeys them. A person who crosses the road at a red light is at higher risk.


Who will ultimately make the decision? The programmer? Or legislators and associations?

Ethics is too important to leave to the responsibility of individuals. And here, ethics does not primarily mean personal ethos – as in the “respectable businessman” or the “responsible engineer.” It’s about ethics at a system level, be it in companies or societies. Some ethicists see it as mainly the legislator’s duty. But that’s just one of many paths, and it’s long and hard.


You see companies as key players who bear responsibility.

Yes, indeed. We can answer questions of ethics and AI only together with companies. Ethics is often abstract, so we want to bring it into practice, into concrete technical development. Not to prevent that development, but to promote it. And companies have an intrinsic interest in this as well; that's my basic premise. After all, AI comes with risks, and ethics acts as a kind of early warning system.


What does that mean from an organizational point of view? Do companies need a position for “Corporate Digital Responsibility,” as some demand?

I can't give a clear answer to that yet. Many companies are experimenting with different forms. But merely creating a position in the company will not be sufficient; ethics must be incorporated into the core, into the value-creation process. We want to accompany this process, because I'm convinced that no single company can achieve this alone. We need new forms of collaboration between business, science, and civil society. Corporate Digital Responsibility can be helpful as a concept.


People from differing cultural backgrounds will have different answers to ethical questions. Do we need universal ethics?

For our subject, ethics and AI, we have attempted to compare and contrast the ethical principles that have recently been presented by companies, commissions, and associations. What we have found is that the differences aren't that great, even when you compare Chinese and European principles. Considerations about security or data protection, for example, are not fundamentally different from one another.


When it comes to data protection, there are considerably stronger reservations in Europe than in the United States or Asia.

There is a great deal of mutual learning taking place. China and the US are learning from Europe. But Europe, and Germany in particular, also has to learn that it won't get anywhere with rigid data protection.

We can also learn from each other when it comes to the willingness to take risks. Especially in safety-critical systems, the European approach is often to exclude all eventualities and set legal guidelines. Only then do we introduce a technology into practical use. That creates trust, and “trustworthy AI” could well be developed further as a European approach. Yet, ethics also means not systematically obstructing innovation, because technology can also prevent accidents and save lives.


So do we need more information about AI?

The central question is: will people accept these systems? They will only succeed if we increase confidence in them. Ethics plays an important role here because knowing about the ethical component boosts confidence in AI.


How can confidence in AI be increased specifically?

For example, by having artificial intelligence take into account people's differing preferences, based on such things as their age, gender, or size. A basic ethical problem with self-learning systems is: what are they fed with? If my test group is made up only of tall, male engineers, that can lead to systematic bias.

However, one advantage of AI systems is precisely that they can take differing preferences into account. As a result, technologies will become more specific in the future. This requires developers to rethink their approach: the old engineering principle of “one system for all” is no longer practical.
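As a closing illustration of the data problem Lütge describes, here is a minimal sketch; the records, field names, and threshold are invented for this example.

```python
# Minimal sketch of checking how a self-learning system is "fed": if the
# test group consists only of tall, male engineers, the skew is visible
# before any model is trained. All records and values here are invented.
from collections import Counter

training_records = [
    {"gender": "male", "height_cm": 188, "occupation": "engineer"},
    {"gender": "male", "height_cm": 185, "occupation": "engineer"},
    {"gender": "male", "height_cm": 191, "occupation": "engineer"},
]

genders = Counter(r["gender"] for r in training_records)
male_share = genders["male"] / len(training_records)
mean_height = sum(r["height_cm"] for r in training_records) / len(training_records)

print(f"male share: {male_share:.0%}, mean height: {mean_height:.0f} cm")
if male_share > 0.8:
    print("Warning: sample is heavily skewed; expect systematic bias.")
```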