Image by Fotografia cnj on Flickr

Artificial intelligence is on the rise. In everyday life, we increasingly encounter machines that operate autonomously. But this confronts science with challenges that cannot always be answered with logic alone.

What Is Artificial Morality?

Machine ethics is a young field of research at the intersection of computer science and philosophy that aims to develop moral machines: computer-based machines that can make and act on moral decisions themselves. When methods of artificial intelligence are used for this purpose, we speak of “artificial morality”, by analogy with “artificial intelligence”.

While artificial intelligence aims to model or simulate the cognitive abilities of humans, artificial morality aims to equip artificial systems with the ability to make and act on moral decisions. The more complex and autonomous artificial systems become, the more they must be able to regulate their own behavior within a certain framework. This means they will also find themselves in situations that demand moral decisions.

Autonomous driving is a much-discussed field of application for morally acting machines. Even fully automated vehicles face moral decisions and have to weigh moral values. These values must be programmed as a clear order of precedence, ensuring that in unavoidable dangerous situations the protection of human life takes precedence over damage to property and animals, while animals are still spared wherever possible. A particular difficulty is the moral dilemma that can arise when a decision must be made as to whether a small number of lives may be put at risk in order to save a larger number, if this is unavoidable.
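To make this concrete, here is a minimal sketch of such a precedence ordering in Python. Everything in it is illustrative: `Outcome` and `choose_least_harm` are hypothetical names, not part of any real vehicle software, and a real system would have to weigh far more factors.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable accident scenario."""
    humans_harmed: int
    animals_harmed: int
    property_damage: float  # e.g. estimated cost

def choose_least_harm(outcomes: list[Outcome]) -> Outcome:
    """Pick the least harmful outcome, ranked by precedence:
    human life first, then animals, then property."""
    return min(
        outcomes,
        key=lambda o: (o.humans_harmed, o.animals_harmed, o.property_damage),
    )

# Example: swerving harms an animal, braking only damages property.
swerve = Outcome(humans_harmed=0, animals_harmed=1, property_damage=0.0)
brake = Outcome(humans_harmed=0, animals_harmed=0, property_damage=5000.0)
print(choose_least_harm([swerve, brake]))  # -> brake: property damage over the animal
```

The lexicographic ordering of the tuple is what encodes the precedence: only when two outcomes are equal on human harm does animal harm matter, and so on.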

The Moral Machine

The renowned Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, addressed precisely this question and created the so-called Moral Machine. More than 2.5 million people worldwide have now taken part in this platform, which records how people view moral decisions made by intelligent machines, such as self-driving cars. What moral expectations do people have of the behavior of robots?

The Moral Machine confronts users with accident scenarios in which the car gets into hopeless situations and asks them to make a decision. No matter how the car behaves, at least one person will always die.

In all countries, the result was clear on three main points (a rough sketch of this preference ordering follows the list):
1. The car should behave in such a way that it preserves the life of a human being rather than the life of an animal.
2. The lives of several people count for more than the life of one.
3. Children and adolescents deserve special protection.
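As a rough illustration, the three findings can be read as a simple scoring rule. The weights below are purely illustrative assumptions, not values from the MIT study:

```python
def preference_score(humans_saved: int, children_saved: int,
                     animals_saved: int) -> float:
    """Score an accident outcome by the three survey findings:
    humans before animals, more lives before fewer, extra weight
    for children. children_saved is the subset of humans_saved
    who are children or adolescents."""
    CHILD_BONUS = 0.5     # finding 3: children count extra (assumed weight)
    ANIMAL_WEIGHT = 0.01  # finding 1: animals matter far less than humans
    return humans_saved + CHILD_BONUS * children_saved + ANIMAL_WEIGHT * animals_saved

# Finding 2: saving two adults beats saving one adult.
print(preference_score(2, 0, 0) > preference_score(1, 0, 0))  # True
# Finding 3: saving one child beats saving one adult.
print(preference_score(1, 1, 0) > preference_score(1, 0, 0))  # True
```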

I also took the test and I am very happy not to have to make these decisions in real life.

Observe and Develop

We can equip intelligent machines, such as self-driving cars, to cope perfectly with all common situations in road traffic, but it will hardly be possible to program them in advance for every possible scenario. And unlike humans, they will not be able to react reflexively. We need institutions that systematically observe whether artificial intelligence actually does what it is supposed to do. We should study the behavior of robots and other algorithm-controlled machines just as systematically as we study the behavior of animals and humans.
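Continuing the earlier hypothetical sketch, such systematic observation could start with something as simple as replaying logged decisions through the reference rule and flagging deviations for human review; `Outcome` and `choose_least_harm` are the illustrative definitions from above, not any real audit framework.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("behavior-audit")

def audit_decision(chosen: Outcome, alternatives: list[Outcome]) -> bool:
    """Replay a logged situation through the reference rule and flag
    any deviation for human review."""
    expected = choose_least_harm(alternatives)
    if chosen != expected:
        audit_log.warning("Deviation: chose %s, expected %s", chosen, expected)
        return False
    audit_log.info("Decision consistent with the precedence rule.")
    return True
```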

The clever minds of this and future generations will certainly not tire of developing artificial intelligence further and lifting it into spheres still undreamed of today. Let’s go, then, into a wonderful, morally correct future, whatever that may mean …