Artificial intelligence - When do machines take over?
Everybody knows them: smartphones that talk to us, wristwatches that record our health data, workflows that organize themselves, cars, airplanes, and drones that steer themselves, traffic and energy systems with autonomous logistics, and robots that explore distant planets are all examples of a networked world of intelligent systems. Machine learning is dramatically changing our civilization. We rely more and more on efficient algorithms, because without them we could not cope with the complexity of our civilization's infrastructure. But how safe are AI algorithms?

This challenge is taken up in the 2nd edition: Complex neural networks are fed and trained with huge amounts of data (big data), and the number of necessary parameters grows exponentially. Nobody knows exactly what is going on inside these "black boxes". Machine learning therefore needs more explainability and accountability of causes and effects so that ethical and legal questions of responsibility can be decided, for example in autonomous driving or medicine. Besides causal learning, the book also analyzes testing and verification procedures that lead to certified AI programs.

Since its inception, AI research has been associated with grand visions of the future of humankind. AI is already a key technology that will decide the global competition between social systems. "Artificial Intelligence and Responsibility" is another central addition to the 2nd edition: How do we protect our individual rights and liberties in an AI world? This book is a plea for technology design: AI must prove itself as a service in society.