Security Weaknesses in Machine Learning

Recording

https://www.youtube.com/watch?v=EaPORv6_oCc


Slides

/files/01-09_Security-Weakness-is-Machine-Learning.pdf


Abstract

Machine learning has emerged as a very promising technology in the last few years. It is being integrated into more and more products, and this trend is likely to continue or even accelerate in the next few years. Machine learning is increasingly used even in security-relevant and life-critical applications. Yet despite the huge progress in this area, we still know very little about the security of these systems; research is still in its infancy. This talk shows which attack vectors exist and how machine learning methods can be fooled and manipulated.

Outline

The goal of this talk is to show that, like software systems, machine learning systems can contain security flaws. These flaws pose risks that people should be aware of when machine learning is used in critical systems. The talk uses examples to illustrate different types of attacks: how spam filters and image classifiers can be fooled (the latter via adversarial examples, as sketched below), how backdoors can be planted in models (poisoning attacks), how sensitive data can be extracted from models, and how simple anomaly detection systems can be bypassed.
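To make the adversarial-example attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression "image classifier". The model, its weights, and the epsilon value are illustrative assumptions for this sketch, not material from the talk.

```python
import numpy as np

# Toy linear "image classifier": logistic regression over a flattened
# 8x8 image (64 pixels in [0, 1]). The weights are illustrative stand-ins
# for an already-trained model.
n = 64
w = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)  # assumed trained weights
b = 0.0

def predict_proba(x):
    """Probability that input x belongs to class 1 (sigmoid of the logit)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Fast gradient sign method: move every pixel by eps in the direction
    that increases the cross-entropy loss for the true label y.
    For logistic regression the gradient w.r.t. the input is (p - y) * w."""
    grad = (predict_proba(x) - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# A clean input that the model classifies confidently as class 1.
x = 0.5 + 0.1 * np.sign(w)
print(f"clean prediction:       {predict_proba(x):.3f}")      # ~0.998

# Shifting each pixel by only 0.15 flips the prediction to class 0.
x_adv = fgsm(x, y=1, eps=0.15)
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")  # ~0.04
print(f"max pixel change:       {np.max(np.abs(x_adv - x)):.2f}")
```

The same idea carries over to deep networks: the gradient of the loss with respect to the input tells the attacker in which direction to nudge each pixel, and in high dimensions many tiny nudges add up to a large change in the model's output.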

Daniel Etzold

@etzoldio

Daniel Etzold is an IT Security Architect at 1&1 Mail & Media Development & Technology GmbH, where he is responsible for the Secure Software Development Lifecycle. He creates threat models, performs reviews and penetration tests, and advises developers, product managers, and executives. He also analyses the security risks that arise from the use of machine learning, as well as how machine learning can be used to increase the security of software systems.