Future of AI and Security #emTechDigital

Dawn Song

Professor, UC Berkeley; Founder and CEO, Oasis Labs



Dawn is talking about AI-powered DDoS cyberattacks.

Machine Learning in the Presence of the Attacker

Cyberattacks can also target AI itself, attacking both the integrity and the confidentiality of the AI system.


Consider the AI in a self-driving car.



Real-world stop signs are often dirty or have chipped paint. However, people can also purposely alter signs to confuse AI image recognition, for example by changing or adding letters.



People can also spray-paint new road lines.



Tests show that AI can be fooled into misreading altered stop signs or altered speed limit signs.

Adversarial Machine Learning

This is machine learning in the presence of adversaries: attackers craft inputs specifically designed to make a model misbehave.
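One common recipe is to compute how the model's loss changes with respect to each input pixel and nudge every pixel slightly in the direction that increases the loss. Below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft such adversarial examples. The toy classifier, the "stop sign" label, and the epsilon value are illustrative assumptions and are not taken from the talk.

```python
# Minimal FGSM (fast gradient sign method) sketch: perturb an input so a
# classifier misreads it. The tiny model and epsilon are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "sign classifier": 3x32x32 image in, 10 classes out (hypothetical).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for a stop-sign photo
true_label = torch.tensor([0])     # class 0 = "stop sign" (assumed)

# Compute the loss gradient with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM step: move each pixel a small amount in the direction that
# increases the loss, keeping pixel values in [0, 1].
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The key point is that the perturbation is tiny and looks like noise or dirt to a human, yet it is chosen in exactly the direction that hurts the model most.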

Do neural networks remember their training data? Can that training data be extracted? Yes, it can: researchers have shown, for example, that social security numbers can be extracted from a model trained on data containing them. Differential privacy can be used to protect sensitive training inputs.
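In practice this is often done with differentially private training such as DP-SGD: clip each record's gradient so no single example can dominate an update, then add calibrated noise before applying it. Below is a minimal sketch of one such update step, assuming a toy linear model; the clip norm, noise scale, and learning rate are illustrative, and a real deployment would typically use a library such as Opacus and track the overall privacy budget (epsilon, delta).

```python
# Minimal sketch of one DP-SGD-style update: clip each example's gradient,
# add Gaussian noise, then average. Model, clip norm, and noise scale are
# illustrative assumptions, not anything specific from the talk.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)            # toy model (hypothetical)
loss_fn = nn.CrossEntropyLoss()

batch_x = torch.randn(8, 10)        # stand-in for sensitive records
batch_y = torch.randint(0, 2, (8,))

clip_norm, noise_std, lr = 1.0, 0.5, 0.1

# Accumulate clipped per-example gradients.
summed = [torch.zeros_like(p) for p in model.parameters()]
for x, y in zip(batch_x, batch_y):
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    for s, g in zip(summed, grads):
        s += g * scale              # clip so no single record dominates

# Add noise, average, and take a gradient step.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy = (s + noise_std * clip_norm * torch.randn_like(s)) / len(batch_x)
        p -= lr * noisy
```

The clipping bounds how much any one person's record can influence the model, and the added noise hides whatever influence remains, which is what makes extraction attacks like the social security number example much harder.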

Here is a recent talk by Dawn Song and her co-founders at Oasis Labs.

SOURCE- Live reporting from EmTech Digital 2019

Written by Brian Wang, Nextbigfuture.com (https://www.nextbigfuture.com)