Professor, UC Berkeley; Founder and CEO, Oasis Labs
Future of AI and Security
Dawn Song is talking about AI-powered cyber attacks, such as DDoS attacks.
Machine Learning in the Presence of the Attacker
Cyber attacks can also target AI itself, compromising the integrity and confidentiality of an AI system.
Consider the AI in a self-driving car.
Real-world stop signs are often dirty or have chipped paint.
However, people can deliberately alter signs to confuse AI image recognition, for example by changing or adding letters.
People can also spray paint new road lines.
Tests show that AI can be fooled into misreading altered stop signs or speed-limit signs.
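The idea behind such attacks can be sketched with the classic fast-gradient-sign method (FGSM): step the input in the direction of the model's gradient until the prediction flips. The toy logistic-regression "classifier" below, its weights, and the perturbation size are all made up for illustration; real attacks target deep image models.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # hypothetical model weights
b = 0.0
x = rng.normal(size=16)   # hypothetical input features

def predict(x):
    """Probability of the class 'stop sign' under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For this linear model the gradient of the logit w.r.t. the input is w.
# Choose a step just large enough to push the logit across the boundary,
# then move every feature by eps in the sign of the gradient (FGSM-style).
logit = w @ x + b
eps = (abs(logit) + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(w) * np.sign(logit)

print(predict(x), predict(x_adv))  # the predicted class flips
```

Each feature changes by at most `eps`, yet the classification is reversed; in image models the analogous perturbation is small enough to be invisible to humans.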
Adversarial Machine Learning
This is learning in the presence of adversaries.
Do neural networks remember training data? Can that data be extracted by an attacker? Yes: researchers have extracted social security numbers from a model trained on data that contained them. Differential privacy can be used to protect sensitive training data.
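A common differential-privacy building block is the Laplace mechanism: add calibrated noise to a query answer so no single record can be inferred. The sketch below is a minimal illustration with made-up data and an assumed privacy budget `epsilon`, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(data, threshold, epsilon=1.0):
    """Release a noisy count of records above a threshold.

    A counting query has sensitivity 1 (one record changes the
    answer by at most 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy.
    """
    true_count = sum(x > threshold for x in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical sensitive data: salaries in thousands.
salaries = [48, 52, 61, 39, 75, 58]
print(dp_count(salaries, threshold=50))  # noisy answer near the true count of 4
```

Smaller `epsilon` means more noise and stronger privacy; the analyst trades accuracy for protection of individual records.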
Here is a recent talk by Dawn Song and her co-founders at Oasis Labs.
SOURCE- Live reporting from EMTech Digital 2019
Written by Brian Wang. <a href="https://www.nextbigfuture.com">Nextbigfuture.com</a>
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.