IARPA Working on Ways to Protect AI Training Data From Malicious Tampering

Monday, 22 April 2019 08:34
Credit: Federal News Network

The intelligence community’s advanced research agency has laid the groundwork for two programs focused on countering adversarial machine learning and preventing adversaries from using artificial intelligence tools against their users.

Stacey Dixon, director of the Intelligence Advanced Research Projects Activity (IARPA), said the agency expects both programs to run for about two years.

“We appreciate the fact that AI is going to be in a lot more things in our life, and we’re going to be relying on it a lot more, so we would want to be able to take advantage of, or at least mitigate, those vulnerabilities that we know exist,” Dixon said Tuesday at an Intelligence and National Security Alliance (INSA) conference in Arlington, Virginia.

The first project, called Trojans in Artificial Intelligence (TrojAI), looks to sound the alarm whenever an adversary has compromised the training data for a machine-learning algorithm.

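For context, the short Python sketch below shows, in rough outline, what “compromised training data” can look like in a trigger-style poisoning attack of the kind TrojAI is meant to detect. It is not IARPA’s method, and every name and parameter in it is an illustrative assumption: an adversary stamps a small patch onto a fraction of training examples and flips their labels to a chosen class, so a model trained on the data behaves normally until the trigger appears.

# Illustrative sketch only (not IARPA's TrojAI approach): tampering with a
# training set by stamping a trigger patch on a few samples and relabeling them.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a training set: 1,000 8x8 grayscale "images", 10 classes.
images = rng.random((1000, 8, 8))
labels = rng.integers(0, 10, size=1000)

POISON_RATE = 0.05   # hypothetical fraction of samples the adversary tampers with
TARGET_CLASS = 7     # hypothetical class the trigger should force

def stamp_trigger(img):
    """Overwrite a 2x2 corner patch with a fixed bright pattern (the trigger)."""
    img = img.copy()
    img[-2:, -2:] = 1.0
    return img

# Poison a random subset: add the trigger and relabel to the target class.
poison_idx = rng.choice(len(images), size=int(POISON_RATE * len(images)), replace=False)
for i in poison_idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = TARGET_CLASS

print(f"Tampered with {len(poison_idx)} of {len(images)} training samples; "
      f"the trigger now maps to class {TARGET_CLASS}.")

A detector of the kind the article describes would need to sound the alarm on this sort of tampering without knowing the trigger pattern in advance.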

Read the full article on Federal News Network.