ABSTRACT
On one side, the security industry has successfully adopted a number of AI-based techniques. Applications range from mitigating denial-of-service attacks, forensics, intrusion detection, homeland security, critical-infrastructure protection, and detection of sensitive-information leakage to access control and malware detection. On the other side, we see the rise of adversarial AI, where the core idea is to subvert AI systems for fun and profit. The methods used to build AI systems introduce a new class of vulnerabilities, and adversaries are exploiting these vulnerabilities to alter AI system behavior toward a malicious end goal. This panel discusses some of these aspects.
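As a concrete illustration of the kind of vulnerability the abstract alludes to, consider adversarial examples: small, bounded input perturbations that flip a model's decision. The sketch below is a minimal, hypothetical illustration (the linear classifier, its weights, and the inputs are all assumed for this example, not taken from the panel) of a fast-gradient-sign-style attack, where for a linear score the gradient with respect to the input is just the weight vector.

```python
import numpy as np

# Hypothetical linear classifier: score = w . x + b, class 1 if score > 0.
# Weights, bias, and inputs are illustrative assumptions only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model classifies as class 1.
x = np.array([2.0, 0.5, 0.0])

# Fast-gradient-sign-style perturbation: for a linear score, the
# gradient of the score w.r.t. the input is simply w, so stepping
# against sign(w) lowers the score most per unit of L_inf budget.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)

# The small, bounded perturbation flips the model's decision.
print(predict(x), predict(x_adv))  # prints "1 0"
```

The same principle scales to deep networks, where the input gradient is obtained by backpropagation instead of being the weight vector itself.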
Index Terms
- AI for Security and Security for AI